I need to optimize a database used by approximately 10 people. I don't need
the perfect config, just to avoid obvious bottlenecks and follow
the best practices...
The database is used from a web interface the whole work day with
"normal" requests (nothing very special).
And each morning the big tables are DELETEd and all the data is INSERTed anew
by a script. (Well, "huge" is very relative; it's only 400'000 records.)
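I wonder if the reload script should use TRUNCATE and COPY instead of
DELETE and row-by-row INSERT. Something like this sketch (the table and
file names here are made up, just to illustrate the idea):

```sql
-- Sketch only: "big_table" and the file path are hypothetical.
BEGIN;
TRUNCATE big_table;      -- reclaims the space immediately, unlike DELETE
COPY big_table FROM '/path/to/export.csv' CSV;  -- bulk load in one pass
ANALYZE big_table;       -- refresh planner statistics after the reload
COMMIT;
```

As I understand it, TRUNCATE also avoids leaving 400'000 dead rows behind
for VACUUM to clean up, but I'd appreciate confirmation.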
For now, we have only planned a VACUUM ANALYZE each night.
But PostgreSQL complained in its log about checkpoint_segments (currently set to 3).
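From what I've read, these are the postgresql.conf settings people usually
raise first; the values below are only illustrative starting points I've
seen suggested for a small dedicated server, not measured recommendations:

```
# postgresql.conf -- illustrative values only
checkpoint_segments  = 16     # fewer forced checkpoints during the bulk load
shared_buffers       = 256MB  # often suggested as ~25% of RAM on a dedicated box
effective_cache_size = 1GB    # hint to the planner about the OS cache size
work_mem             = 16MB   # per-sort/hash memory for the "normal" queries
```

Is that roughly the right order of priority?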
What should be changed first to improve speed?
* memory?
Thanks a lot for any advice. (I know there are plenty of archived
discussions on this subject, but it's always difficult to know what's very
important and what's general, as opposed to specific solutions.)
Have a nice day !