> (I tried many values, this seems to work well for us - 12GB RAM)
800MB for sort_mem? Are you sure you typed that correctly? You must be
counting on not having a lot of concurrent queries. It sure will speed up
index updating, though!
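For scale: sort_mem (in the 7.x series) is set in kilobytes and is allocated per sort operation, not per server, so the worst case is roughly sort_mem times the number of concurrent sorts. A sketch of the arithmetic, assuming (hypothetically) 10 concurrent queries each running one big sort:

    # postgresql.conf -- illustrative values, not a recommendation
    # sort_mem is in KB and applies per sort operation.
    sort_mem = 819200     # 800MB: 10 concurrent sorts could try to grab
                          # ~8GB of a 12GB box -- usually far too high
    #sort_mem = 65536     # 64MB is a more typical ceiling for a busy server

You can also raise sort_mem per-session (SET sort_mem = ...) just for the index builds, which gets the big-sort benefit without exposing every query to it.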
I think you might do well to experiment with the commit_delay and
commit_siblings settings in order to get more efficient batch processing
of updates while selects are going on. I would also suggest increasing
checkpoint_segments as much as your disk space will allow; I know one
reporting database I run that does batch loads is using 128 (which is about
2GB of WAL segment files, at 16MB apiece).
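A sketch of what that might look like in postgresql.conf; the numbers here are illustrative starting points, not tuned values:

    # Group commits: if at least 5 other transactions are active, wait
    # up to 10ms so several commits can share one WAL flush.
    commit_delay = 10000        # microseconds
    commit_siblings = 5

    # More WAL segments between checkpoints means less checkpoint I/O
    # during batch loads.  Each segment file is 16MB, and up to
    # 2 * checkpoint_segments + 1 files can accumulate, so 128 here can
    # mean roughly 4GB of pg_xlog on disk at peak.
    checkpoint_segments = 128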
What have you set max_fsm_relations and max_fsm_pages to? The latter should
be very high for you, on the order of 10,000,000.
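For reference, the free space map has to be large enough to remember every page in which VACUUM finds reusable space, or the tables bloat no matter how often you vacuum. The shared-memory cost is small, a few bytes per page slot, so a sketch like this is cheap even at that size:

    # postgresql.conf -- FSM sizing sketch
    max_fsm_relations = 1000      # every table and index with free space
    max_fsm_pages = 10000000      # ~6 bytes per slot, so on the order of
                                  # 60MB of shared memory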
For that matter, what *version* of PostgreSQL are you running?
Also, make sure that your tables get vacuumed regularly.
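The usual way to guarantee that is a cron job running vacuumdb; a minimal sketch (the install path and the 3am schedule are assumptions, adjust to your site):

    # crontab entry: vacuum and re-analyze the whole cluster nightly
    0 3 * * *  /usr/local/pgsql/bin/vacuumdb --all --analyze --quiet

On a high-update database you may want the busiest tables vacuumed much more often than nightly, since the FSM only helps if VACUUM runs before too much dead space piles up.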
> Any help/suggestions would be greatly appreciated... Even if it's
> something like "you need a faster db box, there's no other way" ;-)
Well, a battery-backed RAID controller with a fast cache would certainly help.
You'll also be glad to know that a *lot* of the improvements in the upcoming
PostgreSQL 7.5 are aimed at giving better performance on large, high-activity
databases like yours.
Aglio Database Solutions
In response to: Marinos J. Yannikos, "optimization ideas for frequent, large(ish) updates in frequently accessed DB?" (pgsql-performance, 2004-02-13 00:58:34)
Followed by: Manfred Koizar, "Re: optimization ideas for frequent, large(ish) updates in frequently accessed DB?" (2004-02-13 09:26:16)