Joshua Marsh <icub3d(at)gmail(dot)com> writes:
> shared_buffers = 1000 # min 16, at least max_connections*2, 8KB each
This is on the small side for an 8G machine. I'd try 10000 or so.
> sort_mem = 4096000
Yikes. You do realize you just said that *each sort operation* can use 4G?
(Actually, it's probably overflowing internally; I dunno what amount of
sort space you are really ending up with but it could be small.) Try
something saner, maybe in the 10 to 100MB range.
> vacuum_mem = 1024000
This is probably excessive as well.
> #max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each
> #max_fsm_relations = 1000 # min 100, ~50 bytes each
You will need to bump these up a good deal to avoid database bloat.
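Putting those suggestions together, a postgresql.conf along these lines
might be a sane starting point (the numbers are illustrative, not tuned
for your workload; note sort_mem and vacuum_mem are in kilobytes):

	shared_buffers = 10000		# ~80MB at 8KB per buffer
	sort_mem = 65536		# 64MB per sort operation
	vacuum_mem = 65536		# 64MB for VACUUM to remember dead tuples
	max_fsm_pages = 1000000		# depends on your update/delete volume
	max_fsm_relations = 2000

VACUUM VERBOSE reports how much free space it found; use that to size
max_fsm_pages for real rather than guessing.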
> Occasionally, because we store data from several sources, we will have
> requests for data from several sources. We simply intersect the
> view_of_data table with a sources table that lists which acctnos belong
> to which source. This query would look something like this:
> SELECT acctno FROM view_of_data WHERE has_name AND is_active_member
> AND state = 'OH' INTERSECT SELECT acctno FROM sources_data WHERE
> source = 175;
IMHO you need to rethink your table layout. There is simply no way that
that query is going to be fast. Adding a source column to view_of_data
would work much better.
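For instance, something like this (untested; assumes acctno is the join
key and sources_data has one row per acctno):

	ALTER TABLE view_of_data ADD COLUMN source integer;
	UPDATE view_of_data v SET source = s.source
		FROM sources_data s WHERE v.acctno = s.acctno;
	CREATE INDEX view_of_data_source_idx ON view_of_data (source);

After that the query is just another WHERE clause on one table,
with no second scan and no INTERSECT step.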
If you're not in a position to redo the tables, you might try it as a
SELECT acctno FROM view_of_data JOIN sources_data USING (acctno)
WHERE has_name AND is_active_member AND state = 'OH'
AND source = 175;
but I'm not really sure if that will be better or not.
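Comparing EXPLAIN ANALYZE output for the two forms would settle it,
for example:

	EXPLAIN ANALYZE
	SELECT acctno FROM view_of_data JOIN sources_data USING (acctno)
	WHERE has_name AND is_active_member AND state = 'OH'
		AND source = 175;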
regards, tom lane
Posted to the pgsql-performance mailing list.