On Sun, 7 Mar 2004 18:54:12 -0500
"Dean Sheppard" <dean(dot)sheppard(at)rogers(dot)com> threw this fish to the penguins:
> I have a Postgres table which contains 30 million records. It is a simple flat table with 4 columns. Each column has an index and I select from this table using a single column at any time.
> Here's the problem. A simple select statement takes up to 30 seconds to respond; if I add in connection pooling and 15-20 concurrent requests, the responses are now 15-20 minutes!! I have no idea what the tuning options may be, but I need to get these queries running in 30 seconds or so.
For questions like this, please state the exact version of postgres,
the platform, and OS. Show the table definition, i.e. do (in psql):
\d mybigtable
and the output of "explain analyze" on the query, i.e. do:
explain analyze select x,y,z from mybigtable where x='foo' and y>33;
for whatever the slow query is. Don't worry about the heavily loaded case
yet, just work on the isolated query for now.
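As a sketch of what to look for (hypothetical table and column names): if the plan says "Seq Scan" rather than "Index Scan" on a 30-million-row table, the index isn't being used, and that alone explains 30-second queries. One classic cause on versions before 8.0 is a type mismatch between the literal and the column:

```sql
-- Hypothetical: if "id" is bigint (int8), pre-8.0 planners treat a bare
-- integer literal as int4 and refuse the index, falling back to Seq Scan:
explain analyze select * from mybigtable where id = 42;

-- Quoting or casting the literal lets the planner match the index:
explain analyze select * from mybigtable where id = 42::int8;
```

Compare the "actual time" figures in the two plans; the difference on a table this size is usually dramatic.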
And mention the locale/encoding if any conditions involve text comparisons.
For good measure, you might include output from:
select * from pg_stats where tablename='mybigtable';
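If the full row is too wide to read, the columns that matter most for this kind of problem are n_distinct (how selective the planner thinks each index is) and correlation (values near 1.0 or -1.0 mean the table is physically ordered by that column, making index scans much cheaper). A trimmed query, using the hypothetical table name from above:

```sql
select attname, n_distinct, correlation, most_common_vals
  from pg_stats
 where tablename = 'mybigtable';
```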
Hmm, be SURE you've done "vacuum analyze" on the db before expecting
good performance. Just do (at the shell prompt, not in psql):
vacuumdb --analyze mybigdb
Vacuum may take a while but it's worth it. You should vacuum periodically.
There's an autovacuum available that has less impact on your system if that
is a problem. The --analyze is only necessary when/if your distribution
of key values changes significantly.
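If you'd rather not remember to run it by hand, a cron entry works fine (hypothetical database name and schedule; adjust to suit your load):

```
# crontab entry: vacuum and analyze mybigdb nightly at 3am
0 3 * * * vacuumdb --analyze mybigdb
```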
Also, if you're using the default postgresql.conf file, note that the stock memory settings are far too small for a 30-million-row table and will need tuning.
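The settings that usually matter first are the buffer cache and the planner's idea of how much the OS is caching. A hedged sketch, assuming roughly 1GB of RAM (values are illustrative, not recommendations; names and units vary by version, so check the docs for yours):

```
# postgresql.conf -- illustrative values for ~1GB RAM
shared_buffers = 10000          # shared buffer cache, in 8KB pages (~80MB)
effective_cache_size = 65536    # planner's estimate of OS file cache, 8KB pages (~512MB)
sort_mem = 8192                 # per-sort memory in KB (7.x name; work_mem from 8.0)
```

Remember to restart the postmaster after changing shared_buffers, since it's allocated at startup.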
-- George Young
"Are the gods not just?" "Oh no, child.
What would become of us if they were?" (CSL)