On Tue, Feb 8, 2011 at 3:23 PM, Shaun Thomas <sthomas(at)peak6(dot)com> wrote:
> With 300k rows, count(*) isn't a good test, really. That's just on the edge
> of big-enough that it could be > 1-second to fetch from the disk controller,
1 second, you say? Excellent, sign me up.
70 seconds is way out of bounds.
I don't want a more efficient query to test with; I want the shitty query
that performs badly, because it isolates an obvious problem.
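For what it's worth, a plan with buffer statistics usually says more than wall-clock time alone about where those 70 seconds go. A minimal sketch from psql, assuming the 300k-row table is called `my_table` (a hypothetical name, not one from this thread); `EXPLAIN (ANALYZE, BUFFERS)` is available from PostgreSQL 9.0 on:

```sql
-- Time the raw query from psql
\timing on

-- Show whether the time is spent on disk reads or buffer hits
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM my_table;
```

A large "read" count relative to "shared hit" in the buffers line would point at the I/O path rather than the planner.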
> The default settings are not going to cut it for a database of your size,
> with the volume you say it's getting.
Not to mention the map-reduce jobs I'm hammering it with all night :)
But I did pause those until this is solved.
> But you need to put in those kernel parameters I suggested. And I know this
> sucks, but you also have to raise your shared_buffers and possibly your
> work_mem and then restart the DB. But this time, pg_ctl to invoke a fast
> stop, and then use the init script in /etc/init.d to restart it.
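The sequence above might look roughly like the following. This is a sketch only: the exact kernel parameters from earlier in the thread aren't quoted here, so the sysctl values, memory settings, data directory, and init-script path below are illustrative assumptions, not the thread's actual numbers:

```shell
# Sketch only -- all values and paths are illustrative assumptions.

# Allow a larger System V shared memory segment, which pre-9.3
# PostgreSQL needs before shared_buffers can be raised.
sudo sysctl -w kernel.shmmax=4294967296   # 4 GB; size to your RAM
sudo sysctl -w kernel.shmall=1048576      # in pages (4 kB pages here)

# Then raise the memory settings in postgresql.conf, e.g.:
#   shared_buffers = 1GB
#   work_mem = 32MB

# Fast shutdown via pg_ctl, then restart through the init script.
sudo -u postgres pg_ctl -D /var/lib/pgsql/data -m fast stop
sudo /etc/init.d/postgresql start
```

A fast stop (`-m fast`) disconnects clients and rolls back open transactions rather than waiting for them, which is why it's suggested over the default smart mode here.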
I'm getting another Slicehost slice. Hopefully I can clone the whole thing
over without doing a full install, and go screw around with it there.
It's a fairly complicated install, even with buildout doing most of the work.