On Mon, 16 Mar 2009, ml(at)bortal(dot)de wrote:
> Any idea why my performance colapses at 2GB Database size?
pgbench results follow a general curve I've outlined before: the spot where
performance drops hard depends on how big a working set of data you can
hold in RAM. (That write-up shows a select-only test, which is why its
results are so much higher than yours, but all the tests trace a similar
curve.)
In your case, you've got shared_buffers=1GB, but the rest of the RAM in
the server isn't doing you much good because you've got checkpoint_segments
set to the default of 3. That means your system is continuously doing
small checkpoints (check your database log files and you'll see what I
mean), which keeps things from ever really using much RAM before
everything is forced to disk.
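To make the above concrete, here's a minimal sketch of the change and the log check; the file paths are hypothetical and depend on your installation:

```shell
# Raise checkpoint_segments from the default of 3 (8.x-era setting).
# Paths below are examples; adjust PGDATA and config location to match your setup.
echo "checkpoint_segments = 30" >> /var/lib/postgresql/data/postgresql.conf

# checkpoint_segments only needs a reload, not a restart:
pg_ctl reload -D /var/lib/postgresql/data

# If checkpoints were too frequent, the server log will have been warning you:
grep "checkpoints are occurring too frequently" /var/lib/postgresql/data/pg_log/*.log
```

The "checkpoints are occurring too frequently" warnings are what you should see disappearing from the logs once the setting takes effect.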
Increase checkpoint_segments to at least 30, and bump your
transactions/client to at least 10,000 while you're at it--the 32,000
total transactions you're running right now aren't nearly enough to get
good results from pgbench; 320K is in the right ballpark. That might be
enough to push your TPS fall-off a bit closer to 4GB, and you'll certainly
get more useful results out of a longer test like that. I'd also suggest
adding scaling factors of 25, 50, and 150; those should let you see the
standard pgbench curve more clearly.
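A sketch of what such a run might look like, assuming a test database named "pgbench" and 8 clients (both are illustrative, not taken from your setup):

```shell
# Sweep several scaling factors; -i rebuilds the test tables at each scale.
# 8 clients x 10,000 transactions/client = 80K transactions per run,
# well above the 32K total that's too short to be meaningful.
for s in 25 50 100 150; do
    pgbench -i -s $s pgbench
    pgbench -c 8 -t 10000 pgbench
done
```

Plotting TPS against scaling factor from a sweep like this is what traces out the curve, with the knee landing wherever the working set outgrows RAM.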
On this topic: I'm giving a talk introducing pgbench use at tonight's
meeting of the Baltimore/Washington PUG; if any readers of this list are
in the area, it should be informative. See http://omniti.com/is/here for
directions.
* Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD