
Re: Postgres benchmarking with pgbench

From: "ml(at)bortal(dot)de" <ml(at)bortal(dot)de>
To: Greg Smith <gsmith(at)gregsmith(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Postgres benchmarking with pgbench
Date: 2009-03-19 21:25:40
Message-ID:
Lists: pgsql-performance
Hi Greg,

thanks a lot for your hints. I changed my config and switched from RAID 6 to
RAID 10, but whatever I do, the benchmark breaks down at scaling factor 75,
where the database is "only" 1126 MB.

Here are my benchmark results (scaling factor, DB size in MB, TPS) using:
   pgbench -S -c X -t 1000 -U pgsql -d benchmark -h MYHOST

 scale   size (MB)    TPS
     1         19    8600
     5         79    8743
    10        154    8774
    20        303    8479
    30        453    8775
    40        602    8093
    50        752    6334
    75       1126    3881
   150       2247    2297
   200       2994     701
   250       3742     656
   300       4489     596
   400       5984     552
   500       7479     513
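For reference, the sweep above can be reproduced with a small shell loop like
the sketch below. The host, user, and database names are the ones from the
command line above; CLIENTS is a placeholder, since the "-c X" value behind
the table isn't stated. The loop only prints the commands; drop the printf
wrapper to run them against a live server.

```shell
#!/bin/sh
# Sketch: generate the pgbench command sequence for the sweep above.
# CLIENTS is a placeholder -- the mail varies "-c X" and does not say
# which value produced the table.
CLIENTS=8   # placeholder value

gen_sweep() {
    for scale in 1 5 10 20 30 40 50 75 150 200 250 300 400 500; do
        # re-initialize at this scaling factor, then run the select-only test
        printf 'pgbench -i -s %s -U pgsql -d benchmark -h MYHOST\n' "$scale"
        printf 'pgbench -S -c %s -t 1000 -U pgsql -d benchmark -h MYHOST\n' "$CLIENTS"
    done
}

gen_sweep
```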

I have no idea whether these numbers are any good for a quad-core Intel(R)
Xeon(R) CPU E5320 @ 1.86GHz with 4GB RAM and 6 SATA disks (7200rpm) in RAID 10.

Here is my config (maybe with some odd settings):

I played around with:
- max_connections
- shared_buffers
- work_mem
- maintenance_work_mem
- checkpoint_segments
- effective_cache_size

...but whatever I do, the graph looks the same. Any hints or tips on what my
config should look like? Or are these results even okay? Maybe I am driving
myself crazy for nothing?
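For what it's worth, a commonly suggested starting point for the parameters
listed above on a 4GB machine would look something like the fragment below.
The values are illustrative rules of thumb, not the poster's actual settings:

```
# postgresql.conf -- illustrative starting values for a 4GB RAM box;
# NOT the poster's actual config
max_connections = 100
shared_buffers = 1GB            # roughly 25% of RAM
work_mem = 16MB                 # per sort/hash operation, per backend
maintenance_work_mem = 256MB
checkpoint_segments = 30        # the default of 3 is far too low
effective_cache_size = 3GB      # estimate of OS cache + shared_buffers
```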


Greg Smith wrote:
> On Mon, 16 Mar 2009, ml(at)bortal(dot)de wrote:
>> Any idea why my performance colapses at 2GB Database size?
> pgbench results follow a general curve I outlined at 
> and the spot where performance drops hard depends on how big of a 
> working set of data you can hold in RAM.  (That shows a select-only 
> test, which is why the results are so much higher than yours; all the 
> tests work similarly as far as the curve they trace.)
> In your case, you've got shared_buffers=1GB, but the rest of the RAM 
> in the server isn't so useful to you because you've got 
> checkpoint_segments set to the default of 3.  That means your system 
> is continuously doing small checkpoints (check your database log 
> files, you'll see what I mean), which keeps things from ever really 
> using much RAM before everything has to get forced to disk.
> Increase checkpoint_segments to at least 30, and bump your 
> transactions/client to at least 10,000 while you're at it--the 32000 
> transactions you're doing right now aren't nearly enough to get good 
> results from pgbench, 320K is in the right ballpark.  That might be 
> enough to push your TPS fall-off a bit closer to 4GB, and you'll 
> certainly get more useful results out of such a longer test.  I'd 
> suggest adding in scaling factors of 25, 50, and 150, those should let 
> you see the standard pgbench curve more clearly.
> On this topic:  I'm actually doing a talk introducing pgbench use at 
> tonight's meeting of the Baltimore/Washington PUG, if any readers of 
> this list are in the area it should be informative: 
> and 
> for directions.
> -- 
> * Greg Smith gsmith(at)gregsmith(dot)com Baltimore, MD
