
Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: Stephane Bailliez <sbailliez(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)
Date: 2008-07-20 23:12:28
Message-ID:
Lists: pgsql-performance
On Sat, 19 Jul 2008, Stephane Bailliez wrote:

> OS is Ubuntu 7.10 x86_64 running  2.6.22-14

Note that I've had some issues with the desktop Ubuntu kernel giving slower 
results in tests like this than the same kernel release using the stock 
kernel parameters.  I haven't had a chance yet to see how the server Ubuntu 
kernel fits into that, or exactly what the desktop one is doing wrong. 
Could be worse--if you were running 8.04 I expect your pgbench results 
would be downright awful.

> data is on xfs noatime

While XFS has some interesting characteristics, make sure you're 
comfortable with the potential issues of the journaling approach that 
filesystem uses.  With ext3 you can choose the somewhat risky writeback 
behavior or not; with XFS you're stuck with it as far as I know.  A somewhat 
one-sided intro here is at

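For comparison, ext3's journal mode is selected at mount time; a hedged sketch follows, where the device name and mount point are placeholders rather than anything from this thread:

```shell
# ext3 journal data modes, chosen per mount; data=writeback is the
# faster-but-riskier choice referred to above, data=ordered the safer default.
# /dev/sdb1 and /data are placeholder names.
mount -o noatime,data=ordered   /dev/sdb1 /data   # safer default
mount -o noatime,data=writeback /dev/sdb1 /data   # riskier, often faster
```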
> postgresql 8.2.9 with data and xlog as mentioned above

There are so many known performance issues in 8.2 that are improved in 8.3 
that I'd suggest you really should be considering it for a new install at 
this point.

> Script running over scaling factor 1 to 1000 and running 3 times pgbench with 
> "pgbench -t 2000 -c 8 -S pgbench"

In general, you'll want to use a couple of clients per CPU core for 
pgbench tests to get a true look at the scalability.  Unfortunately, the 
way the pgbench client runs means that it tends to top out at 20 or 30 
thousand TPS on read-only tests no matter how many cores you have around. 
But you may find operations where peak throughput comes at closer to 32 
clients here rather than just 8.
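To make the client-count sweep concrete, something like the loop below could be run against an already-initialized pgbench database; the specific counts and database name here are assumptions, not commands from the thread:

```shell
# Sweep client counts to see where read-only throughput peaks.
# Assumes a "pgbench" database was already initialized with pgbench -i.
for c in 8 16 32; do
    echo "=== $c clients ==="
    pgbench -t 2000 -c "$c" -S pgbench
done
```

This needs a running PostgreSQL server, so it is a sketch of the procedure rather than something runnable standalone.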

> It's a bit limited and will try to do a much much longer run and increase the 
> # of tests and calculate mean and stddev as I have a pretty large variation 
> for the 3 runs sometimes (typically for the scaling factor at 1000, the runs 
> are respectively 1952, 940, 3162)  so the graph is pretty ugly.

This is kind of a futile exercise and I wouldn't go crazy trying to 
analyze things here.  Having been through that many times, I predict you'll 
discover no real value in a more statistically intense analysis.  It's not 
like sampling at more points makes the variation go away, or that the 
variation itself has some meaning worth analyzing.  Really the goal of 
pgbench tests should be to look at a general trend.  Looking at your data for 
example, I'd say the main useful observation to draw from your tests is 
that performance is steady, then drops off sharply once the database itself 
exceeds 10GB, which is a fairly positive statement that you're getting 
something out of most of the 16GB of RAM in the server during this test.
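For rough context on why the knee shows up near scale 1000 (these are my back-of-the-envelope figures, not numbers from the thread): each pgbench scale unit is 100,000 accounts rows, on the order of 15 MB on disk, so:

```shell
# Back-of-the-envelope pgbench sizing; ~15 MB per scale unit is an
# assumption, not a measured figure from this thread.
scale=1000
echo "approx $(( scale * 15 / 1000 )) GB at scale $scale"
# prints: approx 15 GB at scale 1000
```

That puts scale 1000 right around the 16GB of RAM in this server, consistent with the drop-off once the database no longer fits.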

As far as the rest of your results go, Luke's comment that you may need 
more than one process to truly see the upper limit of your disk 
performance is right on target.  The most useful commentary on that issue 
I'd recommend is near the end of

(man does that need to be a smaller URL)
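A sketch of that multi-process approach, with the paths, file size, and stream count as placeholder assumptions:

```shell
# Run several dd writers concurrently so the aggregate load, not a single
# stream, is what hits the disk array.  Each dd reports its own rate on
# completion; sum them for the aggregate.  /tmp paths and 64MB files are
# illustrative only.
for i in 1 2 3 4; do
  dd if=/dev/zero of=/tmp/ddtest.$i bs=1M count=64 conv=fdatasync &
done
wait
```

Remove the /tmp/ddtest.* files afterwards; for a real throughput test you'd want files much larger than RAM so caching doesn't inflate the numbers.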

* Greg Smith gsmith(at)gregsmith(dot)com Baltimore, MD
