On Tue, Feb 3, 2009 at 10:54 AM, Jeff <threshar(at)torgo(dot)978(dot)org> wrote:
> Now, moving into reality, I compiled 8.3.latest and gave it a whirl. Running
> against a software RAID-1 of the two X25-Es, I got the following pgbench
> results. (Note config tweaks: work_mem => 4MB, shared_buffers => 1GB; I
> should probably have tweaked checkpoint_segments too, as it was emitting
> lots of notices about that, but I didn't.)
You may find you get better numbers with a lower shared_buffers value,
and definitely try cranking up the number of checkpoint_segments to
something in the 50 to 100 range.
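Something like this in postgresql.conf is roughly where I'd start on
that class of hardware (the values are just starting points to test
against, not gospel):

    shared_buffers = 512MB       # try several values up and down from here
    checkpoint_segments = 64     # should also quiet the checkpoint notices
    work_mem = 4MB               # same as in your test run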
> (multiple runs, avg tps)
> Scalefactor 50, 10 clients: 1700tps
> At that point I realized write caching on the drives was ON, so I turned
> it off:
> Scalefactor 50, 10 clients: 900tps
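(Side note for the archives: on Linux with SATA drives, toggling the
on-drive write cache is usually done with hdparm; the device name below
is just an example.)

    hdparm -W 0 /dev/sda    # turn the drive's write cache off
    hdparm -W 1 /dev/sda    # turn it back on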
> At scalefactor 50 the dataset fits well within memory, so I scaled it up.
> Scalefactor 1500, 10 clients: 420tps
> While some of us have arrays that can smash those numbers, that is crazy
> impressive for a plain old mirror pair. I also did not do much tweaking of
> PG itself.
On a scale factor of 100, my 12-disk array of 15k.5 Seagate SAS drives
on an Areca controller gets somewhere in the 2800 to 3200 tps range on
sustained tests for anywhere from 8 to 32 or so concurrent clients. I
see a similar performance falloff as I increase the test db's scaling
factor.
But for a pair of disks in a mirror with no caching controller, that's
impressive. I've already told my boss our next servers will likely
have Intel's SSDs in them.
> While I'm in the testing mood, are there some other tests folks would like
> me to try out?
How about varying the number of clients with a static scale factor?
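Something along these lines (untested sketch; adjust -t and the
database name to taste) would sweep client counts at one scale factor:

    pgbench -i -s 100 bench
    for c in 1 2 4 8 16 32 64; do
        pgbench -c $c -t 10000 bench
    done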
--
When fascism comes to America, it will be the intolerant selling
fascism as diversity.