Good to see you producing results again.
On Sat, 2008-12-20 at 16:54 -0800, Mark Wong wrote:
> Here are links to how the throughput changes when increasing shared_buffers:
The only strange thing here is the result at 22528MB; it's the only
normal-looking one there. A freeze seems to occur on most tests around the
30-minute mark, which delays many backends and reduces writes.
Reduction in performance as shared_buffers increases looks normal.
Increase wal_buffers, but look for something else as well. Try to get a
backtrace from when the lock-up happens. It may not even be Postgres.
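For grabbing that backtrace, a minimal sketch of the usual approach: attach
gdb non-interactively to the stalled backend. This assumes gdb is installed
and you have permission to attach; the PID shown is hypothetical, taken from
`ps aux | grep postgres` during the stall.

```shell
# Build the gdb command for a given backend PID (PID here is illustrative).
bt_cmd() {
  # --batch: run the given -ex commands and exit; bt: print the stack trace
  printf 'gdb --batch -ex bt -p %s\n' "$1"
}

# During the freeze, run the printed command as root or the postgres user:
bt_cmd 12345
```

If the stack shows the backend stuck inside the kernel rather than in
PostgreSQL code, that would support the "may not be Postgres" theory.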
> And another series of tests to show how throughput changes when
> checkpoint_segments are increased:
> The links go to a graphical summary and raw data. Note that the
> maximum theoretical throughput at this scale factor is approximately
> 12000 notpm.
> My first glance tells me that the system performance is quite
> erratic when increasing the shared_buffers. I'm also not sure what to
> gather from increasing the checkpoint_segments. Is it simply that the
> more checkpoint segments you have, the more time the database spends
> fsyncing when at a checkpoint?
I would ignore the checkpoint_segment tests because you aren't using a
realistic value of shared_buffers. I doubt any such effect is noticeable
when you use a realistic value determined from the 505 set of tests.
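In postgresql.conf terms, the suggestion amounts to something like the
sketch below. The values are purely illustrative, not tuned recommendations
from these results; pick shared_buffers from whichever setting in the
earlier sweep behaved best.

```
# postgresql.conf sketch; values are illustrative, not recommendations
shared_buffers = 4GB        # a "realistic" value taken from the sweep results
wal_buffers = 16MB          # raised well above the 8.3 default of 64kB
checkpoint_segments = 64    # only meaningful once shared_buffers is realistic
```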
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support