Dave Cramer <pg(at)fastcrypt(dot)com> writes:
> I was just looking at the config parameters, and you have the shared buffers
> set to 60k, and the effective cache set to 1k ????
I was actually going to suggest that the performance degradation might be
because of an excessively high shared_buffers setting. That was before I saw
this message.
The only reason I can imagine for the performance degradation is that more and
more CPU time is being spent traversing the 2Q LRU buffer lists.
I would try it with a shared_buffers setting of 10k to see whether it levels
out sooner at a higher TPM.
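Concretely, that would look something like the following in postgresql.conf (a sketch per the suggestion above; note that shared_buffers is counted in 8 kB buffer pages, so 10k pages is roughly 80 MB):

```ini
# postgresql.conf -- sketch of the suggested change
shared_buffers = 10000    # 10k pages of 8 kB each (~80 MB), down from 60k
```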
I would also suggest setting checkpoint_timeout to something more realistic.
All your 60m tests that show a single checkpoint in the middle are misleading,
since half the data in the test hasn't even been checkpointed. You should have
enough checkpoints in your test that their cost is fairly represented in the
results.
If you want 60m to be a reasonably representative sample then I would suggest
a checkpoint_timeout of 300-600 (ie, checkpoints every 5-10m) so you get 6-12
checkpoints in the result, and so that only the last 5-10 minutes of data
(roughly 8-17% of the run) is left un-checkpointed at the end of the test.
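A quick back-of-the-envelope check of those numbers (a sketch; only the 60-minute run length and the 300-600s timeout range come from the discussion above):

```python
# For a 60-minute run, count checkpoints and the worst-case fraction of the
# run whose data is still un-checkpointed when the test ends (the final,
# not-yet-completed checkpoint interval).
run_s = 60 * 60  # 60-minute benchmark run

for timeout_s in (300, 600):
    checkpoints = run_s // timeout_s
    unflushed_pct = 100.0 * timeout_s / run_s
    print(f"checkpoint_timeout={timeout_s}s -> {checkpoints} checkpoints, "
          f"up to {unflushed_pct:.0f}% of the run un-checkpointed")
# -> 12 checkpoints / ~8% at 300s, 6 checkpoints / ~17% at 600s
```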
That would also make those huge performance dropouts a little less dramatic.
And it might give us a chance to see how effective the bgwriter is at
smoothing them out. Personally, as a user, I think it's more important to look
at the maximum transaction latency than the average throughput.
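To illustrate that last point, here is a hypothetical sketch (the latency figures are made up, not taken from the test in question) showing how a few checkpoint stalls barely register in average throughput but dominate the maximum transaction latency:

```python
# 1000 serial transactions: most complete in 5 ms, but 10 stall for 2 s
# behind a checkpoint. Latencies are illustrative, in milliseconds.
latencies_ms = [5] * 990 + [2000] * 10

total_time_s = sum(latencies_ms) / 1000.0
avg_tps = len(latencies_ms) / total_time_s
max_latency_ms = max(latencies_ms)

print(f"avg throughput: {avg_tps:.0f} tps")     # still looks steady
print(f"max latency:    {max_latency_ms} ms")   # 400x the typical latency
```

The average throughput number smooths the dropouts away; only the per-transaction latency distribution shows what a user actually experiences during a checkpoint.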