What's interesting here is that on a couple of metrics the green curve is
actually *better* until it takes that nosedive at 500 MB. Obviously it's not
better on average hits/s, the most obvious metric, but on deviation and
worst-case hits/s it's actually doing better.
Note that while the average hits/s between 100 MB and 500 MB is over 600 tps
for Postgres, there is a consistent smattering of plot points spread all the
way down to 200 tps, well below the 400-500 tps that MySQL is getting.
Some of those are undoubtedly caused by things like checkpoints and vacuum
runs. Hopefully the improvements that are already in the pipeline will reduce
those.
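Since the stalls line up with checkpoint and vacuum activity, one common
mitigation (a sketch of my own, not something from the benchmark itself, and
the values are illustrative) is to smooth checkpoint I/O and throttle vacuum
in postgresql.conf; note that checkpoint_completion_target only exists from
8.3 onward:

```
# postgresql.conf fragment -- illustrative values, tune for your hardware.

# Spread checkpoint writes out over the checkpoint interval instead of
# dumping them all at once (available from PostgreSQL 8.3 on):
checkpoint_segments = 16
checkpoint_completion_target = 0.7

# Throttle vacuum so it doesn't monopolize I/O during the run:
vacuum_cost_delay = 10
autovacuum = on
```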
I mention this only to try to move some of the focus from the average
performance to removing the pitfalls that affect 1-10% of transactions
and screw the worst-case performance. In practical terms it's the worst case
that governs perceptions, not the average case.
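To make the average-versus-worst-case point concrete, here is a minimal
sketch (the numbers are made up to echo the shape of the plot, not taken from
the benchmark) that summarizes two hypothetical throughput traces:

```python
import statistics

def tps_summary(samples):
    """Average, deviation, and worst-case of per-second throughput samples."""
    return {
        "avg": statistics.mean(samples),
        "stdev": statistics.pstdev(samples),
        "worst": min(samples),
    }

# Hypothetical traces: one server holds a steady 450 tps; the other
# averages higher but stalls to 200 tps for ~5% of the run.
steady = [450] * 100
spiky = [640] * 95 + [200] * 5

print(tps_summary(steady))  # better worst-case, zero deviation
print(tps_summary(spiky))   # better average, ugly tail
```

The spiky trace averages 618 tps against the steady server's 450, yet bottoms
out at 200: exactly the kind of trace where the mean hides the pitfall that
users actually perceive.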