
Re: Postgres Benchmark Results

From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "PFC" <lists(at)peufeu(dot)com>
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Postgres Benchmark Results
Date: 2007-05-22 08:16:56
Message-ID:
Lists: pgsql-performance
What's interesting here is that on a couple of metrics the green curve is
actually *better* until it takes that nosedive at 500 MB. It's obviously not
better on average hits/s, the most visible metric, but on deviation and
worst-case hits/s it's actually doing better.

Note that while the average hits/s between 100 and 500 MB is over 600 tps for
Postgres, there is a consistent smattering of plot points spread all the way
down to 200 tps, well below the 400-500 tps that MySQL is getting.

Some of those are undoubtedly caused by things like checkpoints and vacuum
runs. Hopefully the improvements that are already in the pipeline will reduce
them.
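
For anyone who wants to smooth those dips out in the meantime, the knobs to
look at are the checkpoint, background-writer, and cost-based vacuum delay
settings. A sketch of the relevant postgresql.conf lines on 8.2 (values are
illustrative, not recommendations):

    # Allow more WAL between checkpoints so they happen less often
    checkpoint_segments = 16        # default 3
    checkpoint_timeout = 15min      # default 5min

    # Let the background writer trickle dirty pages out ahead of checkpoints
    bgwriter_delay = 200ms
    bgwriter_lru_percent = 2.0

    # Throttle vacuum so it doesn't saturate the I/O the benchmark needs
    vacuum_cost_delay = 20ms

The in-pipeline improvement is presumably the load-distributed checkpoint
work queued up for 8.3, which adds checkpoint_completion_target to spread
the checkpoint writes over the checkpoint interval instead of issuing them
in one burst.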

I mention this only to try to shift some of the focus from average
performance to removing the pitfalls that affect 1-10% of transactions and
screw the worst-case performance. In practical terms it's the worst case
that governs perceptions, not the average case.
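
To put a number on that, here is a minimal Python sketch (hypothetical
latencies, not data from this benchmark) showing how stalling just 2% of
transactions barely moves the mean while dominating the worst case:

    # 98% of transactions finish in 5 ms; 2% stall behind a checkpoint.
    latencies = [5.0] * 980 + [500.0] * 20
    latencies.sort()

    mean = sum(latencies) / len(latencies)
    p99 = latencies[int(0.99 * len(latencies))]

    print("mean  = %.1f ms" % mean)            # 14.9 ms -- looks healthy
    print("p99   = %.1f ms" % p99)             # 500.0 ms -- what users feel
    print("worst = %.1f ms" % latencies[-1])   # 500.0 ms

The mean says everything is fine; the 99th percentile is where the
perception problem lives.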

  Gregory Stark
