Mark Wong <markw(at)osdl(dot)org> writes:
> I have some initial results using 8.0beta5 with our OLTP workload.
> throughput: 4076.97
Do people really only look at the "throughput" numbers? Looking at those
graphs, it seems that while most of the OLTP transactions are fulfilled in
sub-second response times, there are still significant numbers that take as
much as 30s to fulfil.
Is this just a consequence of the type of queries being tested and the data
distribution? Or is Postgres handling queries that could run consistently fast
but for some reason generating large latencies sometimes?
I'm concerned because in my experience with web sites, once the database
responds slowly for even a small fraction of the requests, the web server
falls behind in handling http requests and a catastrophic failure builds.
It seems to me that reporting the maximum, or at least the 95% confidence
interval (95% of queries executed between 50ms-20s), would be more useful
than an average alone.
Personally I would be happier with an average of 200ms but an interval of
100-300ms than an average of 100ms but an interval of 50ms-20s. Consistency
can be more important than sheer speed.
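To illustrate the point, here is a small sketch (in Python, with made-up
latency numbers, not actual DBT2 output) showing how a mean can look healthy
while a 95th-percentile or maximum report exposes a long tail of slow
queries:

```python
# Sketch: a mean latency can hide a long tail that percentile/max
# reporting would expose. All numbers below are hypothetical.
import random
import statistics

random.seed(42)

# Mostly-fast workload with a slow tail: ~95% of queries around 100 ms,
# ~5% stalling between 1 s and 20 s (invented for illustration).
latencies_ms = (
    [random.gauss(100, 20) for _ in range(950)]
    + [random.uniform(1000, 20000) for _ in range(50)]
)

mean = statistics.mean(latencies_ms)
# statistics.quantiles with n=100 returns 99 cut points;
# index 94 is the 95th percentile.
p95 = statistics.quantiles(latencies_ms, n=100)[94]
worst = max(latencies_ms)

print(f"mean latency:     {mean:8.0f} ms")
print(f"95th percentile:  {p95:8.0f} ms")
print(f"maximum latency:  {worst:8.0f} ms")
```

The mean alone suggests a modest slowdown, but the maximum (and a wider
percentile band) reveals the multi-second stalls that would back up a web
server's request queue.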