Re: [HACKERS] pgbench regression test failure

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>, Steve Singer <steve(at)ssinger(dot)info>, PostgreSQL Developers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] pgbench regression test failure
Date: 2017-11-21 20:28:34
Message-ID: 19483.1511296114@sss.pgh.pa.us
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Mon, Nov 20, 2017 at 1:40 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> I dunno, it just looks odd to me that when we've set up a test case in
>> which every one of the transactions is guaranteed to exceed the latency
>> limit, it doesn't say that they all did. I don't particularly buy
>> your assumption that the percentages should sum. Anybody else have an
>> opinion there?

> I agree with you, but I don't think either approach is free from
> possible confusion. I think it would help to show the numerator and
> the denominator explicitly, e.g.:

> number of clients: 1
> number of threads: 1
> number of transactions per client: 100
> number of transactions actually processed: 33/100
> number of transactions skipped: 67 (67.000 %)
> number of transactions above the 1.0 ms latency limit: 33 (33 of 33, 100.000 %)

> (My proposed change is in the last line.)

Seems like a good idea, but the way you've written it is inconsistent
with the "n/m" notation used just above. I'd suggest

... latency limit: 33 (33/33, 100.000 %)

or just

... latency limit: 33/33 (100.000 %)
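
For concreteness, here is a minimal standalone C sketch of how that second
form could be printed. The variable names, and the assumption that the
limit is stored in microseconds, are mine rather than from any actual
patch; pgbench itself would use its INT64_FORMAT macro rather than PRId64:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        /* Hypothetical values matching the example output above. */
        int64_t latency_limit = 1000;   /* latency limit in usec (1.0 ms) */
        int64_t latency_late = 33;      /* transactions over the limit */
        int64_t ntx = 33;               /* transactions actually processed */

        if (latency_limit > 0)
            printf("number of transactions above the %.1f ms latency limit: "
                   "%" PRId64 "/%" PRId64 " (%.3f %%)\n",
                   latency_limit / 1000.0, latency_late, ntx,
                   ntx > 0 ? 100.0 * latency_late / ntx : 0.0);
        return 0;
    }

This prints "number of transactions above the 1.0 ms latency limit:
33/33 (100.000 %)", i.e. the count, the divisor (the non-skipped
transactions), and the percentage at a glance.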

regards, tom lane
