From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>
Cc: PostgreSQL Developers <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: pgbench regression test failure
Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr> writes:
> By definition, parallelism induces non-determinism. When I put 2 seconds,
> the intention was to get a non-empty trace with a one-second
> aggregation. I would rather run a longer test than allow an
> empty file: the point is to check that something is generated, though
> keeping the test short is also desirable. So I would suggest sticking
> with between 1 and 3 seconds, and if that fails, then maybe adding one second...
That's a losing game. You can't ever guarantee that N seconds is
enough on slow, heavily loaded machines, and cranking up N just
penalizes developers who are testing under normal circumstances.
I have a serious, serious dislike for tests that seem to work until
they're run on a heavily loaded machine. So unless there is some
reason why pgbench is *guaranteed* to run at least one transaction
per thread, I'd rather the test not assume that.
I would not necessarily object to doing something in the code that
would guarantee that, though.
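
[Editor's note: a minimal sketch of the kind of in-code guarantee suggested above — hypothetical, not pgbench's actual implementation. Each thread's benchmark loop runs until the time limit expires, but always completes at least one transaction first, so even a machine too slow to finish a transaction within the window still produces a non-empty trace. The function and parameter names here are illustrative only.]

```python
import time

def run_thread(duration_s, run_transaction):
    """Time-bounded benchmark loop that guarantees at least one
    completed transaction, even if the first transaction alone
    overruns the duration (hypothetical sketch, not pgbench code)."""
    deadline = time.monotonic() + duration_s
    completed = 0
    # Keep looping while we're under the deadline, but never exit
    # with zero completed transactions: a heavily loaded machine
    # still emits at least one log entry for the test to check.
    while completed == 0 or time.monotonic() < deadline:
        run_transaction()
        completed += 1
    return completed

# Simulate a heavily loaded machine: each "transaction" takes longer
# than the whole benchmark window, yet the thread still logs one.
n = run_thread(0.01, lambda: time.sleep(0.05))
assert n >= 1
```

With a guarantee like this in the code, the regression test could assert a non-empty trace without depending on machine speed.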
regards, tom lane