From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrey Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>
Cc: Dean Rasheed <dean(dot)a(dot)rasheed(at)gmail(dot)com>, Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>, Michael Paquier <michael(at)paquier(dot)xyz>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, pgsql-hackers(at)postgresql(dot)org, Paul Ramsey <pramsey(at)cleverelephant(dot)ca>
Subject: Re: [PATCH] random_normal function
Andrey Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru> writes:
> On 1/9/23 23:52, Tom Lane wrote:
>> BTW, if this does bring the probability of failure down to the
>> one-in-a-billion range, I think we could also nuke the whole
>> "ignore:" business, simplifying pg_regress and allowing the
>> random test to be run in parallel with others.
> We have used the pg_sleep() function to interrupt a query at a certain
> execution phase. But on some platforms, especially in containers, query
> execution time can vary so widely that the pg_sleep() timeout required
> to remove the dependency on query execution time became unacceptably
> long. So, the "ignore" option was the best choice.
But does such a test have any actual value? If your test infrastructure
ignores the result, what makes you think you'd notice if the test did
indeed detect a problem?
I think "ignore:" was a kluge we put in twenty-plus years ago when our
testing standards were a lot lower, and it's way past time we got
rid of it.
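For context on the "one-in-a-billion" figure quoted above, a back-of-the-envelope sketch (the per-run failure probability below is an assumed number for illustration, not a value measured from the actual test):

```python
# Assumed spurious-failure probability of a single run of a
# statistical test; the real value depends on the test's tolerances.
p = 1e-3

# If a flaky-looking failure is retried k independent times, the test
# only reports failure when every attempt fails: probability p**k.
k = 3
combined = p ** k
print(combined)  # roughly 1e-9, i.e. about one in a billion
```

The point being that driving the false-failure rate that low makes it reasonable to treat any failure as a real problem, rather than ignoring the result entirely.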
regards, tom lane