From: Andrey Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Dean Rasheed <dean(dot)a(dot)rasheed(at)gmail(dot)com>, Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>, Michael Paquier <michael(at)paquier(dot)xyz>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, pgsql-hackers(at)postgresql(dot)org, Paul Ramsey <pramsey(at)cleverelephant(dot)ca>
Subject: Re: [PATCH] random_normal function
On 1/19/23 11:01, Tom Lane wrote:
> Andrey Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru> writes:
>> On 1/9/23 23:52, Tom Lane wrote:
>>> BTW, if this does bring the probability of failure down to the
>>> one-in-a-billion range, I think we could also nuke the whole
>>> "ignore:" business, simplifying pg_regress and allowing the
>>> random test to be run in parallel with others.
>> We have used pg_sleep() to interrupt a query at a certain execution
>> phase. But on some platforms, especially in containers, query execution
>> time varies so widely that the pg_sleep() timeout needed to remove the
>> dependency on execution time became unacceptably long. So the "ignore"
>> option was the best choice.
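For illustration only (this fragment is invented, not taken from any actual regression test), the timing-dependent pattern being described looks roughly like this: pg_sleep() holds the query in the executor so a timeout is expected to fire, and the test is only stable if the sleep comfortably outlasts the query's variable setup time:

```sql
-- Hypothetical sketch of a timing-dependent timeout test.
SET statement_timeout = '100ms';
-- pg_sleep() keeps the query running long enough for the timeout to
-- fire. On a slow or containerized host, surrounding work can itself
-- approach the timeout, so the error may arrive at an unexpected point
-- and the expected output no longer matches.
SELECT pg_sleep(1);  -- expected to fail with a statement-timeout error
RESET statement_timeout;
```

Making the sleep long enough to be safe everywhere is what drives the total timeout "unacceptable", hence the historical use of "ignore:".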
> But does such a test have any actual value? If your test infrastructure
> ignores the result, what makes you think you'd notice if the test did
> indeed detect a problem?
Yes, it still has value for catching SEGFAULTs and assertion failures,
which may be frequent in timeout handling because of its logical
complexity.
> I think "ignore:" was a kluge we put in twenty-plus years ago when our
> testing standards were a lot lower, and it's way past time we got
> rid of it.
Ok, I will try to invent an alternative way to test timeouts deeply (and
stably). Thank you for the answer.