From: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>
To: Marina Polyakova <m(dot)polyakova(at)postgrespro(dot)ru>
Cc: PostgreSQL Developers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: WIP Patch: Pgbench Serialization and deadlock errors
Date: 2017-07-03 12:59:55
Message-ID: alpine.DEB.2.20.1707031430380.15247@lancre
Lists: pgsql-hackers
>>>> The number of retries and maybe failures should be counted, maybe with
>>>> some adjustable maximum, as suggested.
>>>
>>> If we fix the maximum number of attempts, the maximum number of failures
>>> for one script execution is bounded above by
>>> number_of_transactions_in_script * maximum_number_of_attempts. Do you
>>> think we should also add an option to limit this number further?
>>
>> Probably not. I think that there should be a configurable maximum number
>> of retries per transaction, which may be 0 by default if we want to be
>> upward compatible with the current behavior, or maybe something else.
>
> I propose the option --max-attempts-number=NUM, where NUM cannot be less
> than 1. I propose it because I think that, for example,
> --max-attempts-number=100 is better than --max-retries-number=99. And maybe
> it's better to set its default value to 1 too, because retrying shell
> commands can produce new errors.
Personally, I like counting retries because it also counts the number of
times the transaction actually failed for some reason. But this is a
marginal preference, and one can be switched to the other easily.
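
To make the accounting concrete, here is a minimal sketch of the counting
I have in mind. This is illustrative only, not actual pgbench code: the
option name follows the proposal above, and try_transaction() is an
assumed helper standing in for one execution of the script's transaction.

/*
 * Sketch of attempts vs. retries accounting. max_attempts corresponds
 * to the proposed --max-attempts-number=NUM (NUM >= 1); with the
 * default of 1 there are no retries, matching the current behavior,
 * and --max-attempts-number=100 allows up to 99 retries.
 */
#include <stdbool.h>

extern bool try_transaction(void);  /* hypothetical: one attempt, returns
                                     * false on serialization/deadlock error */

static int  max_attempts = 1;       /* --max-attempts-number default */
static long total_retries = 0;      /* re-executions actually performed */
static long total_failures = 0;     /* transactions given up on */

static void
run_transaction(void)
{
    for (int attempt = 1; attempt <= max_attempts; attempt++)
    {
        if (try_transaction())
            return;                 /* success */
        if (attempt < max_attempts)
            total_retries++;        /* a retry will follow */
    }
    total_failures++;               /* all attempts exhausted */
}

Under this scheme the two countings are trivially interchangeable:
retries = attempts - 1 for any given transaction.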
--
Fabien.