|From:||Marina Polyakova <m(dot)polyakova(at)postgrespro(dot)ru>|
|To:||Teodor Sigaev <teodor(at)sigaev(dot)ru>, Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>|
|Subject:||Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors|
On 29-03-2018 22:39, Fabien COELHO wrote:
>> The conception of the max-retry option seems strange to me: if the number of
>> retries reaches the max-retry option, then we just increment the counter of
>> failed transactions and try again (possibly, with a different random
Then the client starts another script, but by chance (or because of the number
of scripts) it can be the same one.
>> At the end we should distinguish the number of error transactions from
>> failed transactions; to find this difference the documentation suggests
>> rerunning pgbench with debugging on.
If I understood you correctly, this difference is the total number of
retries, and this is included in all reports.
>> Maybe I didn't catch the idea, but it seems to me that max-tries should be
>> removed. On a transaction serialization or deadlock error pgbench
>> should increment the counter of failed transactions, reset the conditional
>> stack, variables, etc. (but not the random generator) and then start a new
>> transaction from the first line of the script.
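For illustration, the retry behavior being discussed could be modeled roughly as follows. This is a minimal Python sketch, not the patch's actual C code; the names `run_client`, `max_tries`, and the stats keys are assumptions made here for clarity. The key point is that on a retry the client state (modeled by the saved RNG state) is restored, so the same transaction is attempted again, while giving up after `max_tries` counts the transaction as failed:

```python
import random

def run_client(scripts, max_tries, seed):
    """Model of one client running one transaction with bounded retries.

    A script is a callable taking the RNG and returning True on commit,
    False on a serialization/deadlock error."""
    rng = random.Random(seed)
    stats = {"committed": 0, "failed": 0, "retries": 0}
    script = rng.choice(scripts)
    state = rng.getstate()          # remember RNG state so retries are identical
    tries = 0
    while True:
        tries += 1
        ok = script(rng)
        if ok:
            stats["committed"] += 1
        elif tries < max_tries:
            # Retry the same transaction: restore the random generator
            # state (standing in for resetting variables and the
            # conditional stack) so the retry repeats the same choices.
            rng.setstate(state)
            stats["retries"] += 1
            continue
        else:
            stats["failed"] += 1    # give up after max_tries attempts
        return stats
```

Under this model the "failed transactions" counter and the "retries" counter are exactly the two numbers the reports distinguish.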
When I sent the first version of the patch there were only rollbacks,
and the idea of retrying failed transactions was approved (see , ,
, ). And thank you, I have fixed the patch to reset the client
variables in case of errors too, and not only in case of retries (see
attached; it is based on the commit
> ISTM that the idea is that the client application should give
> up at some point and report an error to the end user, a kind of
> "timeout" on trying, and that max-retry would implement this logic of
> giving up: the transaction which was intended, represented by a given
> initial random generator state, could not be committed even after
> some iterations.
> Maybe the max retry should be expressed in time rather than a
> number of attempts, or both approaches could be implemented? But there
> is a logic of retrying the same (try again what the client wanted) vs
> retrying something different (another client need is served).
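The two kinds of limit Fabien mentions could be combined in a single predicate. A hypothetical sketch follows; the names `should_retry` and `max_retry_time` are assumptions for illustration, not options from the patch:

```python
import time

def should_retry(tries, start_time, max_tries, max_retry_time):
    """Retry only while both limits hold: a cap on the number of
    attempts and a cap on the total time spent on this transaction.
    Either limit may be None to disable it."""
    if max_tries is not None and tries >= max_tries:
        return False
    if max_retry_time is not None and time.monotonic() - start_time >= max_retry_time:
        return False
    return True
```

With both limits set, the client gives up on whichever is reached first, which preserves the attempt-count semantics while bounding the wall-clock time spent retrying.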
I'm afraid that we would have a problem in debugging mode: should we
report a failure (which will be retried) or an error (which will not be
retried)? Because only after executing the following script commands (to
roll back this transaction block) will we know how much time we spent on
the execution of the current script.
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company