From: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>
To: Teodor Sigaev <teodor(at)sigaev(dot)ru>
Cc: Marina Polyakova <m(dot)polyakova(at)postgrespro(dot)ru>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
> The conception of the max-retry option seems strange to me. If the number
> of retries reaches the max-retry option, then we just increment the counter
> of failed transactions and try again (possibly with different random
> numbers). At the end we should distinguish the number of error transactions
> and the number of failed transactions; to find this difference the
> documentation suggests rerunning pgbench with debugging on.
> Maybe I didn't catch the idea, but it seems to me max-tries should be
> removed. On a transaction serialization or deadlock error pgbench should
> increment the counter of failed transactions, reset the conditional stack,
> variables, etc. but not the random generator, and then start a new
> transaction from the first line of the script.
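
For concreteness, here is a minimal C sketch (illustrative only, not pgbench
code) of the behaviour suggested above: on a serialization or deadlock error
the failure is just counted, the per-client state is reset but the random
generator is not, and a new transaction starts from the top of the script.
The Client fields, run_script() and next_random() are hypothetical names
used only for illustration.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct Client
{
    uint64_t rng_state;   /* keeps advancing; deliberately NOT reset */
    int      cond_depth;  /* conditional stack depth, reset on failure */
    long     committed;
    long     failed;
} Client;

/* toy PRNG standing in for the per-client generator */
static uint64_t
next_random(Client *c)
{
    c->rng_state = c->rng_state * 6364136223846793005ULL
        + 1442695040888963407ULL;
    return c->rng_state;
}

/* stand-in for executing the whole script once; pretend that roughly
 * one transaction in eight hits a serialization/deadlock error */
static bool
run_script(Client *c)
{
    return (next_random(c) % 8) != 0;
}

static void
reset_client_state(Client *c)
{
    c->cond_depth = 0;    /* conditional stack, variables, ... */
    /* c->rng_state is left alone: the next transaction gets new values */
}

int
main(void)
{
    Client c = { .rng_state = 42 };

    for (long i = 0; i < 10000; i++)
    {
        if (run_script(&c))
            c.committed++;
        else
        {
            c.failed++;             /* just count the failed transaction */
            reset_client_state(&c); /* ... and start a fresh one */
        }
    }
    printf("committed=%ld failed=%ld\n", c.committed, c.failed);
    return 0;
}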
ISTM that the idea is that the client application should give up at some
point and report an error to the end user, a kind of "timeout" on retrying,
and that max-retry would implement this logic of giving up: the transaction
that was intended, represented by a given initial random generator state,
could not be committed even after some number of attempts.
Maybe the retry limit should rather be expressed in time than in number of
attempts, or both approaches could be implemented. But there is a difference
in logic between retrying the same transaction (try again what the client
wanted) and retrying something different (another client need is served).
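
For comparison, here is a similar sketch (again illustrative, not the
patch's actual implementation) of the give-up logic described above: the
client snapshots its random generator state when a transaction is first
attempted, restores it before every retry so that the same intended
transaction is replayed, and counts the transaction as failed after
max_tries attempts. A time budget could be checked in the same loop instead
of, or in addition to, a number of attempts. try_transaction(), max_tries
and the counters are hypothetical names.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t script_rng = 42;  /* per-client generator driving the script */
static uint64_t env_rng = 7;      /* independent source simulating concurrency */

static uint64_t
lcg(uint64_t *s)
{
    *s = *s * 6364136223846793005ULL + 1442695040888963407ULL;
    return *s;
}

/* stand-in for one attempt at the scripted transaction; the script's
 * random value is replayed on retry, but whether a serialization or
 * deadlock error happens depends on concurrent clients, simulated here
 * by an independent generator (roughly one failure in four) */
static bool
try_transaction(void)
{
    uint64_t script_value = lcg(&script_rng);

    (void) script_value;          /* would pick account ids, amounts, ... */
    return (lcg(&env_rng) % 4) != 0;
}

int
main(void)
{
    const int max_tries = 3;      /* the proposed give-up threshold */
    long committed = 0, failed = 0, errors = 0;

    for (long i = 0; i < 10000; i++)
    {
        uint64_t tx_state = script_rng; /* state defining "this" transaction */
        bool     done = false;

        for (int attempt = 1; attempt <= max_tries && !done; attempt++)
        {
            script_rng = tx_state;      /* replay the same intended transaction */
            if (try_transaction())
                done = true;
            else
                errors++;               /* serialization/deadlock error seen */
        }
        if (done)
            committed++;
        else
            failed++;                   /* gave up after max_tries attempts */
    }
    printf("committed=%ld errors=%ld failed=%ld\n", committed, errors, failed);
    return 0;
}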