From: Marina Polyakova <m(dot)polyakova(at)postgrespro(dot)ru>
Cc: Teodor Sigaev <teodor(at)sigaev(dot)ru>, Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>
Subject: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
Here is the seventh version of the patch for error handling and
retrying of transactions with serialization/deadlock failures in pgbench
(based on commit a08dc711952081d63577fc182fcf955958f70add). I added
the option --max-tries-time, which is an implementation of Fabien Coelho's
proposal in : a transaction with a serialization or deadlock failure
can be retried if the total time of all its tries is less than this
limit (in ms). This option can be combined with the option --max-tries,
but if neither of them is used, failed transactions are not retried at all.
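The retry policy described above can be sketched as follows. This is an illustrative Python simulation, not the patch's C code: the names run_with_retries, TransientTxnError, and the parameter names are assumptions, chosen to mirror the --max-tries and --max-tries-time options.

```python
import time

class TransientTxnError(Exception):
    """Stands in for a serialization or deadlock failure (hypothetical)."""

def run_with_retries(txn, max_tries=None, max_tries_time_ms=None):
    """Run txn(); retry it on transient failures.

    A failed transaction is retried only while the number of tries
    stays below max_tries AND the total time of all tries stays below
    max_tries_time_ms. If neither limit is given, failed transactions
    are not retried at all. Returns (succeeded, tries_used).
    """
    start = time.monotonic()
    tries = 0
    while True:
        tries += 1
        try:
            txn()
            return True, tries
        except TransientTxnError:
            # No limit given at all: report the failure immediately.
            if max_tries is None and max_tries_time_ms is None:
                return False, tries
            if max_tries is not None and tries >= max_tries:
                return False, tries
            elapsed_ms = (time.monotonic() - start) * 1000.0
            if max_tries_time_ms is not None and elapsed_ms >= max_tries_time_ms:
                return False, tries
            # Otherwise: loop around and try the transaction again.
```

Note that the two limits combine conjunctively: whichever limit is hit first ends the retrying, matching the behavior described for combining --max-tries with --max-tries-time.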
* When the first failure occurs in a transaction, it is now always
reported as a failure, since only after the remaining commands of the
transaction are executed do we find out whether it can be retried.
The messages about retrying or ending a failed transaction are
therefore reported at the "fails" debugging level, so you can distinguish
failures (which are retried) from errors (which are not retried).
* Fixed the report of the latency average, since the total time includes
the time of both failed and successful transactions.
* Code cleanup (including tests).
> Maybe the max retry should rather be expressed in time rather than number
> of attempts, or both approaches could be implemented?
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company