Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors

From: Marina Polyakova <m(dot)polyakova(at)postgrespro(dot)ru>
To: Teodor Sigaev <teodor(at)sigaev(dot)ru>, Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
Date: 2018-03-30 12:20:16
Message-ID: fc2d3f13e4c2e4ebe061fb2e26f9f68b@postgrespro.ru
Lists: pgsql-hackers

On 29-03-2018 22:39, Fabien COELHO wrote:
>> The conception of the max-retry option seems strange to me. If the
>> number of retries reaches max-retry, then we just increment the
>> counter of failed transactions and try again (possibly with
>> different random numbers).

Then the client starts another script, although by chance, or because
of the number of scripts, it can be the same one.

>> At the end we should distinguish the number of errored transactions
>> from the number of failed transactions; to find this difference the
>> documentation suggests rerunning pgbench with debugging on.

If I understood you correctly, this difference is the total number of
retries, and it is included in all reports.

>> Maybe I didn't catch the idea, but it seems to me max-tries should
>> be removed. On a transaction serialization or deadlock error,
>> pgbench should increment the counter of failed transactions, reset
>> the conditional stack, variables, etc. (but not the random
>> generator), and then start a new transaction from the first line of
>> the script.

When I sent the first version of the patch there were only rollbacks,
and the idea of retrying failed transactions was approved (see [1],
[2], [3], [4]). And thank you, I have fixed the patch so that the
client variables are reset in case of errors too, and not only in
case of retries (see attached; it is based on commit
3da7502cd00ddf8228c9a4a7e4a08725decff99c).
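
To illustrate the semantics being discussed, here is a minimal
standalone sketch of the retry loop (all names are hypothetical; this
is not the code of the patch): the client saves its random generator
state before the first try, restores it on every retry so the very
same transaction is attempted again, and gives up after max-tries
attempts.

#include <stdio.h>
#include <stdbool.h>

#define MAX_TRIES 3

/* Tiny stand-in for the client's random generator. */
static unsigned int
next_rand(unsigned int *state)
{
    *state = *state * 1103515245u + 12345u;
    return (*state >> 16) & 0x7fff;
}

static unsigned int other_sessions = 7;     /* simulates concurrent load */

/*
 * One try of the transaction: the parameters come from param_rng (so
 * they are identical across retries); whether a serialization or
 * deadlock failure happens depends on the other sessions, not on our
 * parameters.
 */
static bool
try_transaction(unsigned int *param_rng)
{
    unsigned int aid = next_rand(param_rng) % 100000;

    (void) aid;                             /* would go into the SQL here */
    return next_rand(&other_sessions) % 4 != 0;   /* ~25% failures */
}

int
main(void)
{
    unsigned int rng = 42;                  /* the client's RNG state */
    int          committed = 0, failed = 0, retries = 0;

    for (int tx = 0; tx < 10; tx++)
    {
        unsigned int saved_rng = rng;       /* state for this transaction */
        int          tries = 0;

        for (;;)
        {
            rng = saved_rng;                /* same random numbers each try */
            tries++;
            if (try_transaction(&rng))
            {
                committed++;
                break;
            }
            if (tries >= MAX_TRIES)
            {
                failed++;                   /* give up on this transaction */
                break;
            }
            retries++;      /* reset variables/conditional stack here */
        }
    }
    printf("committed=%d failed=%d retries=%d\n", committed, failed, retries);
    return 0;
}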

> ISTM that the idea is that the client application should give up at
> some point and report an error to the end user, kind of a "timeout"
> on trying, and that max-retry would implement this logic of giving
> up: the transaction which was intended, represented by a given
> initial random generator state, could not be committed even after
> some iterations.
>
> Maybe the max retry should rather be expressed in time than in
> number of attempts, or both approaches could be implemented? But
> there is a logic of retrying the same thing (try again what the
> client wanted) vs retrying something different (another client need
> is served).
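
For concreteness, a giving-up test combining both suggestions could
look like this (a sketch only; the option names max_tries and
max_retry_time are hypothetical, not actual patch options):

#include <stdbool.h>
#include <stdio.h>

typedef struct RetryLimits
{
    int    max_tries;           /* 0 = unlimited attempts */
    double max_retry_time;      /* seconds; 0 = unlimited time */
} RetryLimits;

/*
 * Stop retrying once either max_tries attempts have been made or
 * max_retry_time seconds have elapsed since the transaction first
 * started; a zero limit means "no limit of this kind".
 */
static bool
can_retry(const RetryLimits *lim, int tries_done, double elapsed)
{
    if (lim->max_tries > 0 && tries_done >= lim->max_tries)
        return false;
    if (lim->max_retry_time > 0 && elapsed >= lim->max_retry_time)
        return false;
    return true;
}

int
main(void)
{
    RetryLimits lim = { .max_tries = 5, .max_retry_time = 2.0 };

    /* gives up here: 2.5 s exceed max_retry_time, even though only
     * 3 of the 5 allowed tries have been made */
    printf("%s\n", can_retry(&lim, 3, 2.5) ? "retry" : "give up");
    return 0;
}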

I'm afraid that we will have a problem in debugging mode: should we
report a failure (which will be retried) or an error (which will not
be retried)? Because only after executing the following script
commands (to roll back this transaction block) will we know the time
that was spent on the execution of the current script.

[1]
https://www.postgresql.org/message-id/CACjxUsOfbn72EaH4i_OuzdY-0PUYfg1Y3o8G27tEA8fJOaPQEw%40mail.gmail.com
[2]
https://www.postgresql.org/message-id/20170615211806.sfkpiy2acoavpovl%40alvherre.pgsql
[3]
https://www.postgresql.org/message-id/CAEepm%3D3TRTc9Fy%3DfdFThDa4STzPTR6w%3DRGfYEPikEkc-Lcd%2BMw%40mail.gmail.com
[4]
https://www.postgresql.org/message-id/CACjxUsOQw%3DvYjPWZQ29GmgWU8ZKj336OGiNQX5Z2W-AcV12%2BNw%40mail.gmail.com

--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachment Content-Type Size
v6-0001-Pgbench-errors-and-serialization-deadlock-retries.patch text/x-diff 123.7 KB
