Re: WIP Patch: Pgbench Serialization and deadlock errors

From: Marina Polyakova <m(dot)polyakova(at)postgrespro(dot)ru>
To: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>
Cc: PostgreSQL Developers <pgsql-hackers(at)postgresql(dot)org>, Kevin Grittner <kgrittn(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: WIP Patch: Pgbench Serialization and deadlock errors
Date: 2017-07-14 10:35:44
Message-ID: b8aa946758db5d8f1cd3a771e5733d69@postgrespro.ru
Lists: pgsql-hackers

On 13-07-2017 19:32, Fabien COELHO wrote:
> Hello,

Hi!

>> [...] I didn't add rollbacks to savepoints after a failure because
>> they cannot help with serialization failures at all: after a rollback
>> to savepoint, a new attempt will always be unsuccessful.
>
> Not necessarily? It depends on where the locks triggering the issue
> are set; if they are all set after the savepoint, it could work on a
> second attempt.

Don't you mean deadlock failures, where a rollback to savepoint really
can help? And could you please give an example where a rollback to
savepoint helps its subtransaction complete successfully after a
serialization failure?
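
To be sure we are talking about the same pattern, here is a rough
standalone libpq sketch (not code from the patch; the "accounts" table,
the connection settings and the helper are just placeholders) of what I
mean: after a deadlock (SQLSTATE 40P01) the rollback to savepoint
releases the locks taken after the savepoint, so retrying that part may
succeed, while after a serialization failure (SQLSTATE 40001) the
snapshot of the surrounding transaction is unchanged, so the retry
fails again:

/*
 * Standalone illustration only; not code from the patch, and the
 * "accounts" table is a placeholder.
 */
#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

/* Run one statement; on error, copy its 5-character SQLSTATE to "sqlstate". */
static int
run(PGconn *conn, const char *sql, char sqlstate[6])
{
    PGresult   *res = PQexec(conn, sql);
    ExecStatusType st = PQresultStatus(res);
    int         ok = (st == PGRES_COMMAND_OK || st == PGRES_TUPLES_OK);
    const char *code = PQresultErrorField(res, PG_DIAG_SQLSTATE);

    strcpy(sqlstate, code ? code : "");
    PQclear(res);
    return ok;
}

int
main(void)
{
    PGconn     *conn = PQconnectdb("");     /* settings taken from environment */
    char        sqlstate[6];
    char        failed[6];

    run(conn, "BEGIN ISOLATION LEVEL REPEATABLE READ", sqlstate);
    run(conn, "SAVEPOINT s", sqlstate);

    if (!run(conn, "UPDATE accounts SET balance = balance + 1 WHERE id = 1",
             sqlstate))
    {
        strcpy(failed, sqlstate);
        run(conn, "ROLLBACK TO SAVEPOINT s", sqlstate);

        if (strcmp(failed, "40P01") == 0)
        {
            /*
             * Deadlock: the locks taken after the savepoint are released
             * by the rollback, so retrying the statements after the
             * savepoint may succeed.
             */
        }
        else if (strcmp(failed, "40001") == 0)
        {
            /*
             * Serialization failure: the snapshot of the surrounding
             * transaction is unchanged, so a retry after the rollback to
             * savepoint conflicts again.
             */
        }
    }

    run(conn, "COMMIT", sqlstate);
    PQfinish(conn);
    return 0;
}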

>>> "SimpleStats attempts": I disagree with using this floating poiunt
>>> oriented structures to count integers. I would suggest "int64 tries"
>>> instead, which should be enough for the purpose.
>>
>> I'm not sure that it is enough. Firstly, there may be several
>> transactions in the script, so to compute the average number of
>> attempts you need to know the total number of transactions run.
>> Secondly, I think the stddev of the number of attempts can be quite
>> interesting, and often it is not close to zero.
>
> I would prefer to have a real motivation to add this complexity in the
> report and in the code. Without that, a simple int seems better for
> now. It can be improved later if the need really arises.

Ok!
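
Just to spell out the two options for the record (a rough sketch only,
with made-up names, not pgbench's actual SimpleStats code): a plain
int64 counter is enough for the average number of attempts once the
total number of executed transactions is known, while accumulating
count / sum / sum of squares also gives the stddev:

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Option 1: a plain counter; the average needs the number of executed
 * transactions from elsewhere. */
typedef struct { int64_t tries; } TriesCounter;

/* Option 2: accumulate count / sum / sum of squares, which also gives
 * the stddev. */
typedef struct { int64_t count; double sum; double sum2; } TriesStats;

static void
add_tries(TriesStats *ts, double tries)
{
    ts->count++;
    ts->sum += tries;
    ts->sum2 += tries * tries;
}

int
main(void)
{
    TriesStats  ts = {0, 0.0, 0.0};

    add_tries(&ts, 1);
    add_tries(&ts, 3);
    add_tries(&ts, 2);

    {
        double mean = ts.sum / ts.count;
        double stddev = sqrt(ts.sum2 / ts.count - mean * mean);

        printf("attempts: mean %.2f, stddev %.2f\n", mean, stddev);
    }
    return 0;
}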

>>> Some variables, such as "int attempt_number", should be in the client
>>> structure, not in the client? Generally, try to use block variables if
>>> possible to keep the states clearly disjoint. If there could be NO new
>>> variable at the doCustom level that would be great, because that would
>>> ensure that there is no machine state mixup hidden in these variables.
>>
>> Do you mean a code cleanup of the doCustom function? Because if I do
>> that, there will be two code styles for state blocks and their
>> variables in this function.
>
> I think that any variable shared between states is a recipe for bugs
> if it is not reset properly, so they should be avoided. Maybe there
> are already too many of them; if so, too bad, but that is not a reason
> to add more. The situation before the automaton was a nightmare.

Ok!
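
To check that I understand the concern, here is a tiny sketch (purely
illustrative, with made-up names, not the real pgbench automaton) of
what "reset properly" would mean for an attempt counter kept in the
client state:

/* Purely illustrative; not the pgbench automaton or its real state names. */
typedef enum
{
    STATE_START_TX,                 /* begin a new transaction */
    STATE_RUN_COMMAND,              /* run the next command, maybe retry */
    STATE_END_TX                    /* finish the transaction */
} SketchState;

typedef struct
{
    SketchState state;
    int         attempt_number;     /* shared between states, so it needs
                                     * one well-defined reset point */
} SketchClient;

void
advance(SketchClient *cl)
{
    switch (cl->state)
    {
        case STATE_START_TX:
            cl->attempt_number = 0; /* reset on entry, every time */
            cl->state = STATE_RUN_COMMAND;
            break;
        case STATE_RUN_COMMAND:
            cl->attempt_number++;   /* so the count is always per transaction */
            cl->state = STATE_END_TX;
            break;
        case STATE_END_TX:
            break;
    }
}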

>>> I would suggest a more compact one-line report about failures:
>>>
>>> "number of failures: 12 (0.001%, deadlock: 7, serialization: 5)"
>>
>> I think there may be a misunderstanding, because a script can contain
>> several transactions and get both kinds of failures.
>
> I do not understand. Both failure numbers are on the compact line I
> suggested.

I mean that the sum of transactions with a serialization failure and
transactions with a deadlock failure can be greater than the total
number of transactions with failures. But if you think it's OK, I'll
change it and add an appropriate note to the documentation.
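
For concreteness, I would print something like this rough sketch
(made-up names, not the patch's code); with several transactions per
script, the deadlock and serialization counts can add up to more than
the total:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Sketch of the compact one-line report suggested above.  A script with
 * several transactions may hit both kinds of failures in one run, so
 * serialization + deadlocks can exceed "failures".
 */
static void
print_failures(int64_t total, int64_t failures,
               int64_t serialization, int64_t deadlocks)
{
    printf("number of failures: %" PRId64 " (%.3f%%, deadlock: %" PRId64
           ", serialization: %" PRId64 ")\n",
           failures,
           total > 0 ? 100.0 * failures / total : 0.0,
           deadlocks, serialization);
}

int
main(void)
{
    /* reproduces the example line from the quoted suggestion */
    print_failures(1200000, 12, 5, 7);
    return 0;
}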

--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
