Re: WIP Patch: Pgbench Serialization and deadlock errors

From: Marina Polyakova <m(dot)polyakova(at)postgrespro(dot)ru>
To: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>
Cc: PostgreSQL Developers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: WIP Patch: Pgbench Serialization and deadlock errors
Date: 2017-07-03 12:07:11
Message-ID: 01df6ca86c78e2c80b4e4d021c99d53a@postgrespro.ru
Lists: pgsql-hackers

> The current error handling is either "close connection" or maybe in
> some cases even "exit". If this is changed, then the client may
> continue execution in some unforeseen state and behave unexpectedly.
> We'll see.

Thanks, now I understand this.

>>> ISTM that the retry implementation should be implemented somehow in
>>> the automaton, restarting the same script from the beginning.
>>
>> If there are several transactions in this script, don't you think
>> that we should restart only the failed transaction?
>
> On some transaction failures based on their status. My point is that
> the retry process must be implemented clearly with a new state in the
> client automaton. Exactly when the transition to this new state must
> be taken is another issue.

On this point, I agree with you that it should be done this way.
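
For what it's worth, here is a rough sketch of how such a retry state
could look in a simplified client automaton. The state names and the
next_state_on_error() helper below are a simplified stand-in, not the
actual pgbench code:

#include <stdbool.h>
#include <string.h>

/* Simplified stand-in for the pgbench client states (illustrative only). */
typedef enum
{
    CSTATE_START_TX,     /* remember where the transaction begins */
    CSTATE_COMMAND,      /* send/receive one script command */
    CSTATE_ERROR,        /* decide whether the failure is retryable */
    CSTATE_RETRY,        /* roll back and restart the failed transaction */
    CSTATE_END_TX        /* transaction finished (success or given up) */
} RetryState;

/*
 * In the error state, only serialization failures (SQLSTATE 40001) and
 * deadlocks (40P01) are retried, and only while the attempt counter is
 * below the configured maximum.
 */
static RetryState
next_state_on_error(const char *sqlstate, int attempts, int max_attempts)
{
    bool retryable = strcmp(sqlstate, "40001") == 0 ||
                     strcmp(sqlstate, "40P01") == 0;

    if (retryable && attempts < max_attempts)
        return CSTATE_RETRY;

    return CSTATE_END_TX;   /* give up and count the failure */
}

The transition out of CSTATE_RETRY would then go back to
CSTATE_START_TX, so the retry logic stays visible in one place.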

>>> The number of retries and maybe failures should be counted, maybe
>>> with
>>> some adjustable maximum, as suggested.
>>
>> If we fix the maximum number of attempts, the maximum number of
>> failures for one script execution will be bounded above by
>> (number_of_transactions_in_script * maximum_number_of_attempts). Do
>> you think we should add an option to the program to limit this
>> number further?
>
> Probably not. I think that there should be a configurable maximum of
> retries on a transaction, which may be 0 by default if we want to be
> upward compatible with the current behavior, or maybe something else.

I propose the option --max-attempts-number=NUM, where NUM cannot be
less than 1. I propose it because I think that, for example,
--max-attempts-number=100 reads better than --max-retries-number=99.
And maybe it is better to set its default value to 1 too, because
retrying shell commands can produce new errors...
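
As a very rough illustration of the behaviour I have in mind (the
option name and the parser below are hypothetical, not code from the
patch):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical handling of --max-attempts-number=NUM: default 1, NUM >= 1. */
static int max_attempts = 1;

static void
set_max_attempts(const char *arg)
{
    char   *end;
    long    val = strtol(arg, &end, 10);

    if (*end != '\0' || val < 1)
    {
        fprintf(stderr,
                "invalid argument for --max-attempts-number: \"%s\" "
                "(must be an integer >= 1)\n", arg);
        exit(1);
    }
    max_attempts = (int) val;
}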

>>> In doLog, added columns should be at the end of the format.
>>
>> I inserted them earlier because these columns are not optional. Do
>> you think they should be optional?
>
> I think that new non-optional columns should go at the end of the
> existing non-optional columns, so that existing scripts which may
> process the output do not need to be updated.

Thanks, I agree with you :)
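
To make the ordering concrete, here is a simplified sketch of a log
line writer with the new counters appended after the existing
non-optional columns; the field list is illustrative, not the real
doLog() signature:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical log writer: the existing non-optional fields come first
 * and the new retries/failures counters are appended at the end, so
 * scripts that parse fixed leading columns keep working unchanged.
 */
static void
log_transaction(FILE *logfile, int client_id, int64_t tx_no,
                double latency_us, int script_no,
                int retries, int failures)
{
    fprintf(logfile, "%d %" PRId64 " %.0f %d %d %d\n",
            client_id, tx_no, latency_us, script_no,
            retries, failures);
}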

>>> I'm not sure that there should be a new option to report failures;
>>> the information, when relevant, should be integrated in a clean
>>> format into the existing reports... Maybe the "per command latency"
>>> report/option should be renamed if it becomes more general.
>>
>> I have tried not to change other parts of the program more than
>> necessary. But if you think it will be more useful to change the
>> option, I'll do it.
>
> I think that the option should change if its naming becomes less
> relevant, which is to be determined. AFAICS, ISTM that new measures
> should be added to the various existing reports unconditionally (i.e.
> without a new option), so maybe no new option would be needed.

Thanks! I hadn't thought about it that way.

--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
