Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors

From: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>
To: Marina Polyakova <m(dot)polyakova(at)postgrespro(dot)ru>
Cc: pgsql-hackers(at)postgresql(dot)org, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Ildus Kurbangaliev <i(dot)kurbangaliev(at)postgrespro(dot)ru>, Teodor Sigaev <teodor(at)sigaev(dot)ru>
Subject: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
Date: 2018-08-15 08:50:48
Message-ID: alpine.DEB.2.21.1808151046090.30050@lancre


Hello Marina,

> v10-0004-Pgbench-errors-and-serialization-deadlock-retrie.patch
> - the main patch for handling client errors and repetition of transactions
> with serialization/deadlock failures (see the detailed description in the
> file).

Patch applies cleanly.

It allows retrying a script (considered as a transaction) on serialization
and deadlock errors, which is a very interesting extension, but one which
also impacts pgbench significantly.

I'm waiting for the feature to settle before reviewing the documentation
and tests in full. There are still some issues to resolve before getting
there.

Anyway, the tests look reasonable. Taking advantage of transaction control
from PL/pgSQL is a good use of this new feature.

A few comments about the doc.

According to the documentation, the feature is triggered by --max-tries and
--latency-limit. I disagree with the latter, because it means that having a
latency limit without retrying is not supported anymore.

Maybe you could allow an "unlimited" max-tries, say with the special value
zero, and then the latency limit, if set, does its job over all tries.
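
To illustrate, the retry decision could then test something like this (a
sketch with hypothetical names; latencyExceeded is made up):

    /* zero means unlimited tries; the latency limit, if set,
     * still applies over all tries of the transaction */
    bool retry_allowed = (max_tries == 0 || st->tries < max_tries) &&
                         (latency_limit == 0 || !latencyExceeded(st));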

Doc: "error in meta commands" -> "meta command errors", for homogeneity with
other cases?

Detailed -r report. I understand from the doc that the retry number on the
detailed per-statement report is there to identify at what point errors
occur? Probably this is more or less always at the same point in a given
script, so the most interesting feature is to report the number of retries
at the script level.

Doc: "never occur.." -> "never occur", or eventually "...".

Doc: "Directly client errors" -> "Direct client errors".

I'm still in favor of asserting that the SQL connection is idle (no
transaction in progress) at the beginning and/or end of a script, and
reporting a user error if it is not, instead of writing complex caveats.

If someone has a use-case for that, then maybe it can be changed, but I
cannot see any in a benchmarking context, and I can see how easy it is
to have a buggy script with this allowed.
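
Something along these lines, with libpq's PQtransactionStatus (a sketch
only):

    /* refuse a script that ends inside a transaction block */
    if (PQtransactionStatus(st->con) != PQTRANS_IDLE)
    {
        pgbench_error(LOG, "client %d: script left a transaction in progress\n",
                      st->id);
        st->state = CSTATE_ABORTED;
        break;
    }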

I do not think that the RETRIES_ENABLED macro is a good thing. I'd suggest
writing the condition out four times.

ISTM that "skipped" transactions are NOT "successful" so there are a problem
with comments. I believe that your formula are probably right, it has more to do
with what is "success". For cnt decomposition, ISTM that "other transactions"
are really "directly successful transactions".

I'd suggest putting "ANOTHER_SQL_FAILURE" as the last option, otherwise
"another" does not make sense yet. I'd also suggest naming it
"OTHER_SQL_FAILURE".
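
I.e. something like (a sketch; the other member names are guesses at the
patch's enum):

    typedef enum FailureStatus
    {
        NO_FAILURE,
        META_COMMAND_FAILURE,
        SERIALIZATION_FAILURE,
        DEADLOCK_FAILURE,
        OTHER_SQL_FAILURE    /* renamed and last, so "other" makes sense */
    } FailureStatus;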

In TState, field "uint32 retries": maybe it would be simpler to count
"tries", which can be compared directly to the max tries set in the option?

ErrorLevel: I have already commented on this in my review of 10.2. I'm not
sure about the LOG -> DEBUG_FAIL changes. I do not understand the name
"DEBUG_FAIL", as it is not related to debugging; these just seem to be
internal errors. META_ERROR maybe?

inTransactionBlock: I disagree with any function other than doCustom changing
the client state, because it makes the state machine harder to understand.
There is already one exception to that (threadRun) which I wish to remove.
All state changes must be performed explicitly in doCustom.

The automaton skips to FAILURE on every possible error. I'm wondering whether
it could do so only on SQL errors, because other failures will lead to
ABORTED anyway. If there is no good reason to skip to FAILURE from some
errors, I'd suggest keeping the previous behavior. Maybe the good reason is
to do some counting, but this means that on e.g. meta command errors the
script would now loop over instead of aborting, which does not look like a
desirable change of behavior.

PQexec("ROOLBACK"): you are inserting a synchronous command, for which the
thread will have to wait for the result, in a middle of a framework which
takes great care to use only asynchronous stuff so that one thread can
manage several clients efficiently. You cannot call PQexec there.
From where I sit, I'd suggest to sendQuery("ROLLBACK"), then switch to
a new state CSTATE_WAIT_ABORT_RESULT which would be similar to
CSTATE_WAIT_RESULT, but on success would skip to RETRY or ABORT instead
of proceeding to the next command.
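
Roughly (an untested sketch, assuming the patch's RETRY state is
CSTATE_RETRY; doRetry is the test suggested below):

    case CSTATE_WAIT_ABORT_RESULT:
        /* non-blocking wait for the ROLLBACK result, as in CSTATE_WAIT_RESULT */
        if (!PQconsumeInput(st->con))
        {
            /* the connection is gone, nothing to salvage */
            st->state = CSTATE_ABORTED;
            break;
        }
        if (PQisBusy(st->con))
            return;             /* result not ready yet, back to waiting */
        while ((res = PQgetResult(st->con)) != NULL)
            PQclear(res);       /* discard the ROLLBACK result */
        st->state = doRetry(st) ? CSTATE_RETRY : CSTATE_ABORTED;
        break;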

ISTM that it would be more logical to only get into RETRY if there is an
actual retry, i.e. to move the RETRY/ABORT test into FAILURE. For that,
instead of "canRetry", maybe you want "doRetry", which tells that a retry is
possible (the error is a serialization failure or deadlock) and that the
current parameters allow it (timeout, max tries).
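
That is, something like (a sketch; the failure status names are guesses at
the patch's enum, and latencyExceeded is made up):

    /* whether a retry is possible and the current parameters allow it */
    static bool
    doRetry(CState *st)
    {
        /* only serialization failures and deadlocks are worth retrying */
        if (st->failure_status != SERIALIZATION_FAILURE &&
            st->failure_status != DEADLOCK_FAILURE)
            return false;
        /* zero max_tries meaning unlimited, as suggested above */
        if (max_tries != 0 && st->tries >= max_tries)
            return false;
        /* the latency limit, if set, applies over all tries */
        if (latency_limit != 0 && latencyExceeded(st))
            return false;
        return true;
    }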

* Minor C style comments:

if / else if / else if ... on *_FAILURE: I'd suggest a switch.
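
E.g. (a sketch; the counter names are guesses):

    switch (st->failure_status)
    {
        case SERIALIZATION_FAILURE:
            stats->serialization_failures++;
            break;
        case DEADLOCK_FAILURE:
            stats->deadlock_failures++;
            break;
        case OTHER_SQL_FAILURE:
            stats->other_sql_failures++;
            break;
        default:
            break;
    }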

The following line removal does not seem useful; I'd have kept it:

stats->cnt++;
-
if (skipped)

copyVariables: I'm not convinced that the source_vars and nvars variables
are that useful.

memcpy(&(st->retry_state.random_state), &(st->random_state), sizeof(RandomState));

Is there a problem with "st->retry_state.random_state = st->random_state;"
instead of memcpy? ISTM that simple struct assignment works in C. Idem for
the reverse copy under RETRY.

if (!copyVariables(&st->retry_state.variables, &st->variables)) {
pgbench_error(LOG, "client %d aborted when preparing to execute a transaction\n", st->id);

The message could be more precise, e.g. "client %d failed while copying
variables", unless copyVariables already printed a message. As this is really
an internal pgbench error, I'd rather do a FATAL (direct exit) there.
ISTM that the only possible failure here is OOM, and pgbench is in very bad
shape if it gets into that.
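
I.e. something like (assuming a FATAL error level which exits directly):

    if (!copyVariables(&st->retry_state.variables, &st->variables))
        pgbench_error(FATAL, "client %d failed while copying variables\n",
                      st->id);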

commandFailed: I'm not thrilled by the added boolean, which is partially
redundant with the second argument.

 if (per_script_stats)
-    accumStats(&sql_script[st->use_file].stats, skipped, latency, lag);
+{
+    accumStats(&sql_script[st->use_file].stats, skipped, latency, lag,
+               st->failure_status, st->retries);
+}
 }

I do not see the point of changing the style here.

--
Fabien.
