Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors

From: Yugo NAGATA <nagata(at)sraoss(dot)co(dot)jp>
To: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>
Cc: Tatsuo Ishii <ishii(at)sraoss(dot)co(dot)jp>, thomas(dot)munro(at)gmail(dot)com, m(dot)polyakova(at)postgrespro(dot)ru, alvherre(at)2ndquadrant(dot)com, pgsql-hackers(at)postgresql(dot)org, teodor(at)sigaev(dot)ru
Subject: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
Date: 2022-02-18 14:45:49
Message-ID: 20220218234549.9a3585d555f23f106021add6@sraoss.co.jp
Lists: pgsql-hackers

Hello Fabien,

Thank you so much for your review.

Sorry for the late reply. I had stopped working on this due to other
jobs, but I have come back to it. I've attached the updated patch. I
would appreciate it if you could review it again.

On Mon, 19 Jul 2021 20:04:23 +0200 (CEST)
Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr> wrote:

> # About pgbench error handling v15
>
> Patches apply cleanly. Compilation, global and local tests ok.
>
> - v15.1: refactoring is a definite improvement.
> Good, even if it is not very useful (see below).

Ok, we don't need to save variables in order to implement the retry
feature in pgbench, as you suggested. Should we completely separate
these two patches, and should I fix v15.2 so that it does not rely on v15.1?

> While restructuring, maybe predefined variables could be make readonly
> so that a script which would update them would fail, which would be a
> good thing. Maybe this is probably material for an independent patch.

Yes, it should be material for an independent patch.

> - v15.2: see detailed comments below
>
> # Doc
>
> Doc build is ok.
>
> ISTM that "number of tries" line would be better placed between the
> #threads and #transactions lines. What do you think?

Agreed. Fixed.

> Aggregate logging description: "{ failures | ... }" seems misleading
> because it suggests we have one or the other, whereas it can also be
> empty. I suggest: "{ | failures | ... }" to show the empty case.

The description is correct because either "failures" or both
"serialization_failures" and "deadlock_failures" always appear in aggregate
logging. If "failures" were printed only when a transaction failed,
lines in the aggregate log could have different numbers of columns,
which would make the results difficult to parse.
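To illustrate the point about parsing, here is a minimal sketch in Python. The line layout below is hypothetical (not the exact pgbench aggregate-log format); the point is that a fixed column count lets a parser use positional access unconditionally, even when the failure counts are zero:

```python
# Sketch: why a fixed column count simplifies parsing of aggregate logs.
# The field layout here is hypothetical, not the exact pgbench format.
def parse_aggregate_line(line):
    fields = line.split()
    # With a fixed layout, positional access is always safe, even when
    # the failure count is 0 -- no per-line guessing about column meaning.
    return {
        "interval_start": int(fields[0]),
        "num_transactions": int(fields[1]),
        "failures": int(fields[2]),
    }

row = parse_aggregate_line("1644987949 1000 0")
```

If "failures" were omitted on all-success intervals, the parser would instead have to branch on the number of fields per line.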

> I'd wonder whether the number of tries is set too high,
> though, ISTM that an application should give up before 100?

Indeed, max-tries=100 seems too high for a practical system.

Also, I noticed that the sum of the latencies of each command (= 15.839 ms)
is significantly larger than the latency average (= 10.870 ms),
because the "per command" results in the documentation were fixed values.

So, I re-measured on my machine to make the documentation more accurate,
using max-tries=10.

> Minor editing:
>
> "there're" -> "there are".
>
> "the --time" -> "the --time option".

Fixed.

> "The latency for failed transactions and commands is not computed separately." is unclear,
> please use a positive sentence to tell what is true instead of what is not and the reader
> has to guess. Maybe: "The latency figures include failed transactions which have reached
> the maximum number of tries or the transaction latency limit.".

I'm not the original author of this description, but I guess it means: "The
latency is measured only for successful transactions and commands, not for
failed transactions or commands."

> "The main report contains the number of failed transactions if it is non-zero." ISTM that
> this is a pain for scripts which would like to process these reports data, because the data
> may or may not be there. I'm sure to write such scripts, which explains my concern:-)

I agree with you. I changed the behavior to always report the number of
failed transactions, regardless of whether it is non-zero.

> "If the total number of retried transactions is non-zero…" should it rather be "not one",
> because zero means unlimited retries?

I guess this refers to the actual number of retried transactions, not
max-tries, so "non-zero" was correct. However, for the same reason as above,
I changed the behavior to always report the retry statistics regardless of
the actual number of retries.

>
> # FEATURES

> Copying variables: ISTM that we should not need to save the variables
> states… no clearing, no copying should be needed. The restarted
> transaction simply overrides the existing variables which is what the
> previous version was doing anyway. The scripts should write their own
> variables before using them, and if they don't then it is the user
> problem. This is important for performance, because it means that after a
> client has executed all scripts once the variable array is stable and does
> not incur significant maintenance costs. The only thing that needs saving
> for retry is the speudo-random generator state. This suggest simplifying
> or removing "RetryState".

Yes. Saving the variable states is not necessary because we retry the
whole script. It was necessary in the initial patch because that patch
planned to retry a single transaction within the script. I removed
RetryState and copyVariables.
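The retry approach above can be sketched as follows. This is a Python analogy, not the patch's C code: no variable snapshotting is done; only the pseudo-random generator state is saved before each attempt, so a retried script replays the same random choices. `run_script` is a hypothetical stand-in for executing one pgbench script:

```python
import random

# Sketch of retrying a whole script while saving only the PRNG state.
# run_script is a hypothetical callable that raises on a retriable error
# (e.g. a serialization failure or deadlock).
def run_with_retry(run_script, max_tries):
    for attempt in range(1, max_tries + 1):
        rng_state = random.getstate()      # the only state worth saving
        try:
            return run_script()
        except Exception:
            if attempt == max_tries:
                raise                      # give up: transaction failed
            random.setstate(rng_state)     # replay identical random values
```

Because the generator state is restored, the retried attempt draws exactly the same random values as the failed one, which is what makes the retry a true replay.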

> # CODE

> In readCommandResponse: ISTM that PGRES_NONFATAL_ERROR is not needed and
> could be dealt with the default case. We are only interested in
> serialization/deadlocks which are fatal errors?

We need PGRES_NONFATAL_ERROR to save st->estatus. It is used outside
readCommandResponse to determine whether we should abort or not.

> doRetry: for consistency, given the assert, ISTM that it should return
> false if duration has expired, by testing end_time or timer_exceeded.

Ok. I fixed doRetry to also check timer_exceeded.

> checkTransactionStatus: this function does several things at once with 2
> booleans, which make it not very clear to me. Maybe it would be clearer if
> it would just return an enum (in trans, not in trans, conn error, other
> error). Another reason to do that is that on connection error pgbench
> could try to reconnect, which would be an interesting later extension, so
> let's pave the way for that. Also, I do not think that the function
> should print out a message, it should be the caller decision to do that.

OK. I added a new enum type TStatus and I fixed the function to return it.
Also, I changed the function name to getTransactionStatus because the
actual check is done by the caller.
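The shape of this refactoring can be sketched like so. This is Python standing in for the C enum in the patch, and the member names here are illustrative (the actual names in pgbench may differ); the key point from the review is that the status function only classifies, and the caller decides what to do:

```python
from enum import Enum, auto

# Sketch of a TStatus-style enum: one value per outcome, replacing the
# two-boolean interface. Member names are illustrative, not the patch's.
class TStatus(Enum):
    IDLE = auto()          # not inside a transaction block
    IN_BLOCK = auto()      # inside a transaction block
    CONN_ERROR = auto()    # connection is broken
    OTHER_ERROR = auto()   # unexpected transaction status

def decide_action(status):
    # The caller decides the reaction (and any message printing),
    # not the status-reporting function itself.
    if status is TStatus.IN_BLOCK:
        return "rollback"
    if status is TStatus.CONN_ERROR:
        return "abort-or-reconnect"  # paves the way for later reconnection
    if status is TStatus.OTHER_ERROR:
        return "abort"
    return "continue"
```

Returning a distinct value for connection errors is what leaves room for the reconnect extension mentioned in the review.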

> verbose_errors: there is more or less repeated code under RETRY and
> FAILURE, which should be factored out in a separate function. The
> advanceConnectionFunction is long enough. Once this is done, there is no
> need for a getLatencyUsed function.

OK. I made a function to print verbose error messages and removed the
getLatencyUsed function.

> I'd put cleaning up the pipeline in a function. I do not understand why
> the pipeline mode is not exited in all cases, the code checks for the
> pipeline status twice in a few lines. I'd put this cleanup in the sync
> function as well, report to the caller (advanceConnectionState) if there
> was an error, which would be managed there.

I fixed the code to exit pipeline mode whenever an error occurs in pipeline
mode. Also, I added a PQpipelineSync call that was missing in the previous
patch.

> WAIT_ROLLBACK_RESULT: consumming results in a while could be a function to
> avoid code repetition (there and in the "error:" label in
> readCommandResponse). On the other hand, I'm not sure why the loop is
> needed: we can only get there by submitting a "ROLLBACK" command, so there
> should be only one result anyway?

Right. We should receive just one PGRES_COMMAND_OK followed by NULL.
I eliminated the loop.

> report_per_command: please always count retries and failures of commands
> even if they will not be reported in the end, the code will be simpler and
> more efficient.

Ok. I fixed the code to count retries and failures of commands even when
report_per_command is false.

> doLog: the format has changed, including a new string on failures which
> replace the time field. Hmmm. Cannot say I like it much, but why not. ISTM
> that the string could be shorten to "deadlock" or "serialization". ISTM
> that the documentation example should include a line with a failure, to
> make it clear what to expect.

I fixed getResultString to return "deadlock" or "serialization" instead of
"deadlock_failure" or "serialization_failure". Also, I added an output
example to the documentation.

> I'm okay with always getting computing thread stats.
>
> # COMMENTS
>
> struct StatsData comment is helpful.
> - "failed transactions" -> "unsuccessfully retried transactions"?

That seems an accurate description. However, "failed transaction" is
short and simple, and it is used in several places, so instead of
replacing them I added the following statement to define it:

"A failed transaction is defined as an unsuccessfully retried transaction."

> - 'cnt' decomposition: first term is field 'retried'? if so say it
> explicitely?

No. 'retried' includes unsuccessfully retried transactions, but 'cnt'
includes only successfully retried transactions.
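A small worked example may make the bookkeeping concrete. The variable names below are illustrative, not the actual pgbench struct fields:

```python
# Worked example of the counter decomposition described above.
# Suppose 100 transactions ran: 90 succeed on the first try, 7 succeed
# after retrying, and 3 exhaust max-tries and fail.
first_try_ok = 90
retried_ok = 7         # succeeded after at least one retry
retried_failed = 3     # gave up after max-tries

cnt = first_try_ok + retried_ok        # successful transactions only
retried = retried_ok + retried_failed  # every transaction that was retried
failed = retried_failed                # unsuccessfully retried transactions
```

So 'retried' (10) is not a term of 'cnt' (97): they overlap only in the successfully retried transactions.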

> "Complete the failed transaction" sounds strange: If it failed, it could
> not complete? I'd suggest "Record a failed transaction".

Sounds good. Fixed.

> # TESTS
>
> I suggested to simplify the tests by using conditionals & sequences. You
> reported that you got stuck. Hmmm.
>
> I tried again my tests which worked fine when started with 2 clients,
> otherwise they get stuck because the first client waits for the other one
> which does not exists (the point is to generate deadlocks and other
> errors). Maybe this is your issue?

That seems right. It got stuck when I used the -T option rather than -t;
I guess that was because the number of transactions run on each thread
differed.

> Could you try with:
>
> psql < deadlock_prep.sql
> pgbench -t 4 -c 2 -f deadlock.sql
> # note: each deadlock detection takes 1 second
>
> psql < deadlock_prep.sql
> pgbench -t 10 -c 2 -f serializable.sql
> # very quick 50% serialization errors

That works. However, it still hangs when --max-tries=2, so I don't
think we can use it for testing the retry feature....

Regards,
Yugo Nagata

--
Yugo NAGATA <nagata(at)sraoss(dot)co(dot)jp>

Attachment Content-Type Size
v16-0002-Pgbench-errors-and-serialization-deadlock-retrie.patch text/x-diff 81.7 KB
v16-0001-Pgbench-errors-use-the-Variables-structure-for-c.patch text/x-diff 15.2 KB
