Re: Suggestion to add --continue-client-on-abort option to pgbench

From: Yugo Nagata <nagata(at)sraoss(dot)co(dot)jp>
To: "Hayato Kuroda (Fujitsu)" <kuroda(dot)hayato(at)fujitsu(dot)com>
Cc: 'Rintaro Ikeda' <ikedarintarof(at)oss(dot)nttdata(dot)com>, "slpmcf(at)gmail(dot)com" <slpmcf(at)gmail(dot)com>, "boekewurm+postgres(at)gmail(dot)com" <boekewurm+postgres(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Srinath Reddy Sadipiralla <srinath2133(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Subject: Re: Suggestion to add --continue-client-on-abort option to pgbench
Date: 2025-06-26 09:47:33
Message-ID: 20250626184733.b019eb29d351c03b58909e06@sraoss.co.jp
Lists: pgsql-hackers

On Thu, 26 Jun 2025 05:45:12 +0000
"Hayato Kuroda (Fujitsu)" <kuroda(dot)hayato(at)fujitsu(dot)com> wrote:

> Dear Nagata-san,
>
> > As I understand it, the current patch aims to allow continuation only after
> > SQL-level errors, such as constraint violations. That seems reasonable, as it
> > can simulate the behavior of applications that ignore or retry such errors
> > (even though retries are not implemented in the current patch).
>
> Yes, no one has objections to retrying in this case. This is the main part of the proposal.

As I understand it, the proposed --continue-on-error option does not retry the transaction
in any case; it simply gives up on the transaction. That is, when an SQL-level error occurs,
the transaction is reported as "failed" rather than "retried", and the random state is discarded.
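As a rough illustration (the table here is made up, and --continue-on-error is just the
option name used in this thread), a custom script like the following hits occasional
unique violations under concurrency; such a transaction would simply be counted as
"failed" and the client would move on without retrying:

```
-- assumes something like: CREATE TABLE test_unique (aid int PRIMARY KEY);
-- run with e.g.: pgbench -n -c 10 -T 60 -f script.sql --continue-on-error
\set aid random(1, 1000)
-- concurrent clients that pick the same :aid get a unique violation here;
-- with the proposed option the transaction is reported as "failed" (not
-- retried) and the client continues instead of aborting
INSERT INTO test_unique (aid) VALUES (:aid);
```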

>
> > However, I'm not sure it's reasonable to allow continuation after other types
> > of errors, such as misuse of meta-commands or unexpected errors during their
> > execution, since these wouldn't simulate any real application behavior and
> > would more likely indicate a failure in the benchmarking process itself.
>
> I have a concern about the \gset meta-command.
> According to the docs and source code, \gset assumes that the executed command
> returns exactly one tuple:
>
> ```
> if (meta == META_GSET && ntuples != 1)
> {
>     /* under \gset, report the error */
>     pg_log_error("client %d script %d command %d query %d: expected one row, got %d",
>                  st->id, st->use_file, st->command, qrynum, PQntuples(res));
>     st->estatus = ESTATUS_META_COMMAND_ERROR;
>     goto error;
> }
> ```
>
> But sometimes the SQL command may return no tuple, or multiple ones, due to
> concurrent transactions. I feel retrying the transaction would be very useful
> in this case.

You can use the \aset command instead to avoid this pgbench error. If the query doesn't
return any row, a subsequent SQL command trying to use the variable will fail, but this
failure would be ignored without terminating the benchmark when the --continue-on-error
option is enabled.
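For example (a sketch against the standard pgbench_accounts table; the extra filter is
only there to make the zero-row case plausible):

```
\set aid random(1, 100000)
-- with \gset, zero rows here is a meta-command error and the client aborts:
--   SELECT abalance FROM pgbench_accounts WHERE aid = :aid AND abalance > 0 \gset
-- with \aset, :abalance is simply left unset when no row is returned:
SELECT abalance FROM pgbench_accounts WHERE aid = :aid AND abalance > 0 \aset
-- only this command then fails, and per the point above, --continue-on-error
-- would let the client continue rather than terminate the benchmark
UPDATE pgbench_accounts SET abalance = :abalance + 1 WHERE aid = :aid;
```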

> Anyway, we must confirm the opinion from the proposer.

+1

Best regards,
Yugo Nagata

--
Yugo Nagata <nagata(at)sraoss(dot)co(dot)jp>
