From: Peifeng Qiu <pgsql(at)qiupf(dot)dev>
To: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: psql: stop at error immediately during \copy
Date: 2022-12-29 03:18:33
Message-ID: CAPH51bdPp7gR1ca76Rh2f-BJg4BGDpb3Shzko8jEtNM1r_zWyw@mail.gmail.com
Lists: pgsql-hackers

Hi, hackers.

The psql \copy command can stream data from the client host, just like the normal
COPY command can do on the server host. Let's assume we want to stream a
local data file from psql:
pgsql =# \copy tbl from '/tmp/datafile' (format 'text');
If there's an error inside the data file, \copy will still stream the whole data file
before it reports the error. This is undesirable if the data file is very large,
or if it's an infinite pipe. The normal COPY command, which reads the file on the
server host, reports the error immediately as expected.

The problem seems to be in pqParseInput3(). When an error occurs in the server
backend, it sends an 'E' (ErrorResponse) message back to the client. But during a
\copy command the connection's asyncStatus is PGASYNC_COPY_IN, so any 'E'
message gets ignored by this path:

else if (conn->asyncStatus != PGASYNC_BUSY)
{
    /* If not IDLE state, just wait ... */
    if (conn->asyncStatus != PGASYNC_IDLE)
        return;

So the client cannot detect the error sent back by the server.

I've attached a patch to demonstrate one way to work around this: save the
error via pqGetErrorNotice3() if the connection is in PGASYNC_COPY_IN status.
The client code (psql) can then detect the error via PQerrorMessage().
There are probably still lots of details to consider, but this should be good
enough to start the discussion. Any thoughts on this issue?
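
Roughly, the idea looks like this (a simplified sketch of the approach, not the
exact patch hunk): let pqParseInput3() absorb an ErrorResponse while the
connection is in COPY_IN state, next to the existing NOTIFY/NOTICE handling,
instead of letting it fall into the early-return path quoted above:

/* Sketch only: message dispatch near the top of pqParseInput3()'s loop */
if (id == 'A')
{
    if (getNotify(conn))
        return;
}
else if (id == 'N')
{
    if (pqGetErrorNotice3(conn, false))
        return;
}
/* New case (sketch): save a server error received while streaming COPY data */
else if (id == 'E' && conn->asyncStatus == PGASYNC_COPY_IN)
{
    if (pqGetErrorNotice3(conn, true))
        return;                 /* full message not arrived yet, wait for more */
}
else if (conn->asyncStatus != PGASYNC_BUSY)
{
    /* ... unchanged: the early-return path quoted above ... */
}

With the error text stored on the connection, psql's COPY-IN loop can then stop
sending as soon as PQerrorMessage(conn) becomes non-empty.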

Best regards
Peifeng Qiu

Attachment Content-Type Size
0001-psql-stop-at-error-immediately-during-copy.patch text/x-patch 1.7 KB
