From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: Noah Misch <noah(at)leadboat(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Unhappy with error handling in psql's handleCopyOut()
Date: 2014-02-12 01:38:54
Message-ID: 5035.1392169134@sss.pgh.pa.us
Lists: pgsql-hackers
I wrote:
> Stephen Frost <sfrost(at)snowman(dot)net> writes:
>> I've not gotten back to it yet, but I ran into a related-seeming issue
>> where psql would happily chew up 2G of memory trying to send "COPY
>> failed" notices when it gets disconnected from a server that it's trying
>> to send data to mid-COPY. conn->sock was -1, connection was
>> 'CONNECTION_BAD', but the loop towards the end of handleCopyIn doesn't
>> care and nothing in libpq is changing PQresultStatus():
> After some study of the code I have a theory about this.
I was able to reproduce this misbehavior by setting a gdb breakpoint
at pqReadData and then killing the connected server process while psql's
COPY IN was stopped there. Resetting outCount to zero in the
socket-already-gone case in pqSendSome is enough to fix the problem.
However, I think it's also prudent to hack PQgetResult so that it
won't return a "copy in progress" status if the connection is known
dead.
The error recovery behavior in pqSendSome has been like this since 8.1
or thereabouts, so I'm inclined to push something like the attached into
all branches.
regards, tom lane
Attachment: copy-in-error-handling-fix.patch (text/x-diff, 3.4 KB)