From: Marko Kreen <markokr(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Daniele Varrazzo <daniele(dot)varrazzo(at)gmail(dot)com>, pgsql-docs(at)postgresql(dot)org
Subject: Re: Documentation improvement for PQgetResult
Date: 2010-08-09 17:21:22
Message-ID: AANLkTin_VJa6pPu0_AU97ks0RN24jkiF7yPjLym2_Ndn@mail.gmail.com
Lists: pgsql-docs
On 8/6/10, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Daniele Varrazzo <daniele(dot)varrazzo(at)gmail(dot)com> writes:
> > documentation about the PQgetResult function suggests to "[call]
> > repeatedly until it returns a null pointer, indicating that the
> > command is done". This has led psycopg developers to write code with
> > the pattern:
>
> > while (NULL != (res = PQgetResult(conn))) {
> > /* do something with res */
> > PQclear(res);
> > }
>
> > Marko Kreen has pointed out
> > (http://lists.initd.org/pipermail/psycopg/2010-July/007149.html) that
> > this can lead to an infinite loop if the connection goes bad at
> > just the wrong time, advising to check the connection status in
> > the middle of the loop. The libpq code actually seems to return
> > PQmakeEmptyPGresult(conn, PGRES_FATAL_ERROR) in case of something
> > unexpected.
>
> > If this is the case (I've not managed to reproduce it, but the libpq
> > code seems clear), shouldn't the function documentation warn about
> > this risk, advising to check the connection status in the middle of
> > the loop and bail out if it's bad?
>
>
> No.
>
> I'm not convinced we need to do anything about that, since AFAIR no one
> has ever reported such a failure from the field (which might well
> indicate that Marko's analysis is wrong, anyway). But if we were to do
> something about it, changing the documentation would be the wrong
> response, because that coding pattern is all over the place. It'd be
> better to kluge PQgetResult so it only returns a bad-connection error
> object once per query, or something like that.
So PQgetResult currently does not guarantee that, yes?
Because that is what I observed...
A few facts that I have:

- psycopg2 was stuck in such a loop.
- libpq [8.4.2] issued close(-1) on each PQgetResult call.
  By accident, the build was without debug symbols, so I could not
  analyze it deeply.
- There were 2 processes in such a state.
- NIC problem - it corrupted the stream after multi-GB data transfers,
  but not always. In this case, there was large COPY-ing
  going on.
- There were 5 dbs on the machine that were copying data. Some
  of them failed with a data-format error, some perhaps succeeded.
- The NIC has been changed, so it's probably not easily reproducible.
--
marko