Re: pg_dump possible fix, need testers. (was: Re: [HACKERS] pg_dump disaster)

From: Alfred Perlstein <bright(at)wintelcom(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: pg_dump possible fix, need testers. (was: Re: [HACKERS] pg_dump disaster)
Date: 2000-01-24 01:32:54
Message-ID: 20000123173254.T26520@fw.wintelcom.net
Lists: pgsql-hackers

* Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> [000123 10:19] wrote:
> >> Um, I didn't have any trouble at all reproducing Patrick's complaint.
> >> pg_dump any moderately large table (I used tenk1 from the regress
> >> database) and try to load the script with psql. Kaboom.
>
> > This is after or before my latest patch?
>
> Before. I haven't updated since yesterday...
>
> > I can't seem to reproduce this problem,
>
> Odd. Maybe there is something different about the kernel's timing of
> message sending on your platform. I see it very easily on HPUX 10.20,
> and Patrick sees it very easily on whatever he's using (netbsd I think).
> You might try varying the situation a little, say
> psql mydb <dumpfile
> psql -f dumpfile mydb
> psql mydb
> \i dumpfile
> and the same with -h localhost (to get a TCP/IP connection instead of
> Unix domain). At the moment (pre-patch) I see failures with the
> first two of these, but not with the \i method. -h doesn't seem to
> matter for me, but it might for you.

Ok, the latest patch I posted fixes that, with and without the -h
flag, at least for me.

> > Telling me something is wrong without giving suggestions on how
> > to fix it, nor direct pointers to where it fails doesn't help me
> > one bit. You're not offering constructive criticism, you're not
> > even offering valid criticism, you're just waving your finger at
> > "problems" that you say exist but don't pin down to anything specific.
>
> I have been explaining it as clearly as I could. Let's try it
> one more time.

What I needed were the above steps to validate that the problem is fixed.

> > I spent hours looking over what I did to pqFlush and pqPutnBytes
> > because of what you said earlier when all the bug seems to have
> > come down to is that I missed that the socket is set to non-blocking
> > in all cases now.
>
> Letting the socket mode default to blocking will hide the problems from
> existing clients that don't care about non-block mode. But people who
> try to actually use the nonblock mode are going to see the same kinds of
> problems that psql is exhibiting.

There is no single non-block mode: there's the old mode, and there's
the _real_ non-blocking mode that I'm trying to get working.
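
Applications have to ask for the real mode explicitly; a minimal
sketch using the PQsetnonblocking() entry point from my patch
(conn being an established PGconn *):

    /* switch an established connection into nonblocking mode */
    if (PQsetnonblocking(conn, 1) != 0)
        fprintf(stderr, "couldn't set nonblocking: %s",
                PQerrorMessage(conn));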

> > The old sequence of events that happened was as follows:
>
> > user sends data almost filling the output buffer...
> > user sends another line of text overflowing the buffer...
> > pqFlush is invoked blocking the user until the output pipe clears...
> > and repeat.
>
> Right.

You agree that it's somewhat broken to do that, right?

>
> > The nonblocking code allows sends to fail so the user can abort
> > sending stuff to the backend in order to process other work:
>
> > user sends data almost filling the output buffer...
> > user sends another line of text that may overflow the buffer...
> > pqFlush is invoked,
> > if the pipe can't be cleared an error is returned allowing the user to
> > retry the send later.
> > if the flush succeeds then more data is queued and success is returned
>
> But you haven't thought through the mechanics of the "error is returned
> allowing the user to retry" code path clearly enough. Let's take
> pqPutBytes for an example. If it returns EOF, is that a hard error or
> does it just mean that the application needs to wait a while? The
> application *must* distinguish these cases, or it will do the wrong
> thing: for example, if it mistakes a hard error for "wait a while",
> then it will wait forever without making any progress or producing
> an error report.
>
> You need to provide a different return convention that indicates
> what happened, say
> EOF (-1) => hard error (same as old code)
> 0 => OK
> 1 => no data was queued due to risk of blocking
> And you need to guarantee that the application knows what the state is
> when the can't-do-it-yet return is made; note that I specified "no data
> was queued" above. If pqPutBytes might queue some of the data before
> returning 1, the application is in trouble again. While you apparently
> foresaw that in recoding pqPutBytes, your code doesn't actually work.
> There is the minor code bug that you fail to update "avail" after the
> first pqFlush call, and the much more fundamental problem that you
> cannot guarantee to have queued all or none of the data. Think about
> what happens if the passed nbytes is larger than the output buffer size.
> You may pass the first pqFlush successfully, then get into the loop and
> get a won't-block return from pqFlush in the loop. What then?
> You can't simply refuse to support the case nbytes > bufsize at all,
> because that will cause application failures as well (too long query
> sends it into an infinite loop trying to queue data, most likely).

I don't have to think too much about the case nbytes > conn->outBufSize;
see the Caveats section for libpq at
http://www.postgresql.org/docs/programmer/libpq-chapter4142.htm:

Caveats

The query buffer is 8192 bytes long, and queries over that length
will be rejected.

If I can enforce this limit then I'm fine. Also, there's nothing
stopping me from temporarily realloc()'ing the send buffer, or
chaining another send buffer if the first fills mid-query; it would
be easy to restrict the application to only a single overcommit of
the send buffer, or to use a single circular buffer to avoid memory
copies.
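
To make that concrete, here's an untested sketch of an all-or-none
pqPutBytes using your suggested returns (outBuffer/outCount/outBufSize
are the internal PGconn fields; pqFlush is assumed to use the same
three-way convention):

    /*
     * Sketch only, not the committed code: queue all of the data
     * or none of it.  Returns -1 on hard error, 0 if everything
     * was queued, 1 if nothing was queued (retry later).
     */
    static int
    pqPutBytesSketch(const char *s, size_t nbytes, PGconn *conn)
    {
        size_t  avail = conn->outBufSize - conn->outCount;
        int     r;

        if (nbytes > (size_t) conn->outBufSize)
            return -1;          /* over the 8k limit: reject */

        if (nbytes > avail)
        {
            if ((r = pqFlush(conn)) != 0)
                return r;       /* -1 hard error, 1 would block */
            /* recompute avail after the flush (the bug you caught) */
            avail = conn->outBufSize - conn->outCount;
            if (nbytes > avail)
                return 1;       /* still no room; nothing queued */
        }
        memcpy(conn->outBuffer + conn->outCount, s, nbytes);
        conn->outCount += nbytes;
        return 0;
    }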

> A possible answer is to specify that a return of +N means "N bytes
> remain unqueued due to risk of blocking" (after having queued as much
> as you could). This would put the onus on the caller to update his
> pointers/counts properly; propagating that into all the internal uses
> of pqPutBytes would be no fun. (Of course, so far you haven't updated
> *any* of the internal callers to behave reasonably in case of a
> won't-block return; PQfn is just one example.)

No way, dude. :) I would like to get started on a
PQfnstart()/PQfnpoll() interface soon.
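
Roughly what I have in mind -- purely hypothetical declarations,
modeled on PQresetStart()/PQresetPoll(); none of this exists yet:

    /* hypothetical: PQfn split in two for nonblocking callers */
    extern int PQfnstart(PGconn *conn, int fnid,
                         int *result_buf, int *result_len,
                         int result_is_int,
                         PQArgBlock *args, int nargs);
    extern PostgresPollingStatusType PQfnpoll(PGconn *conn);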

> Another possible answer is to preserve pqPutBytes' old API, "queue or
> bust", by the expedient of enlarging the output buffer to hold whatever
> we can't send immediately. This is probably more attractive, even
> though a long query might suck up a lot of space that won't get
> reclaimed as long as the connection lives. If you don't do this then
> you are going to have to make a lot of ugly changes in the internal
> callers to deal with won't-block returns. Actually, a bulk COPY IN
> would probably be the worst case --- the app could easily load data into
> the buffer far faster than it could be sent. It might be best to extend
> PQputline to have a three-way return and add code there to limit the
> growth of the output buffer, while allowing all internal callers to
> assume that the buffer is expanded when they need it.

It's not too difficult to allow only one over-commit into the buffer
while still enforcing the 8k limit; what do you think about that?
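
Concretely, inside pqPutBytes (sketch, hypothetical fragment):

    /*
     * Single over-commit, sketch only: if the bytes don't fit,
     * grow the buffer just enough for this query -- at most one
     * enlargement -- instead of failing or blocking.
     */
    if (conn->outCount + nbytes > (size_t) conn->outBufSize)
    {
        char *nbuf = realloc(conn->outBuffer, conn->outCount + nbytes);

        if (nbuf == NULL)
            return -1;          /* out of memory: hard error */
        conn->outBuffer = nbuf;
        conn->outBufSize = conn->outCount + nbytes;
    }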

> pqFlush has the same kind of interface design problem: the same EOF code
> is returned for either a hard error or can't-flush-yet, but it would be
> disastrous to treat those cases alike. You must provide a 3-way return
> code.
>
> Furthermore, the same sort of 3-way return code convention will have to
> propagate out through anything that calls pqFlush (with corresponding
> documentation updates). pqPutBytes can be made to hide a pqFlush won't-
> block return by trying to enlarge the output buffer, but in most other
> places you won't have a choice except to punt it back to the caller.

I'm not sure about this. I thought the old pqFlush would reset the
connection if it went bad (I was surprised to see that it doesn't);
it avoids doing so in order to read the dying words from the backend.
IMO, all that's needed is a:

conn->status = CONNECTION_BAD;

before returning EOF in pqFlush().
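
i.e. the send() error path would look something like this (sketch,
where sent is the return value of send()):

    if (sent < 0)
    {
        if (errno == EWOULDBLOCK || errno == EAGAIN)
            return 1;                   /* can't flush yet; retry */
        conn->status = CONNECTION_BAD;  /* hard error: mark it dead */
        return EOF;                     /* but don't reset -- see below */
    }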

pqReadData will happily read from a pgconn marked CONNECTION_BAD, and
user applications can then call PQstatus() and reset the connection.

> PQendcopy has the same interface design problem. It used to be that
> (unless you passed a null pointer) PQendcopy would *guarantee* that
> the connection was no longer in COPY state on return --- by resetting
> it, if necessary. So the return code was mainly informative; the
> application didn't have to do anything different if PQendcopy reported
> failure. But now, a nonblocking application does need to pay attention
> to whether PQendcopy completed or not --- and you haven't provided a way
> for it to tell. If 1 is returned, the connection might still be in
> COPY state, or it might not (PQendcopy might have reset it). If the
> application doesn't distinguish these cases then it will fail.

PQstatus will allow you to determine if the connection has gone to
CONNECTION_BAD.
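
e.g. (sketch of the application side):

    /* distinguish "COPY not finished yet" from "connection died" */
    if (PQendcopy(conn) != 0)
    {
        if (PQstatus(conn) == CONNECTION_BAD)
            PQreset(conn);      /* died; reset (blocks, for now) */
        /* else: still in COPY state, call PQendcopy() again later */
    }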

> I also think that you want to take a hard look at the automatic "reset"
> behavior upon COPY failure, since a PQreset call will block the
> application until it finishes. Really, what is needed to close down a
> COPY safely in nonblock mode is a pair of entry points along the line of
> "PQendcopyStart" and "PQendcopyPoll", with API conventions similar to
> PQresetStart/PQresetPoll. This gives you the ability to do the reset
> (if one is necessary) without blocking the application. PQendcopy
> itself will only be useful to blocking applications.

I'd be willing to work on fixing this up, but currently other issues
you've mentioned seem higher on the priority list.
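
For reference, a PQendcopyStart()/PQendcopyPoll() pair would
presumably be driven the same way the existing reset entry points
are; sketch of the caller's side using the real
PQresetStart()/PQresetPoll():

    if (PQresetStart(conn))
    {
        PostgresPollingStatusType st = PGRES_POLLING_WRITING;

        while (st != PGRES_POLLING_OK && st != PGRES_POLLING_FAILED)
        {
            int     sock = PQsocket(conn);
            fd_set  fds;

            FD_ZERO(&fds);
            FD_SET(sock, &fds);
            /* wait for the direction PQresetPoll asked for */
            if (st == PGRES_POLLING_READING)
                select(sock + 1, &fds, NULL, NULL, NULL);
            else
                select(sock + 1, NULL, &fds, NULL, NULL);
            st = PQresetPoll(conn);
        }
    }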

> > I'm sorry if they don't work for some situations other than COPY IN,
> > but it's functionality that I needed and I expect to be expanded on
> > by myself and others that take interest in nonblocking operation.
>
> I don't think that the nonblock code is anywhere near production quality
> at this point. It may work for you, if you don't stress it too hard and
> never have a communications failure; but I don't want to see us ship it
> as part of Postgres unless these issues get addressed.

I'd really appreciate it if it were instead left in as undocumented
functionality until we have it completed.

Doing that allows people like myself to see that work is in progress
to provide this functionality, and possibly to contribute to polishing
it up and expanding its usefulness, instead of giving up entirely or
starting from scratch.

--
-Alfred Perlstein - [bright(at)wintelcom(dot)net|alfred(at)freebsd(dot)org]
