After finding out that libpq apparently doesn't work properly when
sending long queries ('long' meaning somewhere above 8KB), I had a
look at the sources and also found some mails in the archives where this
issue had been discussed. The problem appears to still be present in the
current CVS version.
I've worked on a fix today that works for me. It's perhaps not the best
solution but it's simple :-)
The basic strategy is to fix pqFlush and pqPutBytes.
The problem with pqFlush as it stands now is that it returns EOF both
when an error occurs and when not all data could be sent. The latter
case is clearly not an error for a non-blocking connection, but the
caller can't distinguish it from a real error.
The first part of the fix is therefore to split pqFlush: it is renamed
to pqFlushSome, which differs from pqFlush only in its return values,
allowing the caller to make the above distinction, and a new pqFlush,
implemented in terms of pqFlushSome, behaves exactly like the old
pqFlush.
The second part of the fix modifies pqPutBytes to use pqFlushSome
instead of pqFlush and to either send all the data or, if not all of it
can be sent on a non-blocking connection, to at least put it all into
the output buffer, enlarging the buffer if necessary.
I've also added a new API function, PQflushSome, which, analogously to
PQflush, just calls pqFlushSome. Programs using PQsendQuery should use
this new function. The main difference is that it has to be called
repeatedly (calling select() properly in between) until all data has
been written.
Being new to postgresql development I'm not completely sure how to
proceed from here. Is it OK if I post the patch here?
Intevation GmbH http://intevation.de/