Re: Libpq async issues

From: Alfred Perlstein <bright(at)wintelcom(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Libpq async issues
Date: 2001-01-24 18:33:42
Message-ID: 20010124103342.B26076@fw.wintelcom.net
Lists: pgsql-hackers

* Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> [010124 10:27] wrote:
> Alfred Perlstein <bright(at)wintelcom(dot)net> writes:
> > * Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us> [010124 07:58] wrote:
> >> I have added this email to TODO.detail and a mention in the TODO list.
>
> > The bug mentioned here is long gone,
>
> Au contraire, the misdesign is still there. The nonblock-mode code
> will *never* be reliable under stress until something is done about
> that, and that means fairly extensive code and API changes.

The "bug" is the one mentioned in the first paragraph of the email
where I broke _blocking_ connections for a short period.

I still need to fix async connections for myself (and of course
contribute it back), but I just haven't had the time. If anyone
else wants it fixed sooner, they can wait for me to do it, do it
themselves, contract me to do it, or hope someone else comes along
to fix it.

I'm thinking that I'll do what you said and have separate paths
for reading from and writing to the socket, plus APIs that give
the user the option of a boundary, basically:

buffer this, but don't allow me to write until it's flushed

which would allow larger-than-8k COPY rows to go into the
backend.
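
A minimal sketch (plain C, not actual or proposed libpq code) of what
such a boundary scheme could look like: callers append into a growable
output buffer, mark a boundary when a complete row has been staged, and
a separate flush step drains the buffer to a nonblocking socket. The
type and function names here (obuf, obuf_append, obuf_mark_boundary,
obuf_flush) are hypothetical, and the 8192-byte starting size only
mirrors the 8k limit mentioned above.

#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

typedef struct {
    char   *data;
    size_t  len;        /* bytes currently buffered */
    size_t  cap;        /* allocated size */
    int     boundary;   /* nonzero: refuse new appends until flushed */
} obuf;

/* Append bytes, growing the buffer as needed.  Fails (-1) if a boundary
 * is pending, i.e. the caller asked not to accept more data yet. */
static int obuf_append(obuf *b, const char *p, size_t n)
{
    if (b->boundary)
        return -1;
    if (b->len + n > b->cap) {
        size_t newcap = b->cap ? b->cap * 2 : 8192;
        while (newcap < b->len + n)
            newcap *= 2;
        char *tmp = realloc(b->data, newcap);
        if (tmp == NULL)
            return -1;
        b->data = tmp;
        b->cap = newcap;
    }
    memcpy(b->data + b->len, p, n);
    b->len += n;
    return 0;
}

/* Mark a boundary: "buffer this, but don't allow me to write until
 * it's flushed". */
static void obuf_mark_boundary(obuf *b)
{
    b->boundary = 1;
}

/* Try to drain the buffer to a nonblocking fd.  Returns 1 when empty,
 * 0 if the socket would block (call again when writable), -1 on error. */
static int obuf_flush(obuf *b, int fd)
{
    while (b->len > 0) {
        ssize_t n = write(fd, b->data, b->len);
        if (n < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return 0;
            return -1;
        }
        memmove(b->data, b->data + n, b->len - (size_t) n);
        b->len -= (size_t) n;
    }
    b->boundary = 0;    /* boundary satisfied; appends allowed again */
    return 1;
}

Since the buffer grows as needed, a single COPY row larger than the old
fixed 8k buffer can be staged in full before any of it is written, and
the nonblocking flush can be retried from a select()/poll() loop without
losing track of partially written data.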

--
-Alfred Perlstein - [bright(at)wintelcom(dot)net|alfred(at)freebsd(dot)org]
"I have the heart of a child; I keep it in a jar on my desk."
