Re: Problem with asynchronous connect in 8.0.1

From: Chad Robinson <taskswap(at)yahoo(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-interfaces(at)postgresql(dot)org
Subject: Re: Problem with asynchronous connect in 8.0.1
Date: 2005-02-08 18:36:53
Message-ID: 20050208183653.22202.qmail@web11601.mail.yahoo.com
Lists: pgsql-interfaces


--- Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:

> Chad Robinson <taskswap(at)yahoo(dot)com> writes:
> > I'm having a problem with asynchronous connections, and I can't seem to
> > find a good example of this anywhere. To make things simple, I've
> > reduced my program to the bare minimums, not even using select().
>
> I believe that the async I/O code expects you to obey the API protocol,
> in particular to wait for read-ready or write-ready where it says to.
> Offhand I would expect an error return in case of failure to wait
> long enough, though.
>
> > In my Postgres logs I see:
> > LOG: incomplete startup packet
>
> Hmm, is it possible pqFlush never succeeded in writing out the whole
> startup packet? It'd be useful to look at the state of the connection
> object with a debugger, esp. to see if anything is still pending in the
> outbound I/O buffer.

I found the problem here. A logic error in my event-handling loop was causing
me to call PQconsumeInput partway through the connect sequence, before
PQconnectPoll had finished, which is presumably why the server logged an
incomplete startup packet. It might be a good idea in the future to have all
from-server communication handled by a single polling function, but in the
end it was still my fault.
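
For the archives, here's roughly the shape of the connect loop I should have
been using all along. This is a minimal sketch with error handling trimmed,
and drive_connect() is my own name, not a libpq function:

#include <sys/select.h>
#include <libpq-fe.h>

/* Sketch: drive an asynchronous connect to completion, waiting only
 * for the readiness condition PQconnectPoll asks for, and touching
 * nothing else on the connection in the meantime. */
static int drive_connect(PGconn *conn)
{
    /* After PQconnectStart, behave as if the last poll said "writing". */
    PostgresPollingStatusType st = PGRES_POLLING_WRITING;

    while (st != PGRES_POLLING_OK && st != PGRES_POLLING_FAILED)
    {
        int    sock = PQsocket(conn);
        fd_set rfds, wfds;

        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        if (st == PGRES_POLLING_READING)
            FD_SET(sock, &rfds);
        else
            FD_SET(sock, &wfds);

        /* Block until the socket is ready in the direction requested. */
        if (select(sock + 1, &rfds, &wfds, NULL, NULL) < 0)
            return -1;

        st = PQconnectPoll(conn);   /* only now poll the connection again */
    }
    return (st == PGRES_POLLING_OK) ? 0 : -1;
}

You kick things off with PQconnectStart(), check that PQstatus() isn't
CONNECTION_BAD, and only then hand the connection to something like the
above; nothing else reads from the socket until PQconnectPoll() reports
PGRES_POLLING_OK.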

I'm not sure I can use Postgres, though. I'm having a terrible time getting
large numbers of clients connected; I need upwards of 8000 clients attached
at any one time. I can do this on MySQL without problems, but that DB has
limitations I was hoping to escape with PostgreSQL, chief among them the lack
of asynchronous query support. For a neat client-side performance trick,
check out combining libpq with libepoll, which is what I'm doing.
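
Here's a rough sketch of what I mean, collapsed to a single connection for
brevity; run_query_async() and the variable names are mine:

#include <sys/epoll.h>
#include <libpq-fe.h>

/* Sketch: fire the query without blocking, then let epoll wake us
 * when the socket has data, instead of blocking in PQexec. */
static int run_query_async(int epfd, PGconn *conn, const char *sql)
{
    struct epoll_event ev;
    PGresult *res;

    if (!PQsendQuery(conn, sql))        /* queue the query, return at once */
        return -1;

    ev.events = EPOLLIN;
    ev.data.ptr = conn;
    epoll_ctl(epfd, EPOLL_CTL_ADD, PQsocket(conn), &ev);

    /* A real event loop waits on all registered sockets here; this is
     * reduced to one connection to show the mechanics. */
    while (1)
    {
        struct epoll_event got;

        if (epoll_wait(epfd, &got, 1, -1) < 0)
            return -1;
        if (!PQconsumeInput(conn))      /* read whatever has arrived */
            return -1;
        if (!PQisBusy(conn))            /* a complete result is buffered */
            break;
    }

    while ((res = PQgetResult(conn)) != NULL)
        PQclear(res);                   /* real code would use the rows */

    epoll_ctl(epfd, EPOLL_CTL_DEL, PQsocket(conn), &ev);
    return 0;
}

With epoll the wait scales to thousands of sockets, so one thread can keep
all 8000 clients' queries in flight at once.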

When I connect the clients directly to the server, I have no real problems. A
few clients will occasionally get EAGAIN errors during the SSL startup, but I
wrote them to retry after a few seconds and the second time around they get
in - probably a backlog issue. But memory usage on the server is ABYSMAL -
1200 clients use almost a GB of RAM, no matter how I tune things. I'm not
trying to be cheap, but that seems a little excessive. I shouldn't have to
deploy 8GB of RAM in this server just to connect my clients, especially since
they'll all be doing pretty basic queries (a few tables, maybe even no
joins). Heck, the data itself is probably in the 1-2GB range!
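
The retry logic I mentioned above is nothing fancy, roughly this (my names
again; drive_connect() is the helper from earlier in this message, and the
3-second backoff is arbitrary):

#include <unistd.h>
#include <libpq-fe.h>

/* Sketch: retry a failed async connect a few times before giving up,
 * on the theory that the server's listen backlog clears quickly. */
static PGconn *connect_with_retry(const char *conninfo, int attempts)
{
    while (attempts-- > 0)
    {
        PGconn *conn = PQconnectStart(conninfo);

        if (conn != NULL && PQstatus(conn) != CONNECTION_BAD
            && drive_connect(conn) == 0)
            return conn;                /* connected */

        if (conn != NULL)
            PQfinish(conn);             /* discard the failed attempt */
        sleep(3);                       /* let the backlog drain */
    }
    return NULL;
}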

I tried using pgpool, and that seems promising, but I can't get more than a
few hundred connections per front-end before my clients start losing
connections (failures seem to start at around 190). It doesn't appear to be a
tuning-parameter issue, but I'm still exploring that and will take it up on
the pgpool list.

Thanks for your reply.

-Chad


