
Re: Incremental results from libpq

From: "Guy Rouillier" <guyr(at)masergy(dot)com>
To: <pgsql-interfaces(at)postgresql(dot)org>
Subject: Re: Incremental results from libpq
Date: 2005-11-16 20:35:39
Lists: pgsql-interfaces
Peter Eisentraut wrote:
> I'm at LinuxWorld Frankfurt and one of the Trolltech guys came over
> to talk to me about this.  He opined that it would be beneficial for
> their purpose (in certain cases) if the server would first compute
> the entire result set and keep it in the server memory (thus
> eliminating potential errors of the 1/x kind) and then ship it to the
> client in a way that the client would be able to fetch it piecewise. 
> Then, the client application could build the display incrementally
> while the rest of the result set travels over the (slow) link. Does
> that make sense? 
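The mechanics of the proposal above can be illustrated with a small sketch. This is purely an illustration of the proposed flow, not libpq's actual API: a hypothetical server object materializes the whole result set up front, and the client then fetches it in chunks, building its display incrementally. The memory cost is visible in the sketch: the server holds every row from before the first fetch until the client is done.

```python
# Illustration of the proposed flow: the server computes the entire
# result set first, then ships it to the client piecewise.  All names
# here are hypothetical stand-ins, not libpq API.

class MaterializingServer:
    """Computes the entire result before the first row is shipped."""
    def __init__(self, rows):
        self.buffer = list(rows)      # whole result held in server memory

    def fetch(self, start, count):
        """Ship one slice of the buffered result to the client."""
        return self.buffer[start:start + count]

def client_fetch_all(server, chunk_size=100):
    """Client side: build the display incrementally, chunk by chunk."""
    start = 0
    while True:
        chunk = server.fetch(start, chunk_size)
        if not chunk:
            break
        yield from chunk              # e.g. append these rows to the display
        start += len(chunk)

server = MaterializingServer(range(250))
rows = list(client_fetch_all(server, chunk_size=100))
print(len(rows))  # 250 -- but the server buffered all 250 rows throughout
```

With 6 million rows instead of 250, the `buffer` line is exactly the server-side cost Guy objects to below.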

No.  How would you handle the 6-million row result set?  You want the
server to cache that?  Remember, the server authors have no way to
predict client code efficiency.  What if a poorly written client
retrieves just 10 of those rows and decides it doesn't want any more,
but doesn't free up the server connection?  The server will be stuck
holding those 6 million rows in memory for a long time.  And readily
available techniques exist for the client to handle this: have one
thread reading rows from the DB, and a second thread drawing the display
as the rows arrive.
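The two-thread approach can be sketched as a simple producer/consumer pair. This is a minimal illustration, not libpq code: `fake_db_rows` is a hypothetical stand-in for a real driver's row stream, and the "draw" step is just an append.

```python
import queue
import threading

def fake_db_rows(n):
    """Hypothetical stand-in for rows streaming off a DB connection."""
    for i in range(n):
        yield ("row", i)

SENTINEL = object()

def reader(rowsource, q):
    """DB thread: push each row onto the queue as soon as it is read."""
    for row in rowsource:
        q.put(row)
    q.put(SENTINEL)               # signal end of result set

def drawer(q, drawn):
    """UI thread: consume rows and update the display incrementally."""
    while True:
        row = q.get()
        if row is SENTINEL:
            break
        drawn.append(row)         # stand-in for a real draw call

q = queue.Queue(maxsize=64)       # bounded: reader blocks if UI falls behind
drawn = []
t1 = threading.Thread(target=reader, args=(fake_db_rows(1000), q))
t2 = threading.Thread(target=drawer, args=(q, drawn))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(drawn))  # 1000
```

The bounded queue also gives natural backpressure: if the drawing thread falls behind, the reading thread simply blocks instead of buffering the whole result set anywhere.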

Guy Rouillier

