From: "Guy Rouillier" <guyr(at)masergy(dot)com>
To: <pgsql-interfaces(at)postgresql(dot)org>
Subject: Re: Incremental results from libpq
Date: 2005-11-16 20:35:39
Message-ID: CC1CF380F4D70844B01D45982E671B239E8C95@mtxexch01.add0.masergy.com
Lists: pgsql-interfaces
Peter Eisentraut wrote:
> I'm at LinuxWorld Frankfurt and one of the Trolltech guys came over
> to talk to me about this. He opined that it would be beneficial for
> their purpose (in certain cases) if the server would first compute
> the entire result set and keep it in the server memory (thus
> eliminating potential errors of the 1/x kind) and then ship it to the
> client in a way that the client would be able to fetch it piecewise.
> Then, the client application could build the display incrementally
> while the rest of the result set travels over the (slow) link. Does
> that make sense?
No. How would you handle the 6-million row result set? You want the
server to cache that? Remember, the server authors have no way to
predict client code efficiency. What if a poorly written client
retrieves just 10 of those rows and decides it doesn't want any more,
but doesn't free up the server connection? The server will be stuck
holding those 6 million rows in memory for a long time. And readily
available techniques exist for the client to handle this: have one
thread reading rows from the DB and a second thread drawing the
display.
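The two-thread technique described above can be sketched as a simple
producer/consumer pair. This is an illustrative sketch in Python rather
than a real libpq client: fetch_rows and draw_display are hypothetical
names, and the producer loop just generates fake rows where a real
client would be pulling them from the server (e.g. with FETCH against a
cursor):

```python
import queue
import threading
import time

SENTINEL = object()  # marks the end of the result set


def fetch_rows(out_q, n_rows):
    """Producer: stands in for the DB-reading thread. In a real client
    this loop would retrieve rows from the server instead of
    generating fake ones."""
    for i in range(n_rows):
        time.sleep(0.001)       # simulate a slow network link
        out_q.put(("row", i))
    out_q.put(SENTINEL)         # signal completion to the consumer


def draw_display(in_q):
    """Consumer: the 'display' thread renders rows as they arrive,
    without waiting for the whole result set."""
    drawn = []
    while True:
        item = in_q.get()
        if item is SENTINEL:
            break
        drawn.append(item)      # in a GUI this would update the view
    return drawn


# Bounded queue: if the display falls behind, the reader blocks,
# so the client never buffers the whole result set either.
q = queue.Queue(maxsize=64)
reader = threading.Thread(target=fetch_rows, args=(q, 100))
reader.start()
rows = draw_display(q)
reader.join()
```

The bounded queue is the point: back-pressure keeps memory use constant
on the client, just as streaming rows keeps it constant on the server.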
--
Guy Rouillier