"Bryan White" <bryan(at)arcamax(dot)com> writes:
> It is my understanding that when a query is issued the backend runs the
> query and accumulates the results in memory and when it completes it
> transmits the entire result set to the front end.
No, the backend does not accumulate the result; it transmits tuples to
the frontend on-the-fly. The current implementation of frontend libpq
does buffer the result rows on the frontend side, because it presents a
random-access-into-the-query-result API to the client application.
(There's been talk of offering an alternative API that eliminates the
buffering and the random-access option, but nothing's been done yet.)
> I have studied the documentation and found Cursors and Asynchronous Query
> Processing. Cursors seem to solve the problem on the front end, but I get
> the impression the back end will buffer the entire result until the cursor
> is closed.
A cursor should solve the problem just fine. If you can put your finger
on what part of the documentation misled you, maybe we can improve it.
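For concreteness, a cursor lets the client pull the result in batches, so at most one batch is buffered in libpq at any time. This is a sketch assuming an already-open `PGconn *conn`; the table name `bigtable` and the batch size of 1000 are hypothetical.

```c
#include <stdio.h>
#include <libpq-fe.h>

static void scan_big_table(PGconn *conn)
{
    PGresult *res;

    /* Cursors must live inside a transaction block. */
    res = PQexec(conn, "BEGIN");
    PQclear(res);

    /* "bigtable" is a placeholder for the real query. */
    res = PQexec(conn, "DECLARE c CURSOR FOR SELECT * FROM bigtable");
    PQclear(res);

    for (;;)
    {
        /* Only up to 1000 rows are buffered per round trip. */
        res = PQexec(conn, "FETCH 1000 FROM c");
        int n = PQntuples(res);
        for (int i = 0; i < n; i++)
            printf("%s\n", PQgetvalue(res, i, 0));
        PQclear(res);
        if (n < 1000)
            break;          /* fewer rows than requested: no more data */
    }

    res = PQexec(conn, "CLOSE c");
    PQclear(res);
    res = PQexec(conn, "COMMIT");
    PQclear(res);
}
```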
> Asynchronous Query Processing, as I understand it, is more about not blocking
> the client during the query; it does not fundamentally alter the result
> buffering on either end.
Correct, it just lets a single-threaded client continue to do other
stuff while waiting for the (whole) result to arrive.
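To illustrate: with the asynchronous API the client polls instead of blocking, but the result is still collected in full before `PQgetResult` hands it over, so memory usage is the same as with `PQexec`. A sketch, again assuming an open `PGconn *conn` and a hypothetical `bigtable` (a production client would `select()` on `PQsocket(conn)` rather than spin):

```c
#include <stdio.h>
#include <libpq-fe.h>

static void run_async(PGconn *conn)
{
    /* Dispatch the query without blocking; "bigtable" is hypothetical. */
    if (!PQsendQuery(conn, "SELECT * FROM bigtable"))
        return;

    /* While the query is in flight, the client is free to do other
     * work between polls. */
    while (PQisBusy(conn))
    {
        /* ... other application work could go here ... */
        PQconsumeInput(conn);   /* absorb any data the server has sent */
    }

    /* The result delivered here is just as fully buffered as one
     * returned by PQexec. */
    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL)
    {
        printf("%d rows\n", PQntuples(res));
        PQclear(res);
    }
}
```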
regards, tom lane