From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Bryan White" <bryan(at)arcamax(dot)com>
Subject: Re: [INTERFACES] Managing the memory requierments of large query results
"Bryan White" <bryan(at)arcamax(dot)com> writes:
> It is my understanding that when a query is issued the backend runs the
> query and accumulates the results in memory and when it completes it
> transmits the entire result set to the front end.
No, the backend does not accumulate the result; it transmits tuples to
the frontend on-the-fly. The current implementation of frontend libpq
does buffer the result rows on the frontend side, because it presents a
random-access-into-the-query-result API to the client application.
(There's been talk of offering an alternative API that eliminates the
buffering and the random-access option, but nothing's been done yet.)
> I have studied the documentation and found Cursors and Asynchronous Query
> Processing. Cursors seem to solve the problem on the front end, but I get
> the impression the back end will buffer the entire result until the cursor
> is closed.
A cursor should solve the problem just fine. If you can put your finger
on what part of the documentation misled you, maybe we can improve it.
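For reference, a cursor-based fetch loop in libpq might look like the sketch below (connection string and table name are assumptions). Because each FETCH retrieves only a batch, client memory stays bounded, and the backend streams rows from the open cursor rather than materializing the whole result:

```c
/* Sketch: bounding client memory with a cursor and batched FETCHes.
 * The connection string and the table "bigtable" are hypothetical. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    /* Cursors must be used inside a transaction block. */
    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE c CURSOR FOR SELECT id, name FROM bigtable"));

    for (;;) {
        /* At most 100 rows are buffered on the client at a time. */
        PGresult *res = PQexec(conn, "FETCH 100 FROM c");
        int n = (PQresultStatus(res) == PGRES_TUPLES_OK) ? PQntuples(res) : 0;
        for (int i = 0; i < n; i++)
            printf("%s\n", PQgetvalue(res, i, 1));
        PQclear(res);
        if (n < 100)        /* fewer rows than requested: cursor exhausted */
            break;
    }

    PQclear(PQexec(conn, "CLOSE c"));
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}
```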
> Asynchronous Query Processing, as I understand it, is more about not blocking
> the client during the query, and it does not fundamentally alter the result
> buffering on either end.
Correct, it just lets a single-threaded client continue to do other
stuff while waiting for the (whole) result to arrive.
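A sketch of that asynchronous pattern (connection string and table name again hypothetical): the client dispatches the query with PQsendQuery and polls with PQisBusy/PQconsumeInput, but the PGresult it eventually collects is still fully buffered, just as with a blocking PQexec:

```c
/* Sketch of libpq's asynchronous API: the client stays responsive
 * while waiting, but the finished result is still fully buffered.
 * The connection string and table "bigtable" are hypothetical. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    /* Dispatch the query without blocking. */
    if (!PQsendQuery(conn, "SELECT id, name FROM bigtable"))
        return 1;

    /* Poll for completion; a real client would select() on
     * PQsocket(conn) and do other work in the meantime. */
    while (PQisBusy(conn)) {
        if (!PQconsumeInput(conn))   /* absorb whatever has arrived */
            return 1;
    }

    /* The entire result set has now accumulated client-side,
     * exactly as it would have with a plain PQexec call. */
    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL) {
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            printf("%d rows buffered\n", PQntuples(res));
        PQclear(res);
    }
    PQfinish(conn);
    return 0;
}
```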
regards, tom lane