
Managing the memory requirements of large query results

From: "Bryan White" <bryan(at)arcamax(dot)com>
To: <pgsql-interfaces(at)postgreSQL(dot)org>
Subject: Managing the memory requirements of large query results
Date: 2000-02-16 22:23:02
Message-ID: 005b01bf78cc$634cc3e0$2dd260d1@arcamax.com
Lists: pgsql-interfaces
It is my understanding that when a query is issued, the backend runs the
query and accumulates the results in memory; when it completes, it
transmits the entire result set to the front end.

For selects with large result sets this creates large demands for memory,
first in the back end and then in the front end.

Is there a mechanism to avoid this?  In particular, I am looking for a
mechanism where the backend generates rows to fill a relatively small buffer
and blocks while waiting for the front end to drain that buffer.  In this
interface the front end would only need to present one row at a time to the
application.  I understand that there might be limitations on the kind or
complexity of a query that uses this mode of operation.

I have studied the documentation and found Cursors and Asynchronous Query
Processing.  Cursors seem to solve the problem on the front end, but I get
the impression the back end will buffer the entire result until the cursor
is closed.
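For concreteness, the cursor flow under discussion looks something like this in SQL (big_cur and big_table are made-up names); the front end fetches a batch at a time rather than receiving the whole result set at once:

```sql
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
FETCH 100 FROM big_cur;   -- first 100 rows
FETCH 100 FROM big_cur;   -- next 100, and so on until no rows remain
CLOSE big_cur;
COMMIT;
```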

Asynchronous Query Processing, as I understand it, is more about not
blocking the client during the query; it does not fundamentally alter the
result buffering on either end.


