Re: Limits on PostgreSQL

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Marcos Barreto de Castro <mbdecastro(at)yahoo(dot)com>
Cc: pgsql-general(at)hub(dot)org
Subject: Re: Limits on PostgreSQL
Date: 2000-05-25 23:01:16
Message-ID: 536.959295676@sss.pgh.pa.us
Lists: pgsql-general

Marcos Barreto de Castro <mbdecastro(at)yahoo(dot)com> writes:
> 1 - How many tuples can be returned as a result of a
> query using a CURSOR? (Is it possible to do a SELECT
> * on a table that has 2 million records, OPEN a
> CURSOR for that SELECT, and show all records' contents
> using FETCH FORWARD, for example?)

You probably wouldn't want to fetch all 2 million rows in one FETCH,
because you'd risk running out of memory on the client side. But as
long as you grab a reasonable number of rows per FETCH command, it works
fine. This is in fact the recommended method for dealing with extremely
large SELECT results.
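
As a minimal sketch of that pattern in psql (the table and cursor
names here are invented for illustration):

    BEGIN;
    DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
    FETCH FORWARD 1000 FROM big_cur;  -- returns the next 1000 rows
    -- ... repeat the FETCH until it returns zero rows ...
    CLOSE big_cur;
    COMMIT;

Each FETCH ships only the requested batch to the client, so client
memory use stays bounded however large the full result set is.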

> 2 - When one uses a CURSOR for a SELECT, is there a
> big memory consumption, or is there a memory buffer
> limit beyond which the result is written to a file
> (virtual memory) and read from there when needed?

Cursors work just the same as plain evaluation of the query would,
except that the query execution is suspended after having fetched the
requested number of tuples. There isn't any special memory requirement
on the backend side.

Some types of queries need temporary files (an explicit sort of a large
volume of data is an example). But that'll be just the same whether you
run them via a cursor or a plain SELECT.
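
For instance (hypothetical table and column names), the sort below
does the same work on the backend either way, and a large enough
sort may spill to temporary files in both cases:

    -- plain SELECT: sorts the whole table, possibly via temp files
    SELECT * FROM big_table ORDER BY created_at;

    -- cursor over the same query: identical sort work happens
    -- before the first FETCH can return any rows
    DECLARE sorted_cur CURSOR FOR
        SELECT * FROM big_table ORDER BY created_at;
    FETCH FORWARD 100 FROM sorted_cur;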

regards, tom lane
