
Re: BUG #1756: PQexec eats huge amounts of memory

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: <vda(at)ilport(dot)com(dot)ua>, <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <alvherre(at)alvh(dot)no-ip(dot)org>, <pierce(at)hogranch(dot)com>, <pgsql-bugs(at)postgresql(dot)org>, <neilc(at)samurai(dot)com>
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-18 20:18:04
Message-ID: s2dbc838.077@gwmta.wicourts.gov
Lists: pgsql-bugs
Are you saying that if I execute a statement like:
 
select * from "TableWithHundredsOfMillionsOfRows"
 
that the entire table will be copied to a result set before returning
the first row?  Is this result set built in RAM on the server side?
 
I am interested primarily in the behavior under JDBC, although psql is
also significant.  If streaming results to the client as they are
obtained is considered to be something which can only be done by a
cursor (an interpretation I haven't seen in other products), is there
any way to get a SELECT statement to be treated as a cursor without
explicitly issuing DECLARE CURSOR and FETCH statements?  (For example,
some other products take an invocation of the JDBC
Statement.setCursorName method as a clue to use a cursor for a SELECT
using that Statement object, while this method is a no-op for
PostgreSQL's JDBC driver.)
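(For reference, the PostgreSQL JDBC driver can stream rows through a server-side cursor when autocommit is off and a nonzero fetch size is set on the Statement. A minimal sketch, with placeholder connection details and the table name from above:)

```java
import java.sql.*;

public class StreamRows {
    public static void main(String[] args) throws Exception {
        // Placeholder URL/credentials; substitute your own server details.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");

        // Cursor-based fetching requires autocommit to be off...
        conn.setAutoCommit(false);

        Statement st = conn.createStatement();
        // ...and a nonzero fetch size; the driver then fetches rows
        // from a server-side cursor in chunks of this many at a time.
        st.setFetchSize(1000);

        ResultSet rs = st.executeQuery(
                "select * from \"TableWithHundredsOfMillionsOfRows\"");
        while (rs.next()) {
            // Process one row at a time; the whole result set is never
            // materialized in client memory.
        }
        rs.close();
        st.close();
        conn.commit();
        conn.close();
    }
}
```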
 
I am fixing some portability flaws in software which has been using
Sybase so that it will also support PostgreSQL.  Under Sybase, results
are always returned as soon as possible (and it is therefore entirely
possible to receive some rows and then receive an exception).  Since our
software needs to be portable enough to support Sybase's optimistic
"streaming" technique, there is no particular benefit for us in the
strategy used by PostgreSQL in this regard, and it may cause problems
with a few of the larger result sets.  I suspect that overall
performance for most queries will be about the same either way, as
PostgreSQL will reduce concurrency on a single request, but is likely to
make up for it by reducing task switching and blocking.
 
Thanks for any info.
 
-Kevin
 
 
>>> Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> 07/13/05 9:43 AM >>>
Denis Vlasenko <vda(at)ilport(dot)com(dot)ua> writes:
> Consider my posts in this thread as user wish to
> * libpq and network protocol to be changed to allow for incremental
>   reads of executed queries and for multiple outstanding result sets,
>   or, if above thing looks unsurmountable at the moment,
> * libpq-only change as to allow incremental reads of single
>   outstanding result set. Attempt to use pg_numrows, etc, or attempt
>   to execute another query forces libpq to read and store all
>   remaining rows in client's memory (i.e. current behaviour).

This isn't going to happen because it would be a fundamental change in
libpq's behavior and would undoubtedly break a lot of applications.
The reason it cannot be done transparently is that you would lose the
guarantee that a query either succeeds or fails: it would be entirely
possible to return some rows to the application and only later get a
failure.

You can have this behavior today, though, as long as you are willing to
work a little harder at it --- just declare some cursors and then FETCH
in convenient chunks from the cursors.

			regards, tom lane
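(The cursor approach Tom describes looks like this in SQL; the cursor name, chunk size, and table are illustrative:)

```sql
BEGIN;

-- Declare a cursor instead of running the SELECT directly;
-- no result rows are shipped to the client yet.
DECLARE bigscan CURSOR FOR
    SELECT * FROM "TableWithHundredsOfMillionsOfRows";

-- Pull rows in convenient chunks; repeat until FETCH returns no rows.
FETCH 1000 FROM bigscan;
FETCH 1000 FROM bigscan;

CLOSE bigscan;
COMMIT;
```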

