| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
| --- | --- |
| To: | dougt(at)mugc(dot)cc(dot)monash(dot)edu(dot)au |
| Cc: | pgsql-interfaces(at)postgreSQL(dot)org |
| Subject: | Re: [INTERFACES] Front end memory consumption in SELECT |
| Date: | 1999-11-20 06:26:35 |
| Message-ID: | 6505.943079195@sss.pgh.pa.us |
| Lists: | pgsql-interfaces |

Douglas Thomson <dougt(at)mugc(dot)cc(dot)monash(dot)edu(dot)au> writes:
> However, judging by the memory consumption of my front-end process,
> it would seem that the SELECT is loading the entire table into memory
> before I even fetch the first row! Can anyone confirm that this is in
> fact what goes on?
libpq handles SELECTs that way: the whole result set is accumulated in
client memory before PQexec() even returns. You should consider DECLARE
CURSOR and FETCH if you need to retrieve a large query result in chunks.
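For concreteness, here is a minimal sketch of the cursor approach (the
connection string, table name "bigtable", and chunk size are all
placeholders, and error handling is abbreviated):

```c
/* Minimal sketch: chunked retrieval via DECLARE CURSOR / FETCH. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");   /* hypothetical DSN */
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Cursors only exist inside a transaction block. */
    res = PQexec(conn, "BEGIN");
    PQclear(res);
    res = PQexec(conn, "DECLARE myportal CURSOR FOR SELECT * FROM bigtable");
    PQclear(res);

    for (;;)
    {
        int ntuples, i;

        /* Pull back only 1000 rows at a time. */
        res = PQexec(conn, "FETCH 1000 FROM myportal");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        ntuples = PQntuples(res);
        if (ntuples == 0)              /* cursor exhausted */
        {
            PQclear(res);
            break;
        }
        for (i = 0; i < ntuples; i++)
            printf("%s\n", PQgetvalue(res, i, 0));  /* first column only */
        PQclear(res);
    }

    res = PQexec(conn, "CLOSE myportal");
    PQclear(res);
    res = PQexec(conn, "END");
    PQclear(res);
    PQfinish(conn);
    return 0;
}
```

Each FETCH transfers only the requested rows, so the client's memory use
is bounded by the chunk size instead of by the full result.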
It was probably bad design for libpq to offer random access to query
results --- I'm sure there are few applications that really care,
and the memory-consumption problem is a very real one for many apps.
But I see no way to fix it without fundamental changes to libpq's API,
and it's not clear it's worth that kind of pain. Maybe there will
be a deliberately-incompatible libpq Mark II someday ... or maybe we'll
switch to a whole new client interface like CORBA.
> If so, is there any way to avoid it? The obvious solution would seem
> to be to use LIMIT and OFFSET to get just a few thousand rows at a
> time, but will that suffer from a time overhead while the backend
> skips over millions of rows to get to the ones it needs??
Yes. OFFSET doesn't let the backend seek directly to the rows you want;
it still generates and discards everything before the offset on every
query, so each successive chunk costs more than the last. See the CURSOR
stuff instead.
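If you did go the LIMIT/OFFSET route anyway, it would look something like
this hypothetical helper (again, "bigtable" and "id" are made up); the
comment spells out where the time goes:

```c
/* Hypothetical LIMIT/OFFSET pagination.  Each call re-executes the
 * SELECT, and the backend must produce and throw away all 'offset'
 * rows before returning any --- so the cost of a page grows with its
 * offset, and walking an N-row table in fixed-size pages does about
 * N^2 / (2 * pagesize) rows of wasted work in total. */
#include <stdio.h>
#include <libpq-fe.h>

static int fetch_page(PGconn *conn, long limit, long offset)
{
    char      query[256];
    PGresult *res;
    int       ntuples;

    /* ORDER BY is needed to make the pages deterministic. */
    snprintf(query, sizeof(query),
             "SELECT * FROM bigtable ORDER BY id LIMIT %ld OFFSET %ld",
             limit, offset);
    res = PQexec(conn, query);
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return -1;
    }
    ntuples = PQntuples(res);
    /* ... process the rows here ... */
    PQclear(res);
    return ntuples;            /* 0 means no more rows */
}
```

A cursor, by contrast, keeps its position in the scan between FETCHes,
so the total work is proportional to the rows actually read.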
regards, tom lane