I have a large table that I need to traverse in full. I currently
start with a simple unrestricted SELECT and then fetch the rows
one at a time. I thought that fetching just one row at a time
would not consume any significant amount of memory.
However, judging by the memory consumption of my front-end process,
it would seem that the SELECT is loading the entire table into memory
before I even fetch the first row! Can anyone confirm that this is in
fact what goes on?
If so, is there any way to avoid it? The obvious solution would seem
to be to use LIMIT and OFFSET to get just a few thousand rows at a
time, but will that incur a time overhead while the backend skips
over millions of rows to reach the ones it needs?
Thanks for any clues anyone can provide!
P.S. If it matters, I am using the Perl interface. I am also running in
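To be concrete, the LIMIT/OFFSET scheme I have in mind would look
something like the following (the table and column names here are just
placeholders, and I've added an ORDER BY, since without one the rows in
each batch are not guaranteed to be stable between queries):

```sql
-- Fetch the table in batches of 5000 rows.
SELECT * FROM mytable ORDER BY id LIMIT 5000 OFFSET 0;
SELECT * FROM mytable ORDER BY id LIMIT 5000 OFFSET 5000;
SELECT * FROM mytable ORDER BY id LIMIT 5000 OFFSET 10000;
-- ...and so on, stopping when a batch returns fewer than 5000 rows.
```

My worry is that each successive query has to skip over an ever-larger
prefix of the table before it can return anything.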