
Large selects handled inefficiently?

From: Jules Bean <jules(at)jellybean(dot)co(dot)uk>
To: pgsql-general(at)postgresql(dot)org
Subject: Large selects handled inefficiently?
Date: 2000-08-30 11:30:40
Lists: pgsql-general

I am running a very large SELECT - it selects every row from a
~10,000,000 row table. I'm running this in Perl using DBD::Pg, with the
general structure:

$sth = $dbh->prepare("SELECT $fields FROM $from") || return 0;
$fh  = new FileHandle(">$file") || die "Can't open $file : $!";
$sth->execute() || return 0;
$sth->bind_columns(\(@fields));   # bind each output column to @fields
while ($sth->fetch) {
    print $fh join("\t", @fields), "\n";
}
if ($sth->err) { return 0; }
close $fh;

...which works fine in most circumstances, but on this really big query:

* the client process's memory gradually climbs to around 1 gigabyte
* eventually the stdout of the process shows over and over again:

Backend sent D message without prior T

It seems as if the client is fetching the /whole/ query result, or
trying to, all at once.  I expected it to only actually fetch result
rows as I called fetch().

Is this:

* A deficiency in DBD::Pg?
* A deficiency in the postgresql client libraries?
* A silly bug in my code?

I believe I can work around this problem using cursors (although I
don't know how well DBD::Pg copes with cursors).  However, that
doesn't seem right -- cursors shouldn't be needed just to fetch a large
query without having it all in memory at once...
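For what it's worth, the cursor workaround mentioned above might look
something like the sketch below (untested; it assumes the same $dbh,
$fields, $from and $file variables as in the snippet earlier, and that
the client-side libpq really does buffer the entire result of a plain
SELECT). Cursors only exist inside a transaction, so AutoCommit has to
be off:

```perl
# Cursor-based fetch sketch for DBD::Pg: pull the result in batches of
# 1000 rows instead of letting the client buffer all ~10,000,000 rows.
# Assumes $dbh, $fields, $from and $file are set up as in the original
# snippet; the cursor name "csr" is arbitrary.
$dbh->{AutoCommit} = 0;   # cursors require an open transaction

$dbh->do("DECLARE csr CURSOR FOR SELECT $fields FROM $from") || return 0;

$fh = new FileHandle(">$file") || die "Can't open $file : $!";

while (1) {
    # Each FETCH transfers only the next 1000 rows to the client.
    my $sth = $dbh->prepare("FETCH 1000 FROM csr") || return 0;
    $sth->execute() || return 0;
    last if $sth->rows == 0;   # cursor exhausted
    while (my @row = $sth->fetchrow_array) {
        print $fh join("\t", @row), "\n";
    }
}

$dbh->do("CLOSE csr");
$dbh->commit;
close $fh;
```

Memory use on the client should then stay bounded by the batch size
rather than by the size of the whole result set.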



