Re: Critical performance problems on large databases

From: Lincoln Yeoh <lyeoh(at)pop(dot)jaring(dot)my>
To: Gunther Schadow <gunther(at)aurora(dot)regenstrief(dot)org>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Critical performance problems on large databases
Date: 2002-04-12 14:49:17
Message-ID: 5.1.0.14.1.20020412223513.0265f020@192.228.128.13
Lists: pgsql-general

At 11:09 AM 4/11/02 -0500, Gunther Schadow wrote:
>There was one remark about Perl or PHP always loading the complete
>result set before returning. Bad for them. I don't use either and
>I think it's just bad design to do that on the client but I don't
>care about bad clients. I care about a good server.

>The constructive responses suggested that I use LIMIT/OFFSET and
>CURSORs. I can see how that could be a workaround the problem, but
>I still believe that something is wrong with the PostgreSQL query
>executor. Loading the entire result set into a buffer without
>need just makes no sense. Good database engines try to provide

AFAIK you can turn off buffering of the whole result set with Perl DBI/DBD.

Buffering allows programmers to easily use each row of a query to make
other queries, even if the DB doesn't support cursors.

e.g.

select ID from table1;
for each row returned {
    select ID2 from table2 where x = ID;
    for each row returned {
        ....
    }
}
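As a concrete sketch of the pattern above, here is the same nested loop in Python, using the standard library's sqlite3 module as a stand-in for PostgreSQL (table and column names mirror the pseudocode and are illustrative only). Each query gets its own cursor, so the inner query can run while the outer result set is still being iterated:

```python
import sqlite3

# In-memory database with the hypothetical schema from the pseudocode.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (ID INTEGER);
    CREATE TABLE table2 (x INTEGER, ID2 INTEGER);
    INSERT INTO table1 VALUES (1), (2);
    INSERT INTO table2 VALUES (1, 10), (1, 11), (2, 20);
""")

# Two separate cursors on one connection: the outer result set is
# still being iterated while the inner query executes per row.
outer = conn.cursor()
inner = conn.cursor()

results = []
for (id1,) in outer.execute("SELECT ID FROM table1 ORDER BY ID"):
    for (id2,) in inner.execute(
        "SELECT ID2 FROM table2 WHERE x = ? ORDER BY ID2", (id1,)
    ):
        results.append((id1, id2))

conn.close()
```

With a driver that buffers the outer result set client-side, the same loop works even over a single connection; without buffering or cursors, the outer rows would have to be fetched and stored before the inner queries run.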

Without buffering or cursors, you will have to open another connection, or
store all the rows somewhere and then run the subsequent queries against the
stored rows.
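For reference, the cursor route mentioned earlier in the thread might look roughly like this in PostgreSQL (the cursor name and query are illustrative; a cursor must be opened inside a transaction):

```sql
BEGIN;
DECLARE c CURSOR FOR SELECT ID FROM table1;
FETCH 100 FROM c;  -- pull the next 100 rows; repeat until no rows remain
CLOSE c;
COMMIT;
```

This lets the client walk through a large result set incrementally instead of having the whole thing materialized at once.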

Cheerio,
Link.
