Re: C libpq frontend library fetchsize

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Yeb Havinga <yebhavinga(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: C libpq frontend library fetchsize
Date: 2010-03-18 17:00:01
Message-ID: 13404.1268931601@sss.pgh.pa.us
Lists: pgsql-hackers

Yeb Havinga <yebhavinga(at)gmail(dot)com> writes:
> What if the default operation of e.g. php using libpq were as
> follows: set some default fetchsize (e.g. 1000 rows), then just issue
> getrow. In the php pg handling, a function like getnextrow would wait
> for the first pgresult with 1000 rows. Then, once that pgresult is
> depleted or almost depleted, it would request the next pgresult
> automatically. I see a lot of benefits: lower memory requirements in
> libpq, fewer new users asking "why is my query so slow before the
> first row?", and almost no concerns.

You are blithely ignoring the reasons why libpq doesn't do this. The
main one is that it's impossible to cope sanely with queries that
fail partway through execution. The described implementation would not
cope tremendously well with nonsequential access to the result set, either.

regards, tom lane
