Re: [HACKERS] libpq

From: Chris Bitmead <chrisb(at)nimrod(dot)itg(dot)telstra(dot)com(dot)au>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: chris(at)bitmead(dot)com, Postgres Hackers List <hackers(at)postgreSQL(dot)org>
Subject: Re: [HACKERS] libpq
Date: 2000-02-11 06:36:19
Message-ID: 38A3ADE3.AAE1FC7D@nimrod.itg.telecom.com.au
Lists: pgsql-hackers

Tom Lane wrote:

> Well, that's true from one point of view, but I think it's just libpq's
> point of view. The application programmer is fairly likely to have
> specific knowledge of the size of tuple he's fetching, and maybe even
> to have a global perspective that lets him decide he doesn't really
> *want* to deal with retrieved tuples on a packet-by-packet basis.
> Maybe waiting till he's got 100K of data is just right for his app.
>
> But I can also believe that the app programmer doesn't want to commit to
> a particular tuple size any more than libpq does. Do you have a better
> proposal for an API that doesn't commit any decisions about how many
> tuples to fetch at once?

If you think applications may want to keep 100K of data buffered, isn't
that an argument for the PGobject interface rather than the PGresult
interface?

I'm trying to think of a situation where you'd want to buffer data.
Let's say psql has something like "more" built in, and it needs to
buffer a screenful and step forward line by line. You want to keep the
last 40 tuples buffered: first up you want 40 tuples, then one more
each time you press Enter. Something like the sketch below.
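
To make that concrete, here's a rough sketch of how the pager could
look. PQnextObject() and PQclearObject() are only illustrative names
for the kind of one-tuple-at-a-time interface I have in mind; nothing
like them exists in libpq today:

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    #define SCREENFUL 40

    /* Illustrative interface only -- not in libpq.  PQnextObject()
     * returns the next tuple (NULL when the set is exhausted), and
     * PQclearObject() frees one tuple when the caller chooses to. */
    typedef struct PGobject PGobject;
    extern PGobject *PQnextObject(PGconn *conn);
    extern void      PQclearObject(PGobject *obj);
    extern void      display_tuple(const PGobject *obj); /* app-supplied */

    void pager(PGconn *conn)
    {
        PGobject *screen[SCREENFUL];
        int       n = 0;

        /* First up: fetch and display one screenful. */
        while (n < SCREENFUL && (screen[n] = PQnextObject(conn)) != NULL)
            display_tuple(screen[n++]);

        /* Then one tuple per Enter, keeping only the last 40 buffered. */
        while (n == SCREENFUL && getchar() == '\n')
        {
            PGobject *next = PQnextObject(conn);

            if (next == NULL)
                break;
            PQclearObject(screen[0]);      /* our decision, not libpq's */
            memmove(screen, screen + 1,
                    (SCREENFUL - 1) * sizeof(screen[0]));
            screen[SCREENFUL - 1] = next;
            display_tuple(next);
        }

        while (n > 0)                      /* release what's still held */
            PQclearObject(screen[--n]);
    }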

This seems like too much responsibility to push onto libpq, but if the
user has control over the destruction of PGobjects they can buffer what
they want, how they want, when they want.
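
And the same primitive covers your 100K case just as easily, because
the threshold is the application's choice rather than libpq's. Using
the same made-up calls as above, plus a hypothetical PQobjectSize()
that reports a tuple's size in bytes:

    #include <stddef.h>

    extern size_t PQobjectSize(const PGobject *obj);     /* made up */
    extern void   process_batch(PGobject **objs, int n); /* app-supplied */

    void batch_until_100k(PGconn *conn)
    {
        PGobject *buf[4096];
        size_t    bytes = 0;
        int       n = 0;
        PGobject *obj;

        while (n < 4096 && (obj = PQnextObject(conn)) != NULL)
        {
            buf[n++] = obj;
            bytes += PQobjectSize(obj);
            if (bytes >= 100 * 1024)   /* the app's threshold, not libpq's */
            {
                process_batch(buf, n);
                while (n > 0)
                    PQclearObject(buf[--n]);
                bytes = 0;
            }
        }
        if (n > 0)                     /* leftover partial batch */
        {
            process_batch(buf, n);
            while (n > 0)
                PQclearObject(buf[--n]);
        }
    }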
