Re: Protocol 3, Execute, maxrows to return, impact?

From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Stephen R(dot) van den Berg" <srb(at)cuci(dot)nl>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Protocol 3, Execute, maxrows to return, impact?
Date: 2008-07-10 14:30:42
Message-ID: 48761D12.9000202@dunslane.net
Lists: pgsql-hackers

Tom Lane wrote:
> "Stephen R. van den Berg" <srb(at)cuci(dot)nl> writes:
>
>> Then, from a client perspective, there is no use at all, because the
>> client can actually pause reading the results at any time it wants,
>> when it wants to avoid storing all of the result rows. The network
>> will perform the cursor/fetch facility for it.
>>
>
> [ shrug... ] In principle you could write a client library that would
> act that way, but I think you'll find that none of the extant ones
> will hand back an incomplete query result to the application.
>
> A possibly more convincing argument is that with that approach, the
> connection is completely tied up --- you cannot issue additional
> database commands based on what you just read, nor pull rows from
> multiple portals in an interleaved fashion.
>
>
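[For context: the interleaved pulling Tom describes is already expressible at the SQL level with multiple cursors in one transaction, which is what Execute's per-portal row limit mirrors at the protocol level. A minimal sketch — table and cursor names are illustrative only:

```sql
BEGIN;
DECLARE c1 CURSOR FOR SELECT * FROM orders;
DECLARE c2 CURSOR FOR SELECT * FROM customers;
FETCH 10 FROM c1;   -- pull a batch from the first result set
FETCH 10 FROM c2;   -- interleave a batch from the second
FETCH 10 FROM c1;   -- then return to the first
COMMIT;
```

Each FETCH here corresponds to an Execute message with a row limit against a named portal; the point under discussion is that no client library exposes a way to drive that directly.]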

I really think we need to get something like this into libpq. It's on my
TODO list after notification payloads and libpq support for arrays and
composites. We'll need to come up with an API before we do much else.

cheers

andrew
