From: Frank van Vugt <ftm(dot)van(dot)vugt(at)foxi(dot)nl>
To: pgsql-interfaces(at)postgresql(dot)org
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: Incremental results from libpq
Date: 2005-11-10 16:51:54
Message-ID: 200511101751.55377.ftm.van.vugt@foxi.nl
Lists: pgsql-interfaces
> > The main reason why libpq does what it does is that this way we do not
> > have to expose in the API the notion of a command that fails part way
> > through. If you support partial result fetching then you'll have to
> > deal with the idea that a SELECT could fail after you've already
> > returned some rows to the client.
I'm wondering, what kind of failure do you have in mind here? If I'm informed
correctly, Oracle and others generate the complete static result set on the
server side, where it stays cached until all rows/chunks have been fetched.
The one failure that comes to mind in this scenario is that the connection
breaks down, but since informing the client would then be a bit difficult,
you'll certainly be referring to something else ;)
If PostgreSQL were to build the complete result set before handing over the
first fetched rows/chunks, then I understand. Is that the case? Or something
else even...?
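For context, the usual way to get incremental results with libpq today is a
server-side cursor: DECLARE the query inside a transaction and FETCH it in
chunks, so each chunk is a small, complete PGresult. The sketch below is a
minimal illustration of that pattern, not code from this thread; the empty
conninfo (connection parameters taken from the environment) and the table name
"mytable" are placeholders. It also shows the failure mode Tom describes: if
the query fails partway through, the error surfaces on a later FETCH, after
earlier chunks were already delivered to the application.

```c
/* Sketch: incremental fetching via a server-side cursor with libpq.
 * Assumes a reachable server (PGHOST/PGDATABASE etc. in the environment)
 * and a hypothetical table "mytable"; error handling is abbreviated. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("");  /* conninfo from environment */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    PGresult *res = PQexec(conn, "BEGIN");
    PQclear(res);
    res = PQexec(conn, "DECLARE c CURSOR FOR SELECT * FROM mytable");
    PQclear(res);

    /* Fetch in chunks of 1000 rows.  A mid-query failure shows up as a
     * failed FETCH here, even though earlier chunks were already consumed
     * -- exactly the partial-failure case discussed above. */
    for (;;) {
        res = PQexec(conn, "FETCH 1000 FROM c");
        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        int n = PQntuples(res);
        if (n == 0) {            /* cursor exhausted */
            PQclear(res);
            break;
        }
        for (int i = 0; i < n; i++)
            printf("%s\n", PQgetvalue(res, i, 0));
        PQclear(res);
    }

    res = PQexec(conn, "CLOSE c");
    PQclear(res);
    res = PQexec(conn, "COMMIT");
    PQclear(res);
    PQfinish(conn);
    return 0;
}
```

The trade-off is one extra round trip per chunk; the memory benefit is that
neither the server nor libpq ever materializes more than one chunk at a time
on the client side.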
--
Best,
Frank.
Next message: Tom Lane | 2005-11-10 17:03:12 | Re: Incremental results from libpq
Previous message: Frank van Vugt | 2005-11-10 09:11:45 | Re: Incremental results from libpq