Re: Incremental results from libpq

From: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
To: "Goulet, Dick" <DGoulet(at)vicr(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Peter Eisentraut <peter_e(at)gmx(dot)net>, pgsql-interfaces(at)postgresql(dot)org, Scott Lamb <slamb(at)slamb(dot)org>
Subject: Re: Incremental results from libpq
Date: 2005-11-16 20:13:17
Message-ID: 200511162013.jAGKDHY22079@candle.pha.pa.us
Lists: pgsql-interfaces

Goulet, Dick wrote:
> Bruce,
>
> If I may, one item that would be of extreme use to us here would
> be global temporary tables. These have existed since Oracle 9.0. They
> are defined once and then used by clients as needed. Each session sees
> only its own data, and once you disconnect, the session's data
> disappears. Truly a real temporary table.

How is it better than what we have now?
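For comparison, a sketch of the two models (the `scratch` table is made up for illustration; the GLOBAL TEMPORARY syntax shown in comments is Oracle's, not PostgreSQL's):

```sql
-- PostgreSQL today: every session must issue its own CREATE before use
CREATE TEMPORARY TABLE scratch (id integer, val text)
    ON COMMIT DELETE ROWS;  -- rows vanish at commit; table at disconnect

-- Oracle-style global temporary table: defined once, then usable by any
-- session, each session seeing only its own rows (Oracle syntax):
-- CREATE GLOBAL TEMPORARY TABLE scratch (id NUMBER, val VARCHAR2(100))
--     ON COMMIT DELETE ROWS;
```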

---------------------------------------------------------------------------

>
> -----Original Message-----
> From: Bruce Momjian [mailto:pgman(at)candle(dot)pha(dot)pa(dot)us]
> Sent: Wednesday, November 16, 2005 11:33 AM
> To: Goulet, Dick
> Cc: Tom Lane; Peter Eisentraut; pgsql-interfaces(at)postgresql(dot)org; Scott
> Lamb
> Subject: Re: [INTERFACES] Incremental results from libpq
>
>
> Added to TODO:
>
> o Allow query results to be automatically batched to the client
>
> Currently, all query results are transferred to the libpq
> client before libpq makes the results available to the
> application. This feature would allow the application to make
> use of the first result rows while the rest are transferred or
> held on the server until libpq requests them.
> One complexity is that a query like SELECT 1/col could error
> out mid-way through the result set.
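That failure mode can be seen with a sketch like this (table and data are made up for illustration):

```sql
CREATE TABLE t (col integer);
INSERT INTO t VALUES (1), (2), (0), (4);

SELECT 1/col FROM t;
-- With batched delivery, the client could already have received the
-- rows for col = 1 and col = 2 before the division by zero aborts the
-- query, so the API would have to expose a command failing part way through.
```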
>
>
> ---------------------------------------------------------------------------
>
> Goulet, Dick wrote:
> > Tom,
> >
> > Your case for not supporting this is reasonable, at least to me.
> > Personally I believe you should take one side or the other at the
> > server level and then allow the app developer to use it as appropriate,
> > so no argument here. But there was a change in behavior introduced by
> > Oracle in 10g that supports what was asked for by Trolltech. The
> > optimizer was given the "smarts" to determine whether your query is
> > best served by a regular cursor or by a bulk collect in the background.
> > The end result is that the application behaves as normal, but the
> > results get back to it faster. What appears to be happening is that
> > the database returns the first row as normal, but then continues
> > collecting data rows and sequestering them off somewhere, probably the
> > temp tablespace, until you're ready for them. This appears to have
> > driven the final nail into the coffin of the old "ORA-01555 Snapshot
> > too old" error. Of course, since PostgreSQL doesn't have undo segments,
> > you don't have that problem.
> >
> > -----Original Message-----
> > From: pgsql-interfaces-owner(at)postgresql(dot)org
> > [mailto:pgsql-interfaces-owner(at)postgresql(dot)org] On Behalf Of Tom Lane
> > Sent: Wednesday, November 16, 2005 9:24 AM
> > To: Peter Eisentraut
> > Cc: pgsql-interfaces(at)postgresql(dot)org; Scott Lamb
> > Subject: Re: [INTERFACES] Incremental results from libpq
> >
> > Peter Eisentraut <peter_e(at)gmx(dot)net> writes:
> > > On Wednesday, 9 November 2005 22:22, Tom Lane wrote:
> > >> The main reason why libpq does what it does is that this way we do not
> > >> have to expose in the API the notion of a command that fails part way
> > >> through.
> >
> > > I'm at LinuxWorld Frankfurt and one of the Trolltech guys came over to
> > > talk to me about this. He opined that it would be beneficial for their
> > > purposes (in certain cases) if the server would first compute the
> > > entire result set and keep it in server memory (thus eliminating
> > > potential errors of the 1/x kind) and then ship it to the client in a
> > > way that the client would be able to fetch it piecewise. Then, the
> > > client application could build the display incrementally while the
> > > rest of the result set travels over the (slow) link. Does that make
> > > sense?
> >
> > Ick. That seems pretty horrid compared to the straight
> > incremental-compute-and-fetch approach. Yes, it preserves the illusion
> > that a SELECT is all-or-nothing, but at a very high cost, both in terms
> > of absolute runtime and in terms of needing a new concept in the
> > frontend protocol. It also doesn't solve the problem for people who
> > need incremental fetch because they have a result set so large they
> > don't want it materialized on either end of the wire. Furthermore, ISTM
> > that any client app that's engaging in incremental fetches really has
> > to deal with the failure-after-part-of-the-query-is-done problem
> > anyway, because there's always a risk of failures on the client side or
> > in the network connection. So I don't see any real gain in conceptual
> > simplicity from adding this feature anyway.
> >
> > Note that if Trolltech really want this behavior, they can have it
> > today --- it's called CREATE TEMP TABLE AS SELECT. It doesn't seem
> > attractive enough to me to justify any further feature than that.
> >
> > regards, tom lane
> >
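A sketch of the workaround Tom mentions, combining CREATE TEMP TABLE AS SELECT with a cursor for piecewise fetching (table and cursor names are illustrative):

```sql
BEGIN;
-- Materialize the whole result first; a 1/x-style error surfaces here,
-- before the client has consumed any rows.
CREATE TEMP TABLE r AS SELECT 1/col AS v FROM t;

-- Then fetch the materialized result piecewise over the (slow) link.
DECLARE c CURSOR FOR SELECT v FROM r;
FETCH 100 FROM c;   -- repeat until FETCH returns no rows
CLOSE c;
COMMIT;
```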
> > ---------------------------(end of broadcast)---------------------------
> > TIP 9: In versions below 8.0, the planner will ignore your desire to
> >        choose an index scan if your joining column's datatypes do not
> >        match
> >
> > ---------------------------(end of broadcast)---------------------------
> > TIP 1: if posting/reading through Usenet, please send an appropriate
> >        subscribe-nomail command to majordomo(at)postgresql(dot)org so
> >        that your message can get through to the mailing list cleanly
> >
>
> --
> Bruce Momjian | http://candle.pha.pa.us
> pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 359-1001
> + If your life is a hard drive, | 13 Roberts Road
> + Christ can be your backup. | Newtown Square, Pennsylvania 19073
>
> ---------------------------(end of broadcast)---------------------------
> TIP 3: Have you checked our extensive FAQ?
>
> http://www.postgresql.org/docs/faq
>

--
Bruce Momjian | http://candle.pha.pa.us
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
