
Re: Incremental results from libpq

From: "Goulet, Dick" <DGoulet(at)vicr(dot)com>
To: "Bruce Momjian" <pgman(at)candle(dot)pha(dot)pa(dot)us>
Cc: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>,"Peter Eisentraut" <peter_e(at)gmx(dot)net>,<pgsql-interfaces(at)postgresql(dot)org>,"Scott Lamb" <slamb(at)slamb(dot)org>
Subject: Re: Incremental results from libpq
Date: 2005-11-16 19:22:04
Message-ID:
Lists: pgsql-interfaces

	If I may, one item that would be of extreme use to our site
would be global temporary tables.  These have existed in Oracle since
version 9.0.  They are defined once and then used by clients as needed.
Each session is ignorant of any other session's data, and once you
disconnect, the session's data disappears.  Truly a real temporary table.
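For readers unfamiliar with the feature, a minimal sketch of the two styles; the table and column names are made up for illustration:

```sql
-- Oracle: the definition is created once and shared by all sessions;
-- each session sees only its own rows, which vanish at disconnect.
CREATE GLOBAL TEMPORARY TABLE session_scratch (
    id   NUMBER,
    note VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;

-- PostgreSQL today: every session must issue its own CREATE, and the
-- table definition itself disappears with the session.
CREATE TEMPORARY TABLE session_scratch (
    id   integer,
    note text
) ON COMMIT PRESERVE ROWS;
```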

-----Original Message-----
From: Bruce Momjian [mailto:pgman(at)candle(dot)pha(dot)pa(dot)us] 
Sent: Wednesday, November 16, 2005 11:33 AM
To: Goulet, Dick
Cc: Tom Lane; Peter Eisentraut; pgsql-interfaces(at)postgresql(dot)org; Scott
Subject: Re: [INTERFACES] Incremental results from libpq

Added to TODO:

        o Allow query results to be automatically batched to the client

          Currently, all query results are transferred to the libpq
          client before libpq makes the results available to the
          application.  This feature would allow the application to make
          use of the first result rows while the rest are transferred, or
          held on the server waiting for them to be requested by libpq.
          One complexity is that a query like SELECT 1/col could error
          out mid-way through the result set.
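As a point of comparison, the incremental behavior can already be approximated today with an explicit cursor; this sketch reuses the 1/col query from the TODO item against a hypothetical table t:

```sql
BEGIN;
DECLARE batch_cur CURSOR FOR SELECT 1/col FROM t;
FETCH 100 FROM batch_cur;  -- application can use these rows immediately
FETCH 100 FROM batch_cur;  -- a division-by-zero error may surface only here
CLOSE batch_cur;
COMMIT;
```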


Goulet, Dick wrote:
> Tom,
> 	Your case for not supporting this is reasonable, at least to me.
> Personally I believe you should take one side or the other at the API
> level and then allow the app developer to use it as appropriate, so no
> argument here.  But, there was a change in behavior introduced by Oracle
> in 10g that supports what was asked for by Trolltech.  The optimizer
> provides the "smarts" to determine whether your query is best served by a
> regular cursor or whether a bulk collect in the background would be
> better.  The end result is that the application behaves as normal, but
> the results get back to it faster.  What appears to be happening is that
> the database returns the first row as normal, but then keeps collecting
> data rows and sequestering them off somewhere, probably the temp
> tablespace, until you're ready for them.  This appears to have driven the
> final coffin nail into the old "ORA-01555 Snapshot too old" error.  And
> since PostgreSQL doesn't have undo segments, you don't have that problem.
> -----Original Message-----
> From: pgsql-interfaces-owner(at)postgresql(dot)org
> [mailto:pgsql-interfaces-owner(at)postgresql(dot)org] On Behalf Of Tom Lane
> Sent: Wednesday, November 16, 2005 9:24 AM
> To: Peter Eisentraut
> Cc: pgsql-interfaces(at)postgresql(dot)org; Scott Lamb
> Subject: Re: [INTERFACES] Incremental results from libpq 
> Peter Eisentraut <peter_e(at)gmx(dot)net> writes:
> > On Wednesday, 9 November 2005 22:22, Tom Lane wrote:
> >> The main reason why libpq does what it does is that this way we do not
> >> have to expose in the API the notion of a command that fails partway
> >> through.
> > I'm at LinuxWorld Frankfurt and one of the Trolltech guys came over to
> > talk to me about this.  He opined that it would be beneficial for their
> > purposes (in certain cases) if the server would first compute the
> > entire result set and keep it in server memory (thus eliminating
> > potential errors of the 1/x kind) and then ship it to the client in a
> > way that the client would be able to fetch it piecewise.  Then, the
> > client application could build the display incrementally while the rest
> > of the result set travels over the link.
> > Does that make sense?
> Ick.  That seems pretty horrid compared to the straight
> incremental-compute-and-fetch approach.  Yes, it preserves the property
> that a SELECT is all-or-nothing, but at a very high cost, both in terms
> of absolute runtime and in terms of needing a new concept in the
> frontend protocol.  It also doesn't solve the problem for people who
> need incremental fetch because they have a result set so large they
> don't want it materialized on either end of the wire.  Furthermore, note
> that any client app that's engaging in incremental fetches really has to
> deal with the failure-after-part-of-the-query-is-done problem anyway,
> because there's always a risk of failures on the client side or in the
> network connection.  So I don't see any real gain in conceptual
> simplicity from adding this feature anyway.
> Note that if Trolltech really want this behavior, they can have it
> today --- it's called CREATE TEMP TABLE AS SELECT.  It doesn't seem
> attractive enough to me to justify any further feature than that.
> 			regards, tom lane
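Tom's CREATE TEMP TABLE AS SELECT suggestion amounts to materializing the whole result first, so that later fetches cannot hit a computation error.  A hedged sketch, again using the hypothetical 1/col query:

```sql
BEGIN;
-- Any division-by-zero error happens here, before the client sees rows.
CREATE TEMP TABLE materialized AS SELECT 1/col AS r FROM t;
DECLARE cur CURSOR FOR SELECT r FROM materialized;
FETCH 50 FROM cur;   -- piecewise retrieval, now free of computation errors
CLOSE cur;
COMMIT;
```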

  Bruce Momjian                        |
  pgman(at)candle(dot)pha(dot)pa(dot)us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania


