Re: Large selects handled inefficiently?

From: Chris <chrisb(at)nimrod(dot)itg(dot)telstra(dot)com(dot)au>
To: Jules Bean <jules(at)jellybean(dot)co(dot)uk>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Large selects handled inefficiently?
Date: 2000-08-31 04:28:14
Message-ID: 39ADDEDE.3BFB4EB7@nimrod.itg.telecom.com.au
Lists: pgsql-general

Jules Bean wrote:
>
> On Thu, Aug 31, 2000 at 12:22:36AM +1000, Andrew Snow wrote:
> >
> > > I believe I can work around this problem using cursors (although I
> > > don't know how well DBD::Pg copes with cursors). However, that
> > > doesn't seem right -- cursors shouldn't be needed to fetch a large query
> > > without having it all in memory at once...
> >
> > Actually, I think that's why cursors were invented in the first place ;-) A
> > cursor is what you are using if you're not fetching all the results of a
> > query.
>
> I really can't agree with you there.
>
> A cursor is another slightly foolish SQL hack.

Not quite, but it is true that this is a flaw in postgres. Implementing a
"streaming" interface has been discussed on hackers from time to time.
The idea is that the client doesn't have to absorb the whole result set
before it can access any of it: it can start processing rows as they
become available, blocking in the client until more arrive. The main
changes would be to the libpq client library, but there are other issues
to address, such as what happens if an error occurs half way through the
result set. In short, I'm sure this will be fixed at some stage, but for
now cursors are the only real answer.
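For reference, the cursor workaround looks something like the following
in plain SQL (table and cursor names are made up for illustration). Each
FETCH pulls only one batch of rows into client memory, so the client can
process a large result piecemeal instead of absorbing it all at once:

```sql
-- Cursors must live inside a transaction block.
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
FETCH 1000 FROM big_cur;  -- process this batch of rows
FETCH 1000 FROM big_cur;  -- repeat until FETCH returns no rows
CLOSE big_cur;
COMMIT;
```

From DBD::Pg you would issue the same DECLARE/FETCH statements through
prepare/execute and loop until a FETCH returns zero rows.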
