Re: Large selects handled inefficiently?

From: Jules Bean <jules(at)jellybean(dot)co(dot)uk>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Large selects handled inefficiently?
Date: 2000-08-31 10:06:37
Message-ID: 20000831110637.E24680@grommit.office.vi.net
Lists: pgsql-general

On Thu, Aug 31, 2000 at 09:58:34AM +0100, Jules Bean wrote:
> On Thu, Aug 31, 2000 at 03:28:14PM +1100, Chris wrote:
>
> > but it is true that this is a flaw in postgres. Implementing a
> > "streaming" interface has been discussed on hackers from time to
> > time. This means that the client doesn't absorb all the results
> > before allowing access to them: you can start processing results
> > as and when they become available by blocking in the client. The
> > main changes would be to the libpq client library, but there would
> > also be other issues to address, like what happens if an error
> > occurs halfway through. In short, I'm sure this will be fixed at
> > some stage, but for now cursors are the only real answer.
>
> Or ...LIMIT...OFFSET, I guess. [As long as I remember to set the
> transaction isolation to serializable. *sigh* Why isn't that the
> default?]
>
> I shall investigate whether LIMIT...OFFSET or cursors seems to be
> better for my application.

OK, I'm using cursors (after having checked that they work with
DBD::Pg!). I'm a little confused about transaction isolation levels,
though. I'm setting the level to 'serializable' --- this seems
important, since other INSERTs might occur during my SELECT. However,
the documentation for DECLARE CURSOR suggests that the 'INSENSITIVE'
keyword is effectively a no-op, which seems to me to be equivalent to
saying that the transaction isolation level is always SERIALIZABLE?
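In case it's useful to anyone else on the list, the cursor version I
ended up with looks something like this (names invented; the FETCH
batch size is arbitrary):

```sql
-- Cursors must be declared inside a transaction block.
BEGIN;
DECLARE bigcur CURSOR FOR SELECT id, payload FROM big_table;
FETCH 1000 FROM bigcur;   -- repeat until a FETCH returns no rows
CLOSE bigcur;
COMMIT;
```

From DBD::Pg you issue the same statements with $dbh->do() and fetch
each FETCH batch as an ordinary result set.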

Jules
