Re: [HACKERS] Single row fetch from backend

From: Theo Kramer <theo(at)flame(dot)co(dot)za>
To: hackers(at)postgresql(dot)org
Subject: Re: [HACKERS] Single row fetch from backend
Date: 1999-08-13 15:33:28
Message-ID: 37B43AC8.EBD9A21D@flame.co.za
Lists: pgsql-hackers

Tom Lane wrote:
> Not unless you can precalculate the number of rows you want and use
> LIMIT. I recommend a cursor ;-).
>
> There has been some talk of modifying libpq so that rows could be handed
> back to the application a few at a time, rather than accumulating the
> whole result before PQgetResult lets you have any of it. That wouldn't
> allow you to abort the SELECT early, mind you, but when you're dealing
> with a really big result it would avoid waste of memory space inside the
> client app. I'm not sure if that would address your problem or not.
>
> If you really want the ability to stop the fetch from the backend at
> any random point, a cursor is the only way to do it. I suppose libpq
> might try to offer some syntactic sugar that would make a cursor
> slightly easier to use, but it'd still be a cursor as far as the backend
> and the FE/BE protocol were concerned. ecpg is probably a better answer
> if you want syntactic sugar...

Hmmm, I've had pretty bad experiences with cursors on Informix Online. When
many clients use cursors on large result sets, the system (even on big iron)
grinds to a halt. Luckily, Informix lets you fetch a single row at a time on a
normal select, so that solved the problem. It does appear, however, that
Postgres does not add huge overhead for cursors, but I would still like to
see what happens when many clients open cursors at the same time...
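For reference, the cursor approach Tom describes looks roughly like this
from libpq. This is just a minimal sketch: the cursor name "c" and the
catalog query are illustrative, and real code would check the results of
BEGIN/DECLARE/CLOSE too instead of discarding them.

  #include <stdio.h>
  #include <stdlib.h>
  #include <libpq-fe.h>

  int main(void)
  {
      PGconn *conn = PQconnectdb("dbname=test");
      if (PQstatus(conn) != CONNECTION_OK) {
          fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
          exit(1);
      }

      /* Cursors only exist inside a transaction block. */
      PQclear(PQexec(conn, "BEGIN"));
      PQclear(PQexec(conn, "DECLARE c CURSOR FOR "
                           "SELECT relname FROM pg_class"));

      /* Pull one row per round trip; bail out whenever we like. */
      for (;;) {
          PGresult *res = PQexec(conn, "FETCH 1 FROM c");
          if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0) {
              PQclear(res);
              break;    /* no more rows, or an error */
          }
          printf("%s\n", PQgetvalue(res, 0, 0));
          PQclear(res);
      }

      PQclear(PQexec(conn, "CLOSE c"));
      PQclear(PQexec(conn, "COMMIT"));
      PQfinish(conn);
      return 0;
  }

The price, of course, is a round trip per FETCH; fetching a batch (say,
FETCH 100) amortises that while still bounding client-side memory.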
--------
Regards
Theo
