Re: libpq - lack of support to set the fetch size

From: matshyeq <matshyeq(at)gmail(dot)com>
To: Marko Kreen <markokr(at)gmail(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>, "Daniel Verite *EXTERN*" <daniel(at)manitou-mail(dot)org>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: libpq - lack of support to set the fetch size
Date: 2014-03-12 10:57:03
Message-ID: CAONr5=urpsJfSa_1HUAEqTqP0D9HLESS0yMTNACQQiz16M04Ew@mail.gmail.com
Lists: pgsql-general

On Wed, Mar 12, 2014 at 9:30 AM, Marko Kreen <markokr(at)gmail(dot)com> wrote:

> On Tue, Mar 11, 2014 at 12:39:12PM +0000, matshyeq wrote:
> > - when using PQsetSingleRowMode() function - does it give an option to
> > define how many rows to cache on client's side (like JDBC setFetchSize()
> > does) or leaves it at pqlib's discretion?
>
> This option would not make sense as you are not "fetching" anything,
> full resultset is being streamed from server over TCP connection.

Well, I'm not sure what "streamed" means exactly here.
If the server pushes all the data sequentially without asking whether the
client is ready to receive it, then that is exactly the issue.
If the client asks the server for another chunk each time it has received
the previous one, then to me that is an implicit 'fetching' scenario in
which the user/developer has no option to define the fetch size.
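For reference, the libpq feature under discussion can be sketched roughly
like this; a minimal single-row-mode loop, assuming an already-open
PGconn *conn and a hypothetical big_table, with error handling trimmed:

```c
/* Sketch of libpq single-row mode: the server still sends the whole
 * result set; the client merely surfaces it one row at a time.
 * Assumes `conn` is an open PGconn; error handling omitted. */
#include <stdio.h>
#include <libpq-fe.h>

static void stream_rows(PGconn *conn)
{
    PQsendQuery(conn, "SELECT id, name FROM big_table");
    PQsetSingleRowMode(conn);   /* must be called right after the send */

    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL) {
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
            printf("%s | %s\n",
                   PQgetvalue(res, 0, 0), PQgetvalue(res, 0, 1));
        /* the final PGRES_TUPLES_OK result carries zero rows */
        PQclear(res);
    }
}
```

Note there is no knob here to say "give me N rows and pause" - the rows
keep arriving on the connection regardless - which is precisely the gap
being discussed.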

> > - is it/would it be possible to add corresponding option to pgAdmin to
> > limit initially (and each subsequently) returned rows in Query Tool by
> > custom defined max value?
>
> It could close connection in the middle of resultset but that seems like
> bad idea. LIMIT N or FETCH N are better for such task.

I don't see why not. I can't think of a single SQL tool I've worked with
that didn't have this functionality, really.
The principle is very simple and useful: there is a defined "fetch row
size" parameter (each tool gives it its own name), and after submitting
ANY query the client fetches result set rows, but no more than that.
Some programs even set this value automatically, based on the size of the
result grid displayed on the screen.
The user then usually has two buttons, fetch another batch/screen or fetch
all - he decides.
If he decides far too late (coffee break) then he simply resubmits the
query (potentially changing the parameter first)...

I see no value in auto-fetching millions of rows just to present them to
the user on the screen.
Nor do I find it particularly useful to have to know and apply
database-specific SQL syntax to limit the rows.
If you join multiple tables it may be even trickier (which table do you
apply the limit to? or do you use a subquery instead?).
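The cursor-based workaround Marko alludes to (FETCH N) can be sketched in
libpq like this; the batch size is explicit, but it requires issuing extra
SQL rather than setting a client-side option. The cursor name, big_table,
and the batch size of 100 are illustrative, and error handling is omitted:

```c
/* Sketch: explicit batched fetching via a server-side cursor.
 * Assumes `conn` is an open PGconn; DECLARE CURSOR must run inside
 * a transaction, hence the BEGIN/COMMIT. Error handling omitted. */
#include <stdio.h>
#include <libpq-fe.h>

static void fetch_in_batches(PGconn *conn)
{
    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE c CURSOR FOR SELECT id FROM big_table"));

    for (;;) {
        PGresult *res = PQexec(conn, "FETCH 100 FROM c"); /* one batch */
        int rows = PQntuples(res);
        for (int i = 0; i < rows; i++)
            printf("%s\n", PQgetvalue(res, i, 0));
        PQclear(res);
        if (rows == 0)          /* cursor exhausted */
            break;
    }
    PQclear(PQexec(conn, "CLOSE c"));
    PQclear(PQexec(conn, "COMMIT"));
}
```

This pauses between batches as desired, but the application has to manage
the cursor and the transaction itself instead of just setting a fetch size.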

Last but not least, I can even see that this option was once available in
pgAdmin, but disappeared over time:
http://www.pgadmin.org/docs/1.10/query.html

"In the options dialog <http://www.pgadmin.org/docs/1.10/options-tab3.html>,
you can specify a default limit for the rowset size to retrieve. By
default, this value will be 100. If the number of rows to retrieve from the
server exceeds this value, a message box will appear asking what to do to
prevent retrieval of an unexpected high amount of data. You may decide to
retrieve just the first rows, as configured with the max rows setting, or
retrieving the complete rowset regardless of the setting, or abort the
query, effectively retrieving zero rows."

This is pretty much exactly what I'm talking about here (the pgAdmin
functionality) and its equivalent for developers (libpq).

Regards
Msciwoj
