From: Thomas Heller <info(at)zilence(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-interfaces(at)postgresql(dot)org
Subject: Re: Protocol Question
Date: 2014-08-13 00:09:40
Message-ID: CAGTxmOv3V3wa_yQO+RfXdXXkOw5qQ5gvHoiMhaL_8RG0kyyr8A@mail.gmail.com
Lists: pgsql-interfaces
I tried sending P/D/S...B/E/S...E/S... but the second Execute fails because
the portal no longer exists.
So I need a BEGIN/COMMIT for a SELECT in order to read in multiple steps?
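For reference, here's a rough Python sketch of the message sequence I have in mind, framed the way I read the protocol docs. The statement/portal names and "big_table" are just placeholders, and connection setup, authentication and response handling are left out entirely:

import struct

def msg(tag, body):
    # Frontend message framing: 1-byte type, Int32 length (includes itself), body.
    return tag + struct.pack("!I", len(body) + 4) + body

def cstr(s):
    return s.encode("utf-8") + b"\x00"

def parse(stmt, query):
    # Parse: statement name, query text, count of pre-specified parameter type OIDs.
    return msg(b"P", cstr(stmt) + cstr(query) + struct.pack("!H", 0))

def describe_stmt(stmt):
    # Describe a prepared statement ('S'); backend answers with t/T.
    return msg(b"D", b"S" + cstr(stmt))

def bind(portal, stmt):
    # Bind with no parameters, all format codes defaulting to text.
    return msg(b"B", cstr(portal) + cstr(stmt) + struct.pack("!HHH", 0, 0, 0))

def execute(portal, max_rows):
    # Execute: portal name, row limit (0 = no limit).
    return msg(b"E", cstr(portal) + struct.pack("!I", max_rows))

def close_portal(portal):
    return msg(b"C", b"P" + cstr(portal))

def sync():
    return msg(b"S", b"")

# The sequence I think should work: wrap everything in an explicit
# transaction so the Sync after each Execute doesn't implicitly commit
# and discard the portal.
wire = b"".join([
    parse("", "BEGIN"), bind("", ""), execute("", 0), sync(),
    parse("s1", "SELECT * FROM big_table"), describe_stmt("s1"), sync(),
    bind("p1", "s1"),
    execute("p1", 1000), sync(),   # first 1000 rows, then PortalSuspended
    execute("p1", 1000), sync(),   # next 1000 rows, portal still open
    close_portal("p1"),
    parse("", "COMMIT"), bind("", ""), execute("", 0), sync(),
])

With the explicit BEGIN, the Sync after each Execute shouldn't implicitly commit, so the portal stays open for the next Execute.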
I can process the data on-the-fly, my motivation for using Execute with a
limit was to introduce some kind of control on how fast things travel over
the wire. I'm not sure how the backend handles large result sets and
clients that can't keep up. Say I have a SELECT that returns 100k rows, and
after 60k rows the client runs into some sort of resource limit. Since the
backend just dumps everything on the wire I now have to continue reading
and discarding the remaining rows? I doubt I can Close the statement early
since the buffers on the backend should be full with more rows to send to
the frontend. I don't expect to run into these kinds of situations very often,
I just want the client to "do the right thing". If I run into some sort of
hard limit I can just terminate the connection since I probably have other
things to worry about than reusing a connection.
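To make the flow-control idea concrete, this is roughly the read loop I'm picturing (just a sketch, not what the library currently does; recv_exactly is a trivial helper). With a row limit the backend stops at PortalSuspended after each batch, so instead of discarding 40k rows the client could just Close the portal and COMMIT:

import struct

def recv_exactly(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("backend closed the connection")
        buf += chunk
    return buf

def read_message(sock):
    # Backend message framing mirrors the frontend: type byte, Int32 length, body.
    header = recv_exactly(sock, 5)
    length = struct.unpack("!I", header[1:])[0]
    return header[:1], recv_exactly(sock, length - 4)

def drain_batch(sock, on_row):
    # Consume one Execute's worth of results.
    while True:
        tag, body = read_message(sock)
        if tag == b"D":        # DataRow
            on_row(body)
        elif tag == b"s":      # PortalSuspended: row limit reached, portal still open
            return "suspended"
        elif tag == b"C":      # CommandComplete: portal exhausted
            return "complete"
        elif tag == b"E":      # ErrorResponse (fields left unparsed in this sketch)
            raise RuntimeError(body.decode("utf-8", "replace"))
        # NoticeResponse, ParameterStatus etc. are ignored in this sketch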
Anyways, I might be overthinking things. The "standard" flow of
P/D/S.../B/E/S is probably the way to go then.
Regards,
/thomas
PS: The whole thing is available at https://github.com/thheller/shadow-pgsql
but be warned, it's very alpha.
On Wed, Aug 13, 2014 at 12:08 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Thomas Heller <info(at)zilence(dot)net> writes:
> > In an Extended Query Lifecycle, in order to prepare a query I send the
> > Commands
>
> > Parse('P') / Describe('D') / Sync('S')
>
> > read 1/t/T/Z then to execute
>
> > Bind('B') / Execute('E') / Flush('H')
>
> This is not a good idea. You *need* to use Sync to terminate a command
> sequence in order to be sure of proper error recovery (because if there's
> an error during the Execute, the backend will discard subsequent messages
> until it sees Sync).
>
> > If I skip the Flush after Execute I receive no data, if I Execute and Sync
> > I receive the Limit of rows and a ReadyForQuery('Z').
>
> That's probably because you're not wrapping this in a transaction so the
> Sync implicitly does a commit, discarding the open portal. If you want
> to read from a portal in multiple steps then you should issue a BEGIN
> first and a COMMIT (or ROLLBACK) after you're done. However, have you
> considered just processing the data on-the-fly instead of using a limit?
>
> regards, tom lane
>