Re: PATCH: enabling parallel execution for cursors explicitly (experimental)

From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PATCH: enabling parallel execution for cursors explicitly (experimental)
Date: 2017-11-01 02:19:39
Message-ID: CAMsr+YEhqXTw6fzEmQzanLJ4Q7p6mxVBTZ0z9k1FxR5PMgqH=w@mail.gmail.com
Lists: pgsql-hackers

> Now, I agree this is somewhat more limited than I hoped for, but OTOH it
> still solves the issue I initially aimed for (processing large query
> results in efficient way).

I don't quite understand this part.

We already send results to the client in a stream unless it's
something we have to materialize, in which case a cursor won't help
anyway.

If the client wants to fetch in chunks it can use a portal and
limited-size fetches. That shouldn't (?) be parallel-unsafe, since
nothing else can happen on the connection in the middle anyway.
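At the SQL level, that chunked-fetch pattern is just a cursor plus
limited-size FETCHes (a sketch; the table name and batch size are
made up):

```sql
BEGIN;
-- A cursor is a named portal over the query.
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
-- Pull rows in limited-size batches; repeat until FETCH returns no rows.
FETCH 1000 FROM big_cur;
FETCH 1000 FROM big_cur;
CLOSE big_cur;
COMMIT;
```

At the wire-protocol level the same thing is an Execute message with a
nonzero row limit against a suspended portal.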

But in most cases the client can just fetch everything, let the socket
buffering take care of things, read results only when it wants them,
and let the server block once the send window is full.
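For a client that wants that fetch-all behaviour but still consume rows
lazily, libpq's single-row mode hands rows back one at a time as they
arrive instead of materializing the whole result client-side (a sketch;
error handling is trimmed and the conninfo string is a placeholder):

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* Placeholder conninfo; point it at a real server. */
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Send the query without waiting for the full result set. */
    if (!PQsendQuery(conn, "SELECT * FROM big_table"))
    {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Ask libpq to return one row per PGresult rather than
     * buffering the entire result set in client memory. */
    PQsetSingleRowMode(conn);

    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
        {
            /* Process one row; if we stop reading, the server
             * eventually blocks on its full socket buffer. */
            printf("%s\n", PQgetvalue(res, 0, 0));
        }
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}
```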

That's not to say that SQL-level cursor support wouldn't be nice. I'm
just trying to better understand what it's solving.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
