RE: Parallel INSERT SELECT take 2

From: "Zhijie Hou (Fujitsu)" <houzj(dot)fnst(at)fujitsu(dot)com>
To: Tomas Vondra <tomas(at)vondra(dot)me>, Greg Nancarrow <gregn4422(at)gmail(dot)com>
Cc: PostgreSQL Developers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Tsunakawa, Takayuki/綱川 貴之 <tsunakawa(dot)takay(at)fujitsu(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Subject: RE: Parallel INSERT SELECT take 2
Date: 2026-05-09 06:31:20
Message-ID: TYRPR01MB14195BA8A74AF9BA370E04A99943A2@TYRPR01MB14195.jpnprd01.prod.outlook.com
Lists: pgsql-hackers

On Saturday, May 9, 2026 7:59 AM Tomas Vondra <tomas(at)vondra(dot)me> wrote:
>
> declarative approach
> --------------------
>
> The new declarative approach is inspired by Tom's post [3] - there's a new
> pg_class attribute, determining if a given target relation (including all its child
> partitions) is assumed to be "safe" (i.e. at least restricted).
>
> I like that this is "caching" the safety information in pg_class (which means we
> don't have to do the expensive stuff even for the first insert in a connection).
> That seems like a good approach.
>
> But I very much dislike that we're shifting all the responsibility to the
> user/developer. I have very little faith people will understand what is required
> for safety, and I believe we'd get plenty of bogus bug reports due to this.
>
> I 100% agree with Robert [4] that labeling functions as parallel-safe is not the
> same as labeling target relations. We can't determine safety for functions
> (halting theorem and all that). We can determine safety for target relations,
> we just choose not to because of overhead.
>
> But why can't we keep the pg_class updated automatically? I mean, every
> time a parallel safety of a relation changes (e.g. because a trigger is
> added/removed, a default expression changes, ...), we could recalculate the
> pg_class flag, and then update all the parent relations too.
>
> If we did that, then we'd not need the manual declarations at all, I think. Sure,
> determining the safety may not be cheap, but so what? It should be only a
> very rare operation. If you do ALTER TABLE often enough for this to matter,
> you have other problems, I guess.
>
> Yes, there may be other problems, e.g. there may be deadlock hazards when
> updating the parent relations, as pointed out by Robert [5]. But that ought to
> be solvable, e.g. by figuring out which relations to lock and then locking them
> from the top (and retrying if needed).
>
> There are also suggestions there might be race conditions, e.g. [3]. I haven't
> thought about this very much, but I don't quite see why this would be any
> different from other ALTER TABLE commands. Maybe I'm missing something
> obvious ...
>
> I'm not sure how difficult it is to figure out all possibly affected relations,
> when e.g. a marking on a function changes. But these claims [6] seem rather
> hand-wavy, and I'd like to see some clear explanation why this is not
> practical. Maybe it'd require improving some of the dependency tracking, or
> whatever. But so what? New features often require that kind of stuff.

From my personal understanding, the question isn't one of practicality, but of
how to justify the added complexity and the change in locking strategy.

For partitioned tables, automatically adjusting pg_class.parallel_safety would
require the following. When executing ALTER FUNCTION ... PARALLEL SAFE or
CREATE OR REPLACE FUNCTION, we would first need to locate all tables whose
triggers, index expressions, or other elements that could affect parallel
safety use the altered function. Then, we would need to find all parent tables
of those tables and update pg_class.parallel_safety accordingly. To make this
concurrency-safe, we would likely need to take locks on the affected tables.
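The "recompute and walk up to the parents" step above can be sketched with a
toy in-memory model. This is purely illustrative: the Relation struct, field
names, and functions here are hypothetical stand-ins, not PostgreSQL code.

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_RELS  16
#define NO_PARENT (-1)

/* Toy model of pg_class rows: each relation has an intrinsic safety
 * (from its own triggers, defaults, index expressions, ...) and a cached
 * flag meaning "this relation and all of its partitions are safe". */
typedef struct
{
    int  parent;        /* index of parent partition, or NO_PARENT */
    bool own_safe;      /* safety of this relation's own elements */
    bool cached_safe;   /* the pg_class-style cached flag */
} Relation;

static Relation rels[MAX_RELS];
static int      nrels;

/* Recompute the cached flag for one relation from its own safety and
 * the cached flags of its direct children. */
static void
recompute_one(int relid)
{
    bool safe = rels[relid].own_safe;

    for (int i = 0; i < nrels; i++)
        if (rels[i].parent == relid && !rels[i].cached_safe)
            safe = false;
    rels[relid].cached_safe = safe;
}

/* After a relation's own safety changes (trigger added, default
 * expression altered, ...), refresh its flag and walk up the partition
 * tree updating every ancestor. */
static void
propagate_safety_change(int relid, bool new_own_safe)
{
    rels[relid].own_safe = new_own_safe;
    for (int cur = relid; cur != NO_PARENT; cur = rels[cur].parent)
        recompute_one(cur);
}
```

Even in this simplified form, the cost of the walk is proportional to the
partition-tree depth, which is why the concern is less about feasibility than
about when (and under which locks) the walk runs.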

I don't think this is impossible to implement, but I'm not entirely sure
whether it would be acceptable to perform these operations during function DDL,
which currently takes no locks on user tables. While dependencies do exist
between tables and functions, it might not be intuitive to users that altering
a function could lock user tables. Additionally, this approach would require
developers to carefully identify all tables that use the changed function,
which could be error-prone. I haven't analyzed this deeply, so I cannot say for
certain how difficult it would be.

Similarly, for ALTER TABLE that changes certain expressions, we currently do not
take locks on the parent tables of the one being altered. However, we would need
to do so if we want to maintain parallel safety. Please forgive me if I've missed
something.
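For the deadlock hazard of locking parent tables, one standard resolution
(a sketch only, not what PostgreSQL actually does here) is to collect every
affected relation up front and have all backends acquire the locks in a single
global order, e.g. ascending OID, retrying if the set changes.

```c
#include <stdlib.h>

/* Compare two OIDs for qsort; avoids overflow from plain subtraction. */
static int
cmp_oid(const void *a, const void *b)
{
    unsigned int oa = *(const unsigned int *) a;
    unsigned int ob = *(const unsigned int *) b;

    return (oa > ob) - (oa < ob);
}

/* Sort the affected relation OIDs so that every backend acquires the
 * corresponding locks in the same global order, which rules out the
 * classic A-waits-for-B, B-waits-for-A deadlock. */
static void
order_for_locking(unsigned int *oids, int n)
{
    qsort(oids, n, sizeof(unsigned int), cmp_oid);
}
```

The hard part is not the ordering itself but computing the full set of
relations to lock before taking any lock, and detecting when that set has
changed underneath us.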

BTW, most of the technical discussion of the new approaches after the revert
took place in another thread [1]. It might be helpful to review that thread as
well, in case some of the same questions were raised there.

> alternative idea
> ----------------
>
> What if we took a very different approach, and just made sure the INSERT
> part never runs concurrently with the SELECT? Say we fully materialize the
> SELECT result (e.g. write it into a tuplestore), finish the parallel mode, and
> only then do the INSERT?
>
> That's pretty much what existing workarounds do - they do the SELECT, stash
> the result somewhere (either outside the database or in a CTAS), and then do
> a separate INSERT into the table.
>
> AFAIK that wouldn't have any of the issues mentioned here. Yes, it may
> require writing the data into a temporary file, but that seems like something
> we could cost.
>
> I haven't even tried to implement this, but it seems workable. Or am I missing
> something?

I haven't tried this approach, so I haven't analyzed its feasibility in detail,
but it does seem workable.

That said, one concern is that the natural next step beyond parallel SELECT for
INSERT is to eventually support true parallel INSERT. If we hope to reach that
point, we will still need to address the parallel-safety issue regardless.
Therefore, if solving the safety issue is necessary anyway, it might be better
to tackle it first. As I understand it, spooling the rows to a temporary file
is also less efficient than inserting tuples concurrently with the parallel
SELECT.
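The materialize-then-insert idea upthread amounts to two strictly separated
phases. A toy sketch (the Tuplestore struct here is a hypothetical stand-in
for the executor's real tuplestore, and the rows are plain ints):

```c
#include <stddef.h>

#define SPOOL_MAX 1024

/* Toy spool standing in for a tuplestore: phase 1 fills it while the
 * (parallel) SELECT runs; phase 2 drains it serially after parallel
 * mode has ended. */
typedef struct
{
    int    rows[SPOOL_MAX];
    size_t nrows;
} Tuplestore;

/* Phase 1: the SELECT side appends each produced row to the spool.
 * This is where the parallel workers would run. */
static void
spool_row(Tuplestore *ts, int row)
{
    if (ts->nrows < SPOOL_MAX)
        ts->rows[ts->nrows++] = row;
}

/* Phase 2: with parallel mode finished, copy the spooled rows out in
 * order; in the real executor this would be the serial INSERT into the
 * target table.  Returns the number of rows drained. */
static size_t
drain_spool(const Tuplestore *ts, int *out)
{
    for (size_t i = 0; i < ts->nrows; i++)
        out[i] = ts->rows[i];
    return ts->nrows;
}
```

The appeal is that nothing INSERT-related ever executes inside parallel mode;
the cost is the extra write/read of the spooled rows, possibly spilling to a
temporary file.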

[1] https://www.postgresql.org/message-id/flat/OS0PR01MB5716DB1E3F723F86314D080094F09%40OS0PR01MB5716.jpnprd01.prod.outlook.com#dd7228adb44037fc7df6ada73349518f

Best Regards,
Hou zj
