Re: A reloption for partitioned tables - parallel_workers

From: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
To: Amit Langote <amitlangote09(at)gmail(dot)com>, Seamus Abshere <seamus(at)abshere(dot)net>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: A reloption for partitioned tables - parallel_workers
Date: 2021-02-15 16:06:52
Message-ID: 0dfcba0b8f5365a70bc34c71fff4c940f918c599.camel@cybertec.at
Lists: pgsql-hackers

On Mon, 2021-02-15 at 17:53 +0900, Amit Langote wrote:
> On Mon, Feb 15, 2021 at 5:28 PM Seamus Abshere <seamus(at)abshere(dot)net> wrote:
> > It turns out parallel_workers may be a useful reloption for certain uses of partitioned tables,
> > at least if they're made up of fancy column store partitions (see
> > https://www.postgresql.org/message-id/7d6fdc20-857c-4cbe-ae2e-c0ff9520ed55%40www.fastmail.com).
> > Would somebody tell me what I'm doing wrong? I would love to submit a patch but I'm stuck:
>
> You may see by inspecting the callers of compute_parallel_worker()
> that it never gets called on a partitioned table, only its leaf
> partitions. Maybe you could try calling compute_parallel_worker()
> somewhere in add_paths_to_append_rel(), which has this code to figure
> out parallel_workers to use for a parallel Append path for a given
> partitioned table:
>
> /* Find the highest number of workers requested for any subpath. */
> foreach(lc, partial_subpaths)
> {
>     Path       *path = lfirst(lc);
>
>     parallel_workers = Max(parallel_workers, path->parallel_workers);
> }
> Assert(parallel_workers > 0);
>
> /*
>  * If the use of parallel append is permitted, always request at least
>  * log2(# of children) workers. We assume it can be useful to have
>  * extra workers in this case because they will be spread out across
>  * the children. The precise formula is just a guess, but we don't
>  * want to end up with a radically different answer for a table with N
>  * partitions vs. an unpartitioned table with the same data, so the
>  * use of some kind of log-scaling here seems to make some sense.
>  */
> if (enable_parallel_append)
> {
>     parallel_workers = Max(parallel_workers,
>                            fls(list_length(live_childrels)));
>     parallel_workers = Min(parallel_workers,
>                            max_parallel_workers_per_gather);
> }
> Assert(parallel_workers > 0);
>
> Note that the 'rel' in this code refers to the partitioned table for
> which an Append path is being considered, so compute_parallel_worker()
> using that 'rel' would use the partitioned table's
> rel_parallel_workers as you are trying to do.

Note that there is a second chunk of code quite like that one a few
lines down from there that would also have to be modified.
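
Something like this might do in both places (an untested sketch, not
taken from any patch; I have not checked that rel_parallel_workers is
even populated for partitioned tables, which might need a tweak in
get_relation_info() as well):

/*
 * Sketch: let a parallel_workers reloption on the partitioned table
 * itself override the per-subpath maximum.  With -1 for both page
 * counts, compute_parallel_worker() returns the reloption value
 * (capped at max_parallel_workers_per_gather) if it is set, and 0
 * otherwise.  Caveat: an explicit setting of 0 would then be
 * indistinguishable from "not set" here.
 */
parallel_workers = compute_parallel_worker(rel, -1, -1,
                                           max_parallel_workers_per_gather);

if (parallel_workers == 0)
{
    /* Reloption not set, fall back to the existing heuristics. */
    foreach(lc, partial_subpaths)
    {
        Path       *path = lfirst(lc);

        parallel_workers = Max(parallel_workers, path->parallel_workers);
    }
    /* ... existing log-scaling and clamping goes here ... */
}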

I am +1 on allowing the degree of parallelism of a parallel append
to be overridden. If "parallel_workers" on the partitioned table is
an option for that, it might be a simple solution. On the other hand,
perhaps it would be less confusing to use a different storage parameter
name rather than have "parallel_workers" do double duty.

Also, since there is a design rule that storage parameters can only be
set on leaf partitions, not on the partitioned table itself, we would
have to change that - is that a problem for anybody?
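
For what it's worth, the reloptions side might look roughly like this
(a sketch only; the struct and function names are mine and may not
match whatever the tree uses for RELOPT_KIND_PARTITIONED today):

typedef struct PartitionedTableRdOptions
{
    int32       vl_len_;            /* varlena header (do not touch directly!) */
    int         parallel_workers;   /* max number of parallel workers */
} PartitionedTableRdOptions;

/* Sketch of an option parser for partitioned tables */
bytea *
partitioned_table_reloptions(Datum reloptions, bool validate)
{
    static const relopt_parse_elt tab[] = {
        {"parallel_workers", RELOPT_TYPE_INT,
         offsetof(PartitionedTableRdOptions, parallel_workers)},
    };

    return (bytea *) build_reloptions(reloptions, validate,
                                      RELOPT_KIND_PARTITIONED,
                                      sizeof(PartitionedTableRdOptions),
                                      tab, lengthof(tab));
}

The existing "parallel_workers" entry in intRelOpts[] would also need
RELOPT_KIND_PARTITIONED added to its kinds, after which something like
ALTER TABLE parent SET (parallel_workers = 8) should be accepted.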

There is another consideration that doesn't need to be addressed by
this patch, but is somewhat related: if the executor prunes some
partitions at run time, the degree of parallelism is unaffected, right?
So if the planner decides to use 24 workers for 25 partitions, and
run-time pruning discards all but one of these partition scans, we
would end up with 24 workers scanning a single partition.

I am not sure how that could be improved.

Yours,
Laurenz Albe
