Re: max_parallel_workers question

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Jeff Davis <pgsql(at)j-davis(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: max_parallel_workers question
Date: 2019-09-28 04:10:53
Message-ID: CA+TgmoZLaOUUfHv1S+ueUCyazHrR-YE6jSZ9mZiwgGcM7eDi-w@mail.gmail.com
Lists: pgsql-hackers

On Fri, Sep 27, 2019 at 8:07 PM Jeff Davis <pgsql(at)j-davis(dot)com> wrote:
> The current docs for max_parallel_workers start out:
>
> "Sets the maximum number of workers that the system can support for
> parallel operations..."
>
> In my interpretation, "the system" means the entire cluster, but the
> max_parallel_workers setting is PGC_USERSET. That's a bit confusing,
> because two different backends can have different settings for "the
> maximum number ... the system can support".

Oops.

I intended it to mean "the entire cluster." Basically, how many
workers out of max_worker_processes are you willing to use for
parallel query, as opposed to other things. I agree that PGC_USERSET
doesn't make any sense.
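
To make the mismatch concrete, here is a minimal sketch against pg_settings,
assuming a stock installation; the values returned are whatever your own
cluster reports, not anything specific to this thread:

    -- max_worker_processes is postmaster-only (cluster-wide), while
    -- max_parallel_workers shows the 'user' context that PGC_USERSET maps to.
    SELECT name, setting, context
      FROM pg_settings
     WHERE name IN ('max_worker_processes',
                    'max_parallel_workers',
                    'max_parallel_workers_per_gather')
     ORDER BY name;

    -- Because it is PGC_USERSET, any session can change it, even though the
    -- limit it describes is meant to apply across the whole cluster:
    SET max_parallel_workers = 4;
    SHOW max_parallel_workers;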

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
