max_parallel_workers question

From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: max_parallel_workers question
Date: 2019-09-28 00:07:39
Message-ID: d040a1144e0127a49e335d1244a4de102a2a443b.camel@j-davis.com
Lists: pgsql-hackers

The current docs for max_parallel_workers start out:

"Sets the maximum number of workers that the system can support for
parallel operations..."

In my interpretation, "the system" means the entire cluster, but the
max_parallel_workers setting is PGC_USERSET. That's a bit confusing,
because two different backends can have different settings for "the
maximum number ... the system can support".

max_parallel_workers is compared against the total number of parallel
workers in the system, which appears to be why the docs are worded that
way. But it's still confusing to me.
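
For example, any backend can change it for its own session, even though the
value it sets is then checked against cluster-wide activity. A rough
illustration (the numbers here are arbitrary):

    SHOW max_worker_processes;     -- cluster-wide ceiling, PGC_POSTMASTER
    SET max_parallel_workers = 2;  -- allowed in any session, no special privilege
    SHOW max_parallel_workers;     -- now 2, but only in this session
    -- When this session launches parallel workers, its value of 2 is compared
    -- against the number of parallel workers active across the whole cluster,
    -- not just the ones this backend has started.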

If the purpose is to make sure parallel queries don't take up all of
the worker processes, perhaps we should rename the setting to
reserved_worker_processes and make it PGC_SUSET.

If the purpose is to control execution within a backend, perhaps we
should just compare it to the count of parallel workers that the
backend is already using.

If the purpose is just to be a more flexible version of
max_worker_processes, maybe we should change it to PGC_SIGHUP?

If it has multiple purposes, perhaps we should have multiple GUCs?

Regards,
Jeff Davis
