From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Neha Khatri <nehakhatri5(at)gmail(dot)com>
Cc: Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: strange parallel query behavior after OOM crashes
Date: 2017-04-06 01:20:15
Message-ID: CA+Tgmoaf5ffhqJ2h8SCa2B2uk_13tKoqqHw7aORQvfOti82dew@mail.gmail.com
Lists: pgsql-hackers
On Wed, Apr 5, 2017 at 8:17 PM, Neha Khatri <nehakhatri5(at)gmail(dot)com> wrote:
> The problem here seems to be the change in the max_parallel_workers value
> while the parallel workers are still executing. So this poses two
> questions:
>
> 1. From a use-case point of view, why would there be a need to tweak
> max_parallel_workers exactly when the parallel workers are at
> play?
> 2. Could there be a restriction on tweaking max_parallel_workers while
> the parallel workers are at play? At least, do not allow setting
> max_parallel_workers lower than the current number of active parallel workers.
Well, that would be letting the tail wag the dog. The maximum value
of max_parallel_workers is only 1024, and what we're really worried
about here is seeing a value near PG_UINT32_MAX, which leaves a lot of
daylight. How about just creating a #define that's used by guc.c as
the maximum for the GUC, and here we assert that we're <= that value?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company