Re: [HACKERS] Block level parallel vacuum

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Masahiko Sawada <masahiko(dot)sawada(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Sergei Kornilov <sk(at)zsrv(dot)org>, Mahendra Singh <mahi6run(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Amit Langote <langote_amit_f8(at)lab(dot)ntt(dot)co(dot)jp>, David Steele <david(at)pgmasters(dot)net>, Claudio Freire <klaussfreire(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] Block level parallel vacuum
Date: 2019-12-18 06:34:53
Message-ID: CAA4eK1+PCOLhYLO995vRYj9GE-4i0cRk4VWG_OmNvXZvZE8H0Q@mail.gmail.com
Lists: pgsql-hackers

On Wed, Dec 18, 2019 at 11:46 AM Masahiko Sawada
<masahiko(dot)sawada(at)2ndquadrant(dot)com> wrote:
>
> On Wed, 18 Dec 2019 at 15:03, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> >
> > I was analyzing your changes related to ReinitializeParallelDSM() and
> > it seems like we might launch more workers than required for the
> > bulkdelete phase. While creating the parallel context, we used the
> > maximum of "workers required for the bulkdelete phase" and "workers
> > required for cleanup", but now, if the number of workers required for
> > the bulkdelete phase is less than that for the cleanup phase (as
> > mentioned by you in one example), we would launch more workers than
> > needed for the bulkdelete phase.
>
> Good catch. Currently, when creating a parallel context, the number of
> workers passed to CreateParallelContext() is set not only to
> pcxt->nworkers but also to pcxt->nworkers_to_launch. We would need to
> specify the number of workers to actually launch either after creating
> the parallel context or while creating it. Alternatively, I think we
> could call ReinitializeParallelDSM() even the first time we run index
> vacuum.
>
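
To make the issue concrete, a rough sketch of the current pattern (the
worker counts and the entry-point name below are only placeholders for
illustration, not the actual values from the patch):

    /* requires "access/parallel.h" and "access/xact.h" */
    int         nworkers_bulkdel = 1;  /* hypothetical bulkdelete requirement */
    int         nworkers_cleanup = 3;  /* hypothetical cleanup requirement */
    ParallelContext *pcxt;

    EnterParallelMode();
    pcxt = CreateParallelContext("postgres", "parallel_vacuum_main",
                                 Max(nworkers_bulkdel, nworkers_cleanup));
    InitializeParallelDSM(pcxt);

    /*
     * CreateParallelContext() sets both pcxt->nworkers and
     * pcxt->nworkers_to_launch to 3 here, so without any further
     * adjustment this launches 3 workers even for a bulkdelete pass
     * that needs only 1.
     */
    LaunchParallelWorkers(pcxt);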

How about just having a ReinitializeParallelWorkers() function that, for
now, is called only from vacuum, even for the first launch of workers?
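
For illustration, a minimal sketch of what such a function and its call
site in the vacuum leader could look like (the names and variables are
just assumptions at this point, not the final API):

    /*
     * Adjust only the number of workers to launch on the existing
     * context.  The context was created with the maximum of the
     * bulkdelete and cleanup requirements, so the per-pass count must
     * never exceed pcxt->nworkers.
     */
    void
    ReinitializeParallelWorkers(ParallelContext *pcxt, int nworkers_to_launch)
    {
        Assert(pcxt->nworkers >= nworkers_to_launch);
        pcxt->nworkers_to_launch = nworkers_to_launch;
    }

    /* In the vacuum leader, before each pass over the indexes: */
    if (!first_pass)
        ReinitializeParallelDSM(pcxt);  /* reset the DSM for a relaunch */
    ReinitializeParallelWorkers(pcxt, nworkers_for_this_phase);
    LaunchParallelWorkers(pcxt);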

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
