Re: [HACKERS] Block level parallel vacuum

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Masahiko Sawada <masahiko(dot)sawada(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Sergei Kornilov <sk(at)zsrv(dot)org>, Mahendra Singh <mahi6run(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Amit Langote <langote_amit_f8(at)lab(dot)ntt(dot)co(dot)jp>, David Steele <david(at)pgmasters(dot)net>, Claudio Freire <klaussfreire(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] Block level parallel vacuum
Date: 2019-12-13 06:49:58
Message-ID: CAA4eK1+3RngcujynPBZ7g7UTyEabYs2fwgUNOFCxB2Y+AX=wow@mail.gmail.com
Lists: pgsql-hackers

On Fri, Dec 13, 2019 at 11:08 AM Masahiko Sawada
<masahiko(dot)sawada(at)2ndquadrant(dot)com> wrote:
>
> On Fri, 13 Dec 2019 at 14:19, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> >
> > > > >
> > > > > How about adding an additional argument to ReinitializeParallelDSM()
> > > > > that allows the number of workers to be reduced? That seems like it
> > > > > would be less confusing than what you have now, and would involve
> > > > > modify code in a lot fewer places.
> > > > >
> > > >
> > > > Yeah, we can do that. We can maintain some information in
> > > > LVParallelState which indicates whether we need to reinitialize the
> > > > DSM before launching workers. Sawada-San, do you see any problem with
> > > > this idea?
> > >
> > > I think the number of workers could be increased in cleanup phase. For
> > > example, if we have 1 brin index and 2 gin indexes then in bulkdelete
> > > phase we need only 1 worker but in cleanup we need 2 workers.
> > >
> >
> > I think it shouldn't be more than the number with which we have
> > created a parallel context, no? If that is the case, then I think it
> > should be fine.
>
> Right. I thought that ReinitializeParallelDSM() with an additional
> argument would reduce the DSM, but I understand that it doesn't
> actually reduce the DSM; it just has a variable for the number of
> workers to launch, is that right?
>

Yeah, probably, we need to change the nworkers stored in the parallel
context, and the new value should not exceed the number of workers with
which the context was originally created.

> And we also would need to call
> ReinitializeParallelDSM() at the beginning of index vacuum or index
> cleanup, since at the end of index vacuum we don't know which of the
> two we will do next.
>

Right.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
