Re: [HACKERS] Block level parallel vacuum

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Masahiko Sawada <masahiko(dot)sawada(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Sergei Kornilov <sk(at)zsrv(dot)org>, Mahendra Singh <mahi6run(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Amit Langote <langote_amit_f8(at)lab(dot)ntt(dot)co(dot)jp>, David Steele <david(at)pgmasters(dot)net>, Claudio Freire <klaussfreire(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] Block level parallel vacuum
Date: 2019-12-19 11:05:06
Message-ID: CAA4eK1LS6f7eub=YdTZC=9LOjAG73TPFG=zWPiA9_H-oDXR9DA@mail.gmail.com
Lists: pgsql-hackers

On Thu, Dec 19, 2019 at 11:11 AM Masahiko Sawada
<masahiko(dot)sawada(at)2ndquadrant(dot)com> wrote:
>
> On Wed, 18 Dec 2019 at 19:06, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> >
>
> - /* Cap by the worker we computed at the beginning of parallel lazy vacuum */
> - nworkers = Min(nworkers, lps->pcxt->nworkers);
> + /*
> + * The number of workers required for parallel vacuum phase must be less
> + * than the number of workers with which parallel context is initialized.
> + */
> + Assert(lps->pcxt->nworkers >= nworkers);
>
> Regarding the above change in your patch, I think we need to cap the
> number of workers by lps->pcxt->nworkers, because the number of
> workers computed from lps->nindexes_parallel_XXX can be larger than
> the number determined when creating the parallel context, for
> example, when max_parallel_maintenance_workers is smaller than the
> number of indexes that can be vacuumed in parallel in the bulkdelete
> phase.
>

Oh, right, but then you should probably add a comment there, as this is
not so obvious.
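
For example, keeping the cap with a comment along these lines (just a
sketch; adjust the wording as you see fit):

    /*
     * Cap the number of workers by the number with which the parallel
     * context was initialized.  The per-phase computation based on
     * lps->nindexes_parallel_XXX can exceed it, e.g. when
     * max_parallel_maintenance_workers is smaller than the number of
     * indexes that can be vacuumed in parallel in the bulkdelete phase.
     */
    nworkers = Min(nworkers, lps->pcxt->nworkers);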

> >
> > Few other comments which I have not fixed:
> >
> > 4.
> > + if (Irel[i]->rd_indam->amusemaintenanceworkmem)
> > + nindexes_mwm++;
> > +
> > + /* Skip indexes that don't participate in parallel index vacuum */
> > + if (vacoptions == VACUUM_OPTION_NO_PARALLEL ||
> > + RelationGetNumberOfBlocks(Irel[i]) < min_parallel_index_scan_size)
> > + continue;
> >
> > Won't we need to count the number of indexes that use
> > maintenance_work_mem only among the indexes that can participate in
> > a parallel vacuum? If so, the above checks need to be reversed.
>
> You're right. Fixed.
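
Right. With the checks reversed, that part might look something like
this (a sketch based on the quoted hunk; I haven't verified it against
your new version):

    /* Skip indexes that don't participate in parallel index vacuum */
    if (vacoptions == VACUUM_OPTION_NO_PARALLEL ||
        RelationGetNumberOfBlocks(Irel[i]) < min_parallel_index_scan_size)
        continue;

    /* Count maintenance_work_mem users only among participating indexes */
    if (Irel[i]->rd_indam->amusemaintenanceworkmem)
        nindexes_mwm++;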
>
> >
> > 5.
> > /*
> > + * Remember indexes that can participate in parallel index vacuum and use
> > + * it for index statistics initialization on DSM because the index
> > + * size can get bigger during vacuum.
> > + */
> > + can_parallel_vacuum[i] = true;
> >
> > I am not able to understand the second part of the comment ("because
> > the index size can get bigger during vacuum."). What is its
> > relevance?
>
> I meant that the indexes can get bigger even during vacuum. So we
> need to check the size of the indexes and determine their
> participation in parallel index vacuum in one place.
>

Okay, but that doesn't quite fit with the earlier part of the comment.
We can either remove it or explain it a bit more.
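
For example, the comment could spell out the reasoning a bit more,
along these lines (just a sketch of the wording, not from the patch):

    /*
     * Remember whether this index can participate in parallel index
     * vacuum.  We decide this only once, here, because the index can
     * grow during vacuum, and the set of participating indexes must
     * stay consistent with the index statistics we initialize on DSM.
     */
    can_parallel_vacuum[i] = true;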

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
