| From: | SATYANARAYANA NARLAPURAM <satyanarlapuram(at)gmail(dot)com> |
|---|---|
| To: | Daniil Davydov <3danissimo(at)gmail(dot)com> |
| Cc: | Bharath Rupireddy <bharath(dot)rupireddyforpostgres(at)gmail(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Sami Imseih <samimseih(at)gmail(dot)com>, Alexander Korotkov <aekorotkov(at)gmail(dot)com>, Matheus Alcantara <matheusssilv97(at)gmail(dot)com>, Maxim Orlov <orlovmg(at)gmail(dot)com>, Postgres hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: POC: Parallel processing of indexes in autovacuum |
| Date: | 2026-03-31 07:46:20 |
| Message-ID: | CAHg+QDdfsS6DQMUNQUW9ZDt_D0FtfygabkeAkRY5Yy2SeeNbLA@mail.gmail.com |
| Lists: | pgsql-hackers |
Hi
On Mon, Mar 30, 2026 at 1:44 AM Daniil Davydov <3danissimo(at)gmail(dot)com> wrote:
> Hi,
>
> On Mon, Mar 30, 2026 at 7:17 AM SATYANARAYANA NARLAPURAM
> <satyanarlapuram(at)gmail(dot)com> wrote:
> >
> > Thank you for working on this, very useful feature. Sharing a few
> thoughts:
> >
> > 1. Shouldn't we also cap by max_parallel_workers to avoid wasting DSM
> resources in parallel_vacuum_compute_workers?
>
> Actually, autovacuum_max_parallel_workers is already limited by
> max_parallel_workers. It is not clear to me why we allow setting this GUC
> higher than max_parallel_workers, but if that happens, I think it is a
> user misconfiguration.
Isn’t there wasted effort here if the user misconfigures, since we cannot
launch that many workers anyway? I suggest adding a check here.
>
>
> > 2. Is it intentional that other autovacuum workers not yield cost limits
> to the parallel auto vacuum workers? Cost limits are distributed first
> equally to the autovacuum workers.
> > and then they share that. Therefore, parallel workers will be heavily
> throttled. IIUC, this problem doesn't exist with manual vacuum.
> > If we don't fix this, at least we should document this.
>
> Parallel a/v workers inherit cost-based parameters (including
> vacuum_cost_limit) from the leader worker. Do you mean that this can be too
> low a value for parallel operation? If so, the user can manually increase the
> vacuum_cost_limit reloption for tables where parallel a/v sleeps too
> much (due to cost delay).
They don’t inherit it but share it, don’t they?
>
> BTW, describing the cost limit propagation to the parallel a/v workers is
> worth mentioning in the documentation. I'll add it in the next patch
> version.
Yes, that helps.
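For reference, the per-table knobs that exist today are the `autovacuum_*` storage parameters (the plain `vacuum_cost_limit` is a GUC, not a reloption). Something like the following, with illustrative values, raises the cost budget for a specific table:

```sql
-- Illustrative values: raise the per-table cost budget (and shrink the
-- delay) for a table where parallel autovacuum would otherwise be
-- heavily throttled by the shared cost limit.
ALTER TABLE big_table
    SET (autovacuum_vacuum_cost_limit = 2000,
         autovacuum_vacuum_cost_delay = 1);
```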
>
>
> > 3. Additionally, is there a point where, based on the cost limits,
> launching additional workers becomes counterproductive compared to running
> fewer workers and preventing it?
>
> I don't think we can find a universal limit that will be appropriate
> for all possible configurations. For now we are using a pretty simple
> formula for parallel degree calculation. Since the user has several ways
> to affect this formula, I guess there will be no problems with it (except
> my concerns about the opt-out style).
Thanks,
Satya
>
>