From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Block level parallel vacuum WIP
Date: 2016-08-23 16:48:36
Message-ID: 20160823164836.naody2ht6cutioiz@alap3.anarazel.de
Lists: pgsql-hackers
On 2016-08-23 12:17:30 -0400, Robert Haas wrote:
> On Tue, Aug 23, 2016 at 11:17 AM, Alvaro Herrera
> <alvherre(at)2ndquadrant(dot)com> wrote:
> > Robert Haas wrote:
> >> 2. When you finish the heap scan, or when the array of dead tuple IDs
> >> is full (or very nearly full?), perform a cycle of index vacuuming.
> >> For now, have each worker process a separate index; extra workers just
> >> wait. Perhaps use the condition variable patch that I posted
> >> previously to make the workers wait. Then resume the parallel heap
> >> scan, if not yet done.
> >
> > At least btrees should easily be scannable in parallel, given that we
> > process them in physical order rather than logically walk the tree. So
> > if there are more workers than indexes, it's possible to put more than
> > one worker on the same index by carefully indicating each to stop at a
> > predetermined index page number.
>
> Well that's fine if we figure it out, but I wouldn't try to include it
> in the first patch. Let's make VACUUM parallel one step at a time.
Given that the index scans are, in my experience, far more often the
bottleneck than the heap scan, I'm not sure that order is the best. The
heap scan benefits from the visibility map; the index scans don't.