Re: Block level parallel vacuum WIP

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Block level parallel vacuum WIP
Date: 2016-08-23 16:17:30
Message-ID: CA+TgmoanvB1gRZC9jFRYk6xwt1QmxkfZEm2r-R+YLFnxA8jHhg@mail.gmail.com
Lists: pgsql-hackers

On Tue, Aug 23, 2016 at 11:17 AM, Alvaro Herrera
<alvherre(at)2ndquadrant(dot)com> wrote:
> Robert Haas wrote:
>> 2. When you finish the heap scan, or when the array of dead tuple IDs
>> is full (or very nearly full?), perform a cycle of index vacuuming.
>> For now, have each worker process a separate index; extra workers just
>> wait. Perhaps use the condition variable patch that I posted
>> previously to make the workers wait. Then resume the parallel heap
>> scan, if not yet done.
>
> At least btrees should easily be scannable in parallel, given that we
> process them in physical order rather than logically walk the tree. So
> if there are more workers than indexes, it's possible to put more than
> one worker on the same index by carefully indicating each to stop at a
> predetermined index page number.
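A minimal standalone sketch of the block-range split described above (plain C,
not backend code; the helper name split_index_blocks and the even division are
illustrative assumptions): each worker gets a physical page range [start, end)
and stops at the predetermined boundary.

/*
 * Illustrative only: divide an index's physical block range evenly among
 * workers, so each worker scans [start, end) and stops at a predetermined
 * page.  Ordinary C, not PostgreSQL backend code.
 */
#include <stdio.h>

typedef unsigned int BlockNumber;

typedef struct WorkerRange
{
    BlockNumber start;          /* first block this worker scans */
    BlockNumber end;            /* one past the last block it scans */
} WorkerRange;

static void
split_index_blocks(BlockNumber nblocks, int nworkers, WorkerRange *ranges)
{
    BlockNumber chunk = nblocks / nworkers;
    BlockNumber remainder = nblocks % nworkers;
    BlockNumber next = 0;

    for (int i = 0; i < nworkers; i++)
    {
        ranges[i].start = next;
        /* spread the remainder over the first few workers */
        next += chunk + (i < (int) remainder ? 1 : 0);
        ranges[i].end = next;
    }
}

int
main(void)
{
    WorkerRange ranges[3];

    /* e.g. a 1000-block index shared by 3 workers */
    split_index_blocks(1000, 3, ranges);
    for (int i = 0; i < 3; i++)
        printf("worker %d: blocks [%u, %u)\n",
               i, ranges[i].start, ranges[i].end);
    return 0;
}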

Well, that's fine if we figure it out, but I wouldn't try to include it
in the first patch. Let's make VACUUM parallel one step at a time.
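
A rough sketch of the simpler one-worker-per-index cycle quoted above, using
POSIX threads as a stand-in for the proposed condition-variable facility (the
names, counts, and structure are illustrative assumptions, not backend code):
each worker claims an unprocessed index, extra workers and early finishers
wait until the cycle completes, and then everyone would resume the parallel
heap scan.

/*
 * Illustrative only: one worker per index, extra workers wait for the
 * index-vacuum cycle to finish.  Pthreads stand in for the proposed
 * condition-variable patch; this is not PostgreSQL backend code.
 */
#include <pthread.h>
#include <stdio.h>

#define NINDEXES 2
#define NWORKERS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cycle_done = PTHREAD_COND_INITIALIZER;
static int next_index = 0;      /* next unclaimed index */
static int finished = 0;        /* indexes fully vacuumed this cycle */

static void
vacuum_one_index(int idx)
{
    /* stand-in for bulk-deleting dead tuples from index 'idx' */
    printf("vacuuming index %d\n", idx);
}

static void *
worker(void *arg)
{
    (void) arg;

    for (;;)
    {
        int idx;

        /* claim the next unprocessed index, if any remain */
        pthread_mutex_lock(&lock);
        idx = (next_index < NINDEXES) ? next_index++ : -1;
        pthread_mutex_unlock(&lock);

        if (idx < 0)
            break;              /* nothing left to claim */

        vacuum_one_index(idx);

        pthread_mutex_lock(&lock);
        if (++finished == NINDEXES)
            pthread_cond_broadcast(&cycle_done);
        pthread_mutex_unlock(&lock);
    }

    /* extra workers (and early finishers) wait for the cycle to complete */
    pthread_mutex_lock(&lock);
    while (finished < NINDEXES)
        pthread_cond_wait(&cycle_done, &lock);
    pthread_mutex_unlock(&lock);

    /* ...at this point every worker would resume the parallel heap scan */
    return NULL;
}

int
main(void)
{
    pthread_t threads[NWORKERS];

    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(threads[i], NULL);

    printf("index vacuum cycle complete; resume heap scan\n");
    return 0;
}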

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
