Tom Lane wrote:
> Huh? There is no extra cost in what I suggested; it'll perform
> exactly the same number of index scans that it would do anyway.
What I wanted to say is this:
If vacuum can stop at any point, then we can make maintenance_work_mem
large enough to hold all of the dead tuples, so the indexes need to be
cleaned only once. No matter how many times vacuum stops, the indexes
are scanned a single time.
But in your proposal, the indexes will be scanned as many times as
vacuum stops. That extra index cleaning is the extra cost compared
with the stop-on-a-dime approach. When vacuuming a large table with 8
stops, my tests show the extra cost can be one third of the
stop-on-a-dime cost.
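To make the comparison concrete, here is a toy cost model (my own
illustration, not the actual test setup): total vacuum cost is modeled as
one heap pass plus one index-cleanup pass per flush of the dead-tuple
list. The unit costs below are hypothetical, chosen so that 8 stops
reproduce the roughly one-third overhead mentioned above.

```python
# Toy model: extra index-cleanup cost when vacuum must flush its
# dead-tuple list at every stop, versus keeping it across stops.

def total_cost(heap_cost, index_pass_cost, index_passes):
    """Total vacuum cost: heap scan plus per-pass index cleanup."""
    return heap_cost + index_pass_cost * index_passes

# Hypothetical unit costs; only the ratio matters for the argument.
heap_cost = 1.0
index_pass_cost = 0.05
n_stops = 8

# Stop-on-a-dime: dead TIDs survive each stop, so one index pass total.
stop_on_a_dime = total_cost(heap_cost, index_pass_cost, 1)

# Finish-the-cycle proposal: each stop forces an index pass.
per_stop = total_cost(heap_cost, index_pass_cost, n_stops)

extra = (per_stop - stop_on_a_dime) / stop_on_a_dime
print(f"extra cost with {n_stops} stops: {extra:.0%}")
```

The point of the sketch is only that the overhead grows linearly with
the number of stops, while the stop-on-a-dime approach pays the index
cost once regardless.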
> So I'm not really convinced that being able to stop a table
> vacuum halfway is critical.
To run vacuum on the same table for a long period, it is critical
to be sure that it does:
1. not eat resources that foreground processes need
2. not block vacuuming of hot-updated tables
3. not block any transaction or any backup activity
In the current implementation of concurrent vacuum, the third
condition is obviously not satisfied. The first issue that comes to
my mind is lazy_truncate_heap: it takes AccessExclusiveLock for a
long time, which is problematic. Unless we change such mechanisms to
ensure there is no problem with running vacuum on the same table for
several days, we cannot say we don't need to stop halfway.
Galy Lee <lee(dot)galy(at)oss(dot)ntt(dot)co(dot)jp>
NTT Open Source Software Center