Re: Preventing duplicate vacuums?

From: Thomas Swan <tswan(at)idigx(dot)com>
To: Robert Treat <xzilla(at)users(dot)sourceforge(dot)net>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Preventing duplicate vacuums?
Date: 2004-02-07 05:07:12
Message-ID: 40247280.2010102@idigx.com
Lists: pgsql-hackers

Robert Treat wrote:

>On Thu, 2004-02-05 at 16:51, Josh Berkus wrote:
>
>>Tom,
>>
>>>Yes we do: there's a lock.
>>
>>Sorry, bad test. Forget I said anything.
>>
>>Personally, I would like to have the 2nd vacuum error out instead of blocking.
>>However, I'll bet that a lot of people won't agree with me.
>
>Don't know if I would agree for sure, but if the second vacuum could see
>that it is being blocked by the current vacuum, exiting out would be a
>bonus, since in most scenarios you don't need to run that second vacuum,
>so it just ends up wasting resources (or clogging other things up with
>its lock).
What about a situation where someone has lazy vacuums cron'd and the
vacuum takes longer to complete than the interval between vacuums? You
could wind up with an ever-increasing queue of vacuums.

Erroring out with a "vacuum already in progress" message might be useful.
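For what it's worth, the cron pile-up can be guarded against on the client
side without any server change. A minimal sketch, assuming a POSIX system
with flock(1) and vacuumdb on the PATH (the lock-file path and vacuumdb
options here are illustrative, not from this thread):

```shell
#!/bin/sh
# Cron wrapper: if a previous vacuum run still holds the lock file,
# skip this run instead of letting invocations queue up behind it.
LOCKFILE=/tmp/nightly-vacuum.lock

# flock -n fails immediately if the lock is already held, so overlapping
# cron invocations exit quickly rather than blocking.
flock -n "$LOCKFILE" vacuumdb --all --quiet \
    || echo "vacuum already in progress; skipping this run" >&2
```

Because the wrapper exits quickly on contention, cron can keep firing on a
short interval without ever stacking up a queue of waiting vacuums.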
