RE: Vaccum (Was: Re: [HACKERS] Hot Backup Ability)

From: Peter Mount <petermount(at)it(dot)maidstone(dot)gov(dot)uk>
To: "'The Hermit Hacker'" <scrappy(at)hub(dot)org>, Bruce Momjian <maillist(at)candle(dot)pha(dot)pa(dot)us>
Cc: Michael Richards <miker(at)scifair(dot)acadiau(dot)ca>, pgsql-hackers(at)postgresql(dot)org
Subject: RE: Vaccum (Was: Re: [HACKERS] Hot Backup Ability)
Date: 1999-06-30 14:02:11
Message-ID: 1B3D5E532D18D311861A00600865478CA029@exchange1.nt.maidstone.gov.uk
Thread:
Lists: pgsql-hackers

Hmmm, leaving out the truncate would remove some of the hassle of what
to do with segmented tables. As long as a new tuple goes into the
beginning of the last dead area, this should work.

That said, we would still need some form of truncate, but perhaps it
could be done when vacuum is run manually, and not while it is running
automatically?
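The idea above, reusing dead slots for new tuples so the table stops growing, with truncation deferred to a manual vacuum, can be sketched in a toy model. This is purely illustrative: the class and method names are hypothetical, and the real on-disk layout is pages of tuples, not a Python list.

```python
class Table:
    """Toy model: None marks a dead (reusable) slot."""

    def __init__(self):
        self.slots = []   # tuple storage; None = dead space
        self.free = []    # indices of dead slots, lowest first

    def insert(self, tuple_data):
        """Place a new tuple in the earliest dead slot, else append."""
        if self.free:
            i = self.free.pop(0)
            self.slots[i] = tuple_data
            return i
        self.slots.append(tuple_data)
        return len(self.slots) - 1

    def delete(self, i):
        """Mark a slot dead; keep the free list sorted so low slots fill first."""
        self.slots[i] = None
        self.free.append(i)
        self.free.sort()

    def truncate(self):
        """Manual-vacuum step: drop trailing dead slots to shrink the table."""
        while self.slots and self.slots[-1] is None:
            self.slots.pop()
        self.free = [i for i in self.free if i < len(self.slots)]
```

With reuse in place, inserts after deletes never extend the table; truncate only matters for giving trailing space back to the filesystem.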

Peter

--
Peter Mount
Enterprise Support
Maidstone Borough Council
Any views stated are my own, and not those of Maidstone Borough Council.

-----Original Message-----
From: The Hermit Hacker [mailto:scrappy(at)hub(dot)org]
Sent: 30 June 1999 13:41
To: Bruce Momjian
Cc: Michael Richards; pgsql-hackers(at)postgreSQL(dot)org
Subject: Vaccum (Was: Re: [HACKERS] Hot Backup Ability)

On Wed, 30 Jun 1999, Bruce Momjian wrote:

> > Would it be easy to come up with a scheme for the vacuum function to
> > defrag a set number of pages and such, release its locks if there is
> > another process blocked and waiting, then resume after that process
> > is finished?
>
> That is a very nice idea. We could just release and reacquire the
> lock, knowing that if there is someone waiting, they would get the
> lock. Maybe someone can comment on this?

My first thought is: doesn't this still require the 'page-reusing'
functionality to exist? Which would virtually eliminate the problem...

If not, then why can't something be done where this is transparent
altogether? Have some sort of mechanism that keeps track of "dead
space"...a trigger that says: after X tuples have been deleted, do an
automatic vacuum of the database?

The automatic vacuum would be done in a way similar to Michael's
suggestion above...scan through for the first 'dead space', lock the
table for a short period of time, and "move records up". How many
tuples could you move in a very short period of time, such that it is
virtually transparent to end users?
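The "lock briefly, move a few records up, release" loop could look roughly like the sketch below. The batch size, function names, and list-of-slots model are all hypothetical; the point is only that each lock acquisition does a bounded amount of compaction, so waiting backends get in between batches. Per the suggestion later in this message, trailing dead space is left alone rather than truncated.

```python
import threading

BATCH = 4                     # illustrative: tuples moved per lock hold
lock = threading.Lock()       # stands in for the table lock

def compact_step(table):
    """Hold the lock just long enough to move up to BATCH live tuples
    into the earliest dead slots (None), then release.
    Returns True if more compaction work remains."""
    with lock:
        moved = 0
        while moved < BATCH:
            try:
                dead = table.index(None)          # first dead slot
            except ValueError:
                return False                      # no dead space at all
            # find the next live tuple after the dead slot
            live = next((i for i in range(dead + 1, len(table))
                         if table[i] is not None), None)
            if live is None:
                # only trailing dead space remains; leave truncation
                # to the manual vacuum
                return False
            table[dead] = table[live]
            table[live] = None
            moved += 1
        return True

def incremental_vacuum(table):
    """Compact in short bursts; other processes can run between bursts."""
    while compact_step(table):
        pass
```

Each `compact_step` call is the short lock window; between calls, any blocked process can acquire the lock and proceed.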

As a table gets larger and larger, a few 'dead tuples' aren't going to
make much of a difference in performance, so make the threshold some
percentage of the size of the table, so that as it grows, the number of
'dead tuples' has to be larger...
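That size-scaled trigger amounts to a one-line formula: vacuum only once the dead-tuple count exceeds a fixed floor plus a percentage of the live tuples. The constants below are illustrative guesses, not anything PostgreSQL actually shipped.

```python
BASE_THRESHOLD = 50    # floor, so tiny tables aren't vacuumed constantly
SCALE_FACTOR = 0.20    # big tables: 20% of live tuples must be dead

def needs_vacuum(live_tuples, dead_tuples):
    """True once dead space crosses the size-scaled threshold."""
    return dead_tuples > BASE_THRESHOLD + SCALE_FACTOR * live_tuples
```

A 1,000-tuple table would then tolerate 250 dead tuples before triggering, while a 100,000-tuple table would tolerate about 20,000.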

And leave out the truncate at the end...

The 'manual vacuum' would still need to be run periodically, for the
truncate and for stats...

Just a thought...:)

Marc G. Fournier   ICQ#7615664   IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy(at)hub(dot)org   secondary: scrappy(at){freebsd|postgresql}.org
