Re: [HACKERS] Re: vacuum timings

From: The Hermit Hacker <scrappy(at)hub(dot)org>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: [HACKERS] Re: vacuum timings
Date: 2000-01-22 00:11:27
Message-ID: Pine.BSF.4.21.0001211957590.23487-100000@thelab.hub.org
Lists: pgsql-hackers

On Fri, 21 Jan 2000, Tom Lane wrote:

> The Hermit Hacker <scrappy(at)hub(dot)org> writes:
> >> lock table for less duration, or read lock
>
> > if there is some way that we can work around the bug that I believe Tom
> > found with removing the lock altogether (i.e., making use of MVCC), I think
> > that would be the best option ... if not possible, at least get things
> > down to a table lock vs the whole database?
>
> Huh? VACUUM only requires an exclusive lock on the table it is
> currently vacuuming; there's no database-wide lock.
>
> Even a single-table exclusive lock is bad, of course, if it's a large
> table that's critical to a 24x7 application. Bruce was talking about
> the possibility of having VACUUM get just a write lock on the table;
> other backends could still read it, but not write it, during the vacuum
> process. That'd be a considerable step forward for 24x7 applications,
> I think.
>
> It looks like that could be done if we rewrote the table as a new file
> (instead of compacting-in-place), but there's a problem when it comes
> time to rename the new files into place. At that point you'd need to
> get an exclusive lock to ensure all the readers are out of the table too
> --- and upgrading from a plain lock to an exclusive lock is a well-known
> recipe for deadlocks. Not sure if this can be solved.

What would it take to reuse space, rather than compacting/truncating the file?

Right now, people vacuum the database to clear out old, deleted records and
truncate the tables ... if we were to change things so that an
insert/update found the next largest contiguous free block in the
table and reused it, then, theoretically, you would eventually hit a
fixed table size, assuming no new inserts and only updates/deletes, right?

Eventually, you'd have "holes" in the table, where an inserted record was
smaller than the "next largest contiguous free block" and what's left
over is too small for any further additions ... but I would think that
would greatly reduce how often you'd have to do a vacuum, and, if we
split out ANALYZE, you could use that to update statistics ...

To speed up the search for the "next largest contiguous free block", a
special table.FAT could be maintained, similar to an index?
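
Something like the following is what I'm picturing for the table.FAT
(again, just a made-up sketch in C, not anything that exists today;
FatEntry and lookup() are invented names): one entry per heap block,
kept ordered by free space, so finding a block with enough room is a
binary search instead of a scan of the whole file:

/* Toy sketch of the "table.FAT" idea -- hypothetical, not an existing
 * PostgreSQL structure. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    unsigned block;     /* heap block number */
    size_t   free;      /* contiguous free bytes in that block */
} FatEntry;

/* Sort comparator: ascending by free space. */
static int by_free(const void *a, const void *b)
{
    size_t fa = ((const FatEntry *) a)->free;
    size_t fb = ((const FatEntry *) b)->free;
    return (fa > fb) - (fa < fb);
}

/* Binary search for the first entry with at least `needed` bytes free. */
static FatEntry *lookup(FatEntry *fat, int n, size_t needed)
{
    int lo = 0, hi = n;
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (fat[mid].free < needed)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo < n ? &fat[lo] : NULL;
}

int main(void)
{
    FatEntry fat[] = { {7, 32}, {2, 512}, {11, 180}, {4, 96} };
    int n = (int) (sizeof(fat) / sizeof(fat[0]));

    qsort(fat, n, sizeof(FatEntry), by_free);

    FatEntry *e = lookup(fat, n, 150);
    if (e)
        printf("block %u has %zu bytes free\n", e->block, e->free);
    else
        printf("no block has enough room; extend the relation\n");
    return 0;
}

The map would have to be kept up to date on every insert/update/delete,
of course, which is where the index comparison comes in.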

Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy(at)hub(dot)org secondary: scrappy(at){freebsd|postgresql}.org
