
How to make lazy VACUUM of one table run in several transactions ?

From: Hannu Krosing <hannu(at)tm(dot)ee>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: How to make lazy VACUUM of one table run in several transactions ?
Date: 2005-04-24 00:43:22
Lists: pgsql-hackers
Hi Tom, 

I've got the impression that you have worked most actively on VACUUM and
so I ask you this directly instead of addressing the pgsql-hackers list
in general. Feel free to correct me :)

I have a problem, that I think can be solved by splitting the vacuum up
to run in several transactions, each running for no more than X minutes.

The problem itself is having a database with small (5-50k rows), fast-
changing tables and huge (5-25M rows), slower-changing tables, all
running in a 24/7 setup.

The small table needs to be kept small by constant vacuuming (a loop
doing a vacuum on this table after each 15 sec interval) to keep up with
its traffic.

The problem appears when the big table needs to be vacuumed, as this
vacuum on big table prevents the vacuum on small table from freeing up
the space used by dead tuples.

And the situation is made *worse* by running the vacuum with
vacuum_cost_limit to reduce its I/O impact (a kind of priority
inversion), as the vacuum then runs even longer and slows down
operations on the small table even more.
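For reference, the cost-based vacuum delay is driven by settings along
these lines (postgresql.conf; the values shown here are illustrative,
not taken from any particular setup):

```ini
# Cost-based vacuum delay (postgresql.conf) -- illustrative values only.
# A non-zero delay throttles VACUUM's I/O, which is exactly what makes
# a single long-running VACUUM of a big table take even longer.
vacuum_cost_delay = 10     # ms to sleep once the cost limit is reached
vacuum_cost_limit = 200    # accumulated page-cost budget before sleeping
```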

The fastest fix seems to be changing the vacuum command to run in
several transactions.

So what should be done in addition to changing 

    lazy_vacuum_rel(onerel, vacstmt);

to check for some time/page_cnt limit after each heap page (near the
start of the main loop in lazy_scan_heap()), and, if it is reached,
stop, clean up indexes, and return the blkno of the next page needing
to be vacuumed - and then replacing the call to
lazy_vacuum_rel(onerel, vacstmt); in vacuum.c with the following loop?

    next_page_to_vacuum = 0;
    while (next_page_to_vacuum < RelationGetNumberOfBlocks(onerel))
    {
        /* take a fresh snapshot for each batch */
        ActiveSnapshot = CopySnapshot(GetTransactionSnapshot());

        /* modified lazy_vacuum_rel() would resume where the previous
         * batch stopped and return the next block number to vacuum */
        next_page_to_vacuum = lazy_vacuum_rel(onerel, vacstmt);
    }
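As a standalone illustration of the idea (this is not PostgreSQL code -
the Rel struct, the batch size, and the function names are all invented
for the sketch), the resume-where-you-left-off pattern behaves like
this:

```c
#define VACUUM_BATCH_PAGES 4  /* hypothetical per-transaction page limit */

/* Simulated relation: a fixed number of heap pages. */
typedef struct
{
    int nblocks;
    int pages_vacuumed;
} Rel;

/* Sketch of a resumable vacuum pass: process at most
 * VACUUM_BATCH_PAGES pages starting at start_blkno, then return the
 * block number at which the next transaction should resume. */
static int
vacuum_batch(Rel *rel, int start_blkno)
{
    int blkno = start_blkno;
    int done = 0;

    while (blkno < rel->nblocks && done < VACUUM_BATCH_PAGES)
    {
        rel->pages_vacuumed++;  /* stand-in for scanning one heap page */
        blkno++;
        done++;
    }
    /* here the real code would clean up indexes and commit */
    return blkno;
}

/* Driver mirroring the proposed loop in vacuum.c; returns how many
 * batches ("transactions") were needed to cover the whole relation. */
static int
vacuum_in_batches(Rel *rel)
{
    int next_page = 0;
    int transactions = 0;

    while (next_page < rel->nblocks)
    {
        /* a fresh snapshot / new transaction would start here */
        next_page = vacuum_batch(rel, next_page);
        transactions++;
    }
    return transactions;
}
```

With 10 pages and a 4-page batch limit, the relation is covered in
three short batches instead of one long transaction.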


Must some locks also be released and reacquired inside this loop, or is
there something else I should keep in mind when trying to do this?

The operations in this loop need not be cheap - I'm happy if I can keep
individual transactions below 5 to 10 minutes, though 1-2 min would be
even better.

P.S. One other typical case where long transactions are disastrous is 
Slony1's use of LISTEN/NOTIFY (the non-indexed table pg_listener, after
having grown to 100Mb, is not too responsive), though there the real
solution for Slony1 would be switching to polling instead of interrupt
(notify) mode for high-volume databases.

Hannu Krosing <hannu(at)tm(dot)ee>
