Re: Shortening time of vacuum analyze

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Francisco Reyes <lists(at)natserv(dot)com>
Cc: pgsql General List <pgsql-general(at)postgresql(dot)org>
Subject: Re: Shortening time of vacuum analyze
Date: 2002-01-30 16:22:58
Message-ID: 23778.1012407778@sss.pgh.pa.us
Lists: pgsql-general

Francisco Reyes <lists(at)natserv(dot)com> writes:
> Until 7.2 release is out I am looking for a way to optimize a vacuum
> analyze.

7.2RC2 is going to mutate into 7.2 *real* soon now, probably next week.
My best advice to you is not to wait any longer.

> Nightly doing delete of about 6 million records and then re-merging.
> Previously I was doing truncate, but this was an issue if a user tried to
> use the system while we were loading. Now we are having a problem while
> the server is running vacuum analyzes.

> Does vacuum alone take less time?

Yes, but with so many deletes I'm sure that it's the space-compaction
part that's killing you.
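
For reference, a plain VACUUM only reclaims dead-row space, while VACUUM
ANALYZE additionally refreshes the planner's statistics. A minimal sketch,
using a hypothetical table name:

    -- reclaim space from dead rows only
    VACUUM big_table;

    -- reclaim space and also update planner statistics
    VACUUM ANALYZE big_table;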

The only useful workaround I can think of is to create a new table,
fill it with the data you want, then DROP the old table and ALTER RENAME
the new one into place. However, this will not work if there are other
tables with foreign-key references to the big table. You also have a
problem if you can't shut off updates to the old table while this is
going on.
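
A minimal sketch of that swap, assuming a hypothetical table big_table
with no foreign keys pointing at it and with writes paused while the swap
runs:

    -- build a fresh copy holding only the rows to keep
    CREATE TABLE big_table_new AS
        SELECT * FROM big_table
        WHERE load_date = CURRENT_DATE;  -- hypothetical predicate for the rows to keep

    -- swap the new table into place
    DROP TABLE big_table;
    ALTER TABLE big_table_new RENAME TO big_table;

    -- note: indexes, constraints, and privileges from the old table
    -- have to be recreated on the new one by hand.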

7.2's lazy VACUUM ought to be perfect for you, though.
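
Under 7.2, a plain VACUUM no longer takes an exclusive lock, so it can run
while the table stays in use; the old compacting behaviour is still
available as VACUUM FULL. A minimal sketch, again with a hypothetical
table name:

    -- 7.2 and later: lazy, non-blocking vacuum
    VACUUM ANALYZE big_table;

    -- old-style exclusive-lock space compaction, only when really needed
    VACUUM FULL big_table;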

regards, tom lane
