Re: VACUUM process running for a long time

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Jan Krcmar <honza801(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: VACUUM process running for a long time
Date: 2010-04-16 09:33:22
Message-ID: v2rdcc563d11004160233r4d981993haf2898a4340561@mail.gmail.com
Lists: pgsql-general

On Wed, Apr 14, 2010 at 8:01 AM, Jan Krcmar <honza801(at)gmail(dot)com> wrote:
> Hi,
>
> I've got a database (about 300G) and it's still growing.
>
> I am inserting new data (about 2G/day) into the database (there is
> only one table there) and I'm also deleting about 2G/day (data older
> than a month).
>
> The documentation says one should run VACUUM if there are many
> changes in the database, but vacuumdb never finishes before the
> next batch of data needs to be imported.
>
> Is there any technique that can solve this problem?

Are you running autovacuum? Has it been tuned to be more aggressive
than the default? I've got some large, heavily updated dbs for which
I've had to turn autovacuum_vacuum_cost_delay down to 2 or 5 ms to
get it to keep up. But I've got a pretty good IO subsystem that can
handle the more aggressive autovacuum.
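
Something like this in postgresql.conf is the kind of tuning I mean
(the numbers here are just examples to illustrate, not a
recommendation; tune them to your own hardware):

    # postgresql.conf -- example of a more aggressive autovacuum
    autovacuum = on
    autovacuum_vacuum_cost_delay = 2ms   # default 20ms; lower = less throttling
    autovacuum_vacuum_cost_limit = 1000  # default -1, i.e. use vacuum_cost_limit (200)
    autovacuum_naptime = 30s             # default 1min; look for work more often

These can be changed with a plain reload (pg_ctl reload); no restart
is needed.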

If you're doing one big insert and one big delete a day, then you
should be able to just kick off a regular VACUUM at the end of the
delete, with a low cost_delay and a higher cost_limit, and that might
keep up. However, if you're already on the edge of what your IO
subsystem can do, it isn't gonna help much, because it's gonna slow
down the whole system too much.
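
A sketch of what that end-of-delete job could run (the table name
"mytable" is made up, and the numbers are again just examples):

    -- run right after the daily delete finishes
    SET vacuum_cost_delay = 2;     -- milliseconds; 0 disables throttling entirely
    SET vacuum_cost_limit = 1000;  -- default 200; more work done between sleeps
    VACUUM VERBOSE mytable;        -- VERBOSE so you can see what got reclaimed

Both settings are per-session, so they don't disturb anything else
running on the box.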
