Full vacuuming of BIG tables takes too long

From: "Eugene M(dot) Zheganin" <emz(at)norma(dot)perm(dot)ru>
To: pgsql-admin(at)postgresql(dot)org
Subject: Full vacuuming of BIG tables takes too long
Date: 2003-05-22 04:39:56
Message-ID: 54953117843.20030522103956@norma.perm.ru
Lists: pgsql-admin

Hi, all.

Just an example: I have a table in an ISP billing database that
grows over every two-month period to about 35,000,000 records,
taking 13 GB of disk space. At that size a nightly 'vacuum analyze'
is not enough, because even after it runs the table keeps growing
(though not very fast).
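
(For reference, the nightly job is basically just a plain VACUUM
ANALYZE run from cron; the database and table names below are only
placeholders for this example:)

    # crontab entry -- run every night at 03:00
    0 3 * * *  psql -d billing -c "VACUUM ANALYZE billing_records;"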

When I try 'vacuum full' it takes too long. I can only wait
5-6 hours (and that is not enough), because it locks the table and
the number of processes waiting on their inserts becomes too high.
So it is much faster (40-50 minutes) to dump the entire database,
then drop it, recreate it and restore it.
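
(Roughly, what I do now looks like this; the database name is just
an example:)

    pg_dump billing > billing.dump     # dump the whole database
    dropdb billing                     # drop it
    createdb billing                   # recreate it empty
    psql -d billing -f billing.dump    # restore from the dump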

I know that 'vacuum_mem = 65536' is not enough to make 'vacuum full'
fast enough, but I want to ask: if I decide to increase that number,
would 512 MB, for example, be better?
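
(I mean changing it in postgresql.conf roughly like this; if I
understand correctly the value is in kilobytes, so 512 MB would be
524288:)

    # postgresql.conf
    vacuum_mem = 524288    # was 65536 (64 MB); 524288 KB = 512 MB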

Are there any other init parameters that could help me?

Or, with this amount of data, is dump/recreate/restore the best
approach?

WBR, Eugene.
