Re: full vacuum of a very large table

From: raghu ram <raghuchennuru(at)gmail(dot)com>
To: Nic Chidu <nic(at)chidu(dot)net>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: full vacuum of a very large table
Date: 2011-03-29 16:21:48
Message-ID: AANLkTi=CBjmnmPuKFGXx9gKR7RE=d88j=30SupN+h-LF@mail.gmail.com
Lists: pgsql-admin

On Tue, Mar 29, 2011 at 9:26 PM, Nic Chidu <nic(at)chidu(dot)net> wrote:

> Got a situation where a 130-million-row (137 GB) table needs to be brought
> down to the 10 million most recent records, with the least amount of
> downtime.
>
> Would a full vacuum be faster with:
> - 120 million rows deleted and 10 million active (delete most of them, then
> full vacuum), or
> - 10 million deleted and 120 million active (delete in small batches and full
> vacuum after each delete)?
>
> Any other suggestions?
>

The best approach is to delete the unneeded rows, take a dump of the table,
and restore it back into the database. A dump and reload will be faster than
a VACUUM FULL.
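
As a rough sketch of that workflow, assuming a database named mydb, a table
named big_table, and a created_at column used to pick the most recent rows
(all of these names and the cutoff are placeholders, adjust them to your
schema), it could look like this:

    # 1. Delete the rows that should not survive (placeholder predicate).
    psql -d mydb -c "DELETE FROM big_table WHERE created_at < now() - interval '1 year';"

    # 2. Dump only that table; the dump contains just the remaining live rows,
    #    plus the commands to recreate the table and its indexes.
    pg_dump -t big_table -f big_table.sql mydb

    # 3. Drop the bloated table and restore the compact copy from the dump.
    #    The table is unavailable between these two steps.
    psql -d mydb -c "DROP TABLE big_table;"
    psql -d mydb -f big_table.sql

    # 4. Refresh planner statistics on the rebuilt table.
    psql -d mydb -c "ANALYZE big_table;"

Restoring from the dump rewrites the table and rebuilds its indexes from
scratch, so the result is as compact as VACUUM FULL would produce, at the
cost of a short outage while the table is dropped and reloaded.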

--Raghu Ram

>
> Thanks,
>
> Nic
>
