vacuuming very large table problem

From: if <zeylie(at)gmail(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: vacuuming very large table problem
Date: 2008-02-15 10:56:09
Message-ID: f37dba8b0802150256x4bfe638duc921ef686b623e04@mail.gmail.com
Lists: pgsql-admin

Hello list!

We use PostgreSQL as the backend for our email gateway and keep all
emails in the database. We're running Postgres 7.4.8 (yes, I know it's
old) with a rather specific table schema (the application was designed
that way) -- all emails are split into 2 kB parts and stored in
pg_largeobject. So, long story short, I now have a catch-22 situation
-- the database is using about 0.7 TB and we are running out of space ;-)
I can delete some old data, but I cannot run VACUUM FULL to reclaim the
disk space (it takes far more than a full weekend), and I also cannot
dump/restore, as there is no free space for a second copy of the database.

So, with these restrictions in place, I figured I could somehow zero
out all the old entries in pg_largeobject, or even physically delete
the underlying files, and then rebuild all the necessary indexes.
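
To make it concrete, here is roughly what I had in mind -- just an
untested sketch, and the application table "messages" with its columns
"msg_lo" (the large object OID) and "received" (a timestamp) are
made-up names; I'm also assuming psycopg2 on the client side:

# Untested sketch: unlink old large objects in small batches so each
# transaction stays short. Table/column names are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=mailgw")   # placeholder connection string
cur = conn.cursor()

while True:
    # Grab a batch of old large-object OIDs from the application table.
    cur.execute("""
        SELECT msg_lo FROM messages
        WHERE received < now() - interval '1 year'
        LIMIT 1000
    """)
    oids = [row[0] for row in cur.fetchall()]
    if not oids:
        break
    # lo_unlink() removes the object's rows from pg_largeobject...
    for oid in oids:
        cur.execute("SELECT lo_unlink(%s)", (oid,))
    # ...then drop the matching application rows.
    cur.execute("DELETE FROM messages WHERE msg_lo IN %s", (tuple(oids),))
    conn.commit()   # commit per batch to keep transactions short

cur.close()
conn.close()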

What is the best way to do this?
As I understand it, dd'ing /dev/zero over these files would make
Postgres reinitialize the zeroed blocks, and after that I would still
need to run VACUUM FULL over 0.7 TB -- am I right?
And if I delete the files and then start the postmaster, there will be
lots of complaining, but will the most recent data survive?

How can I delete, for instance, the first 70% of the data reasonably fast?
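
For reference, one thing I have considered is running a plain
(non-FULL) VACUUM on pg_largeobject between deletion batches, so that
new mail reuses the freed pages even though the files on disk never
shrink. Another untested sketch, again assuming psycopg2 and enough
privileges to vacuum a system catalog:

# Untested sketch: plain VACUUM after a deletion batch so freed space
# in pg_largeobject gets reused (the files themselves will not shrink).
import psycopg2

conn = psycopg2.connect("dbname=mailgw")   # placeholder connection string
conn.autocommit = True   # VACUUM cannot run inside a transaction block
cur = conn.cursor()
cur.execute("VACUUM ANALYZE pg_largeobject")
cur.close()
conn.close()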

P.S. Please CC me, as I'm not subscribed yet.
Thanks in advance!

regards,
if
