Re: Largest DATABASE

From: Rod Taylor <pg(at)rbt(dot)ca>
To: Jamil <jamil(dot)figueira(at)ibi(dot)com(dot)br>
Cc: "'pgsql-benchmarks(at)postgresql(dot)org'" <pgsql-benchmarks(at)postgresql(dot)org>
Subject: Re: Largest DATABASE
Date: 2004-09-14 20:46:06
Message-ID: 1095194766.98565.102.camel@jester
Lists: pgsql-benchmarks

> I would like to know which was the largest database that you ever
> had to administer. My database is about 160GB, and I have some
> problems backing it up using the pg_dump command, and I also have
> some problems running vacuum.

Tell me about it. We started doing more fine-grained scheduling of
vacuum a little while back, when we passed the 120GB mark.
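
As a rough sketch of what that fine-grained scheduling looks like (the
table names and frequencies here are made up, not our actual setup):

    # crontab: vacuum the busy table hourly, everything else nightly
    0 * * * *  psql -d mydb -c 'VACUUM ANALYZE orders;'
    0 3 * * *  vacuumdb --analyze mydb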

You can try out the vacuum daemon, but it really didn't help me (little
tables were vacuumed too rarely, big tables too often).

Check VACUUM VERBOSE output to see if you need to vacuum all of the
structures at the current rate.
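
For example (database and table names are placeholders):

    psql -d mydb -c 'VACUUM VERBOSE orders;'

The INFO lines report how many dead row versions each pass removes; a
table that consistently yields few removable rows can go longer between
vacuums.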

We're still doing the dump, but running pg_dump on a different machine.
Most of the CPU time it eats up is in formatting the data for the dump.
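
Something along these lines (host and database names are placeholders):

    pg_dump -h db.example.com -U postgres -Fc production > production.dump

The -Fc custom archive format does its formatting and compression on the
machine running pg_dump, so that CPU load stays off the database server.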

Another option (which can take some effort) is to take the database
offline, fsync, take a filesystem snapshot, restart the database, then
tar up the snapshot and remove it.
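
A sketch of that procedure, assuming the data directory sits on an LVM
volume (device names and paths are hypothetical):

    pg_ctl -D /var/lib/pgsql/data stop      # take the database offline
    sync                                    # flush dirty buffers to disk
    lvcreate --size 1G --snapshot --name pgsnap /dev/vg0/pgdata
    pg_ctl -D /var/lib/pgsql/data start     # database is back up here
    mount /dev/vg0/pgsnap /mnt/pgsnap
    tar czf /backup/pgdata.tar.gz -C /mnt/pgsnap .
    umount /mnt/pgsnap
    lvremove -f /dev/vg0/pgsnap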

Database downtime can be short enough that, if clients reattempt a failed
connection for some time (a couple of minutes), they'll simply see a
hiccup rather than a failure.
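
For instance, a wrapper that retries the connection every few seconds for
up to two minutes (entirely illustrative):

    # retry every 5 seconds, 24 attempts = 2 minutes
    for i in $(seq 1 24); do
        psql -h db.example.com -d mydb -c 'SELECT 1;' && break
        sleep 5
    done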

Looking forward to PITR making backups much friendlier.
