Re: dump of 700 GB database

From: John R Pierce <pierce(at)hogranch(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: dump of 700 GB database
Date: 2010-02-10 07:35:43
Message-ID: 4B7261CF.4070202@hogranch.com
Lists: pgsql-general

karsten vennemann wrote:
> I have to dump a 700 GB database to clean out a lot of dead records
> on an Ubuntu server with postgres 8.3.8. What is the proper procedure
> to succeed with this - last time the dump stopped at 3.8 GB in size,
> I guess. Should I combine the -Fc option of pg_dump with the split
> command?
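
For what it's worth, the combination you ask about does work: pipe
pg_dump into split so that no single output file grows past a given
size (the stall around 3.8 GB sounds like it could be a per-file size
limit). A rough sketch; the database name, chunk size, and file prefix
below are only placeholders:

  # custom-format dump, cut into 1 GB pieces as it is written
  pg_dump -Fc mydb | split -b 1G - mydb.dump.

  # later: reassemble the pieces and feed them to pg_restore
  # (the target database must already exist)
  cat mydb.dump.* | pg_restore -d mydb_clean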

A VACUUM should clean out the dead tuples, and then running CLUSTER on
any large tables that are bloated will sort them out without needing
too much temporary space.
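
Something like this, assuming the bloated table is called bigtable and
has an index bigtable_pkey to cluster on (both names are made up here):

  # mark dead tuples reusable and refresh planner statistics
  psql -d mydb -c 'VACUUM ANALYZE;'

  # rewrite the bloated table in index order, returning dead space to the OS
  psql -d mydb -c 'CLUSTER bigtable USING bigtable_pkey;'

Bear in mind that CLUSTER takes an exclusive lock on the table while it
runs and needs free disk space on the order of the table's un-bloated
size to hold the rewritten copy.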
