dump of 700 GB database

From: "karsten vennemann" <karsten(at)terragis(dot)net>
To: <pgsql-general(at)postgresql(dot)org>
Subject: dump of 700 GB database
Date: 2010-02-10 06:48:38
Message-ID: AC5865D2B70748F9A9F7F2F5B85821D1@snuggie
Lists: pgsql-general

I need to dump a 700 GB database in order to clean out a lot of dead records, on an Ubuntu server running PostgreSQL 8.3.8. What is the proper procedure to make this succeed? Last time the dump stopped at about 3.8 GB. Should I combine the -Fc option of pg_dump with the split command?

I thought something like

pg_dump -Fc test | split -b 1000m - testdb.dump

might work?
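
For the restore side, I assume I could reassemble the pieces into a single archive and then feed that to pg_restore, roughly like the sketch below (testdb and testdb_full.dump are just placeholder names for the target database and the reassembled file):

# reassemble the split pieces in order (split names them testdb.dumpaa, testdb.dumpab, ...)
cat testdb.dump* > testdb_full.dump
# restore the custom-format archive into the target database
pg_restore -d testdb testdb_full.dump

Is that the right approach, or is there a better way?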
Karsten

Terra GIS LTD
Seattle, WA, USA

