Re: dump of 700 GB database

From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: karsten vennemann <karsten(at)terragis(dot)net>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: dump of 700 GB database
Date: 2010-02-10 07:29:36
Message-ID: 162867791002092329y7bb4c438ycf0149a1accb44a7@mail.gmail.com
Lists: pgsql-general

Hello

2010/2/10 karsten vennemann <karsten(at)terragis(dot)net>

> I have to dump a 700 GB database on an Ubuntu server running Postgres
> 8.3.8, in order to clean out a lot of dead records. What is the proper
> procedure to succeed with this? Last time the dump stopped at about
> 3.8 GB, I guess. Should I combine the -Fc option of pg_dump with the
> split command? I thought something like
> "pg_dump -Fc test | split -b 1000m - testdb.dump"
> might work?
> Karsten
>
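That pipeline should work; the stop at 3.8 GB may simply be a file-size
limit of the filesystem or of the shell, which splitting into smaller
chunks avoids. A minimal sketch, assuming the database is named test as
in your command (the chunk size and file names are only examples):

  pg_dump -Fc test | split -b 1000m - testdb.dump

  # restore: reassemble the chunks, then feed the file to pg_restore
  # (into a freshly created, empty database)
  cat testdb.dump* > testdb_full.dump
  pg_restore -d test testdb_full.dump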

But do you need a dump at all? Doesn't VACUUM FULL work?
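If the goal is only to reclaim the space held by dead rows, something
like this may be enough, with no dump and restore at all (an untested
sketch; VACUUM FULL takes an exclusive lock on each table while it
runs, and on 8.3 it is usually followed by a reindex, because there
VACUUM FULL can bloat indexes):

  vacuum full verbose;
  reindex database test;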

Regards
Pavel Stehule

>
> Terra GIS LTD
> Seattle, WA, USA
>
>
