Re: Problem w/ dumping huge table and no disk space

From: Alvaro Herrera <alvherre(at)atentus(dot)com>
To: David Ford <david(at)blue-labs(dot)org>
Cc: <pgsql-general(at)postgresql(dot)org>
Subject: Re: Problem w/ dumping huge table and no disk space
Date: 2001-09-07 21:42:32
Message-ID: Pine.LNX.4.33L2.0109071737460.5974-100000@aguila.protecne.cl
Lists: pgsql-general

On Fri, 7 Sep 2001, David Ford wrote:

> Help if you would please :)
>
> I have a 10 million+ row table and I've only got a couple hundred megs
> left. I can't delete any rows, pg runs out of disk space and crashes.
> I can't pg_dump w/ compressed, the output file is started, has the
> schema and a bit other info comprising about 650 bytes, runs for 30
> minutes and pg runs out of disk space and crashes. My pg_dump cmd is:
> "pg_dump -d -f syslog.tar.gz -F c -t syslog -Z 9 syslog".

Try piping the output through ssh or something similar. You don't have
to keep the dump on the local machine.

From the bigger machine, something like

ssh server-with-data "pg_dump <options>" > syslog-dump

or from the smaller machine,
pg_dump <options> | ssh big-machine "cat > syslog-dump"

should do the trick. Maybe you can even pipe the output directly into
psql or pg_restore. Make sure pg_dump writes its output to stdout; that
means dropping the -f option you're using now, so nothing gets written
to the local disk.
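Putting those pieces together, something along these lines should work
(hostnames "server-with-data" and "big-machine" are placeholders, and
the target database name for the restore is an assumption):

```shell
# Run from the machine with free disk space: pull a compressed
# custom-format dump of the syslog table over ssh. Note there is
# no -f option, so pg_dump writes to stdout and the full server
# never has to store the dump file.
ssh server-with-data "pg_dump -F c -Z 9 -t syslog syslog" > syslog.dump

# Later, restore the table into a database on this machine
# ("syslog" as the target database name is just an example):
pg_restore -d syslog syslog.dump
```

The same idea works in the other direction: run pg_dump locally and
pipe its stdout into `ssh big-machine "cat > syslog.dump"`, as shown
above, as long as nothing in the pipeline touches the local disk.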

HTH.

--
Alvaro Herrera (<alvherre[(at)]atentus(dot)com>)
