Re: Problem w/ dumping huge table and no disk space

From: David Ford <david(at)blue-labs(dot)org>
To: Calvin Dodge <caldodge(at)fpcc(dot)net>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Problem w/ dumping huge table and no disk space
Date: 2001-09-08 05:59:54
Message-ID: 3B99B3DA.5090104@blue-labs.org
Lists: pgsql-general

>
>
>Do you have ssh available on your computer? Is an sshd daemon running
>on the other computer?
>
>Then try this:
>
>pg_dump mydatabase|ssh othersystem.com dd of=/home/me/database.dump
>
>The output of pg_dump on your computer will end up on the other
>computer in /home/me/database.dump.
>

The problem with that was that 7.1b had some broken stuff: psql and
pg_dump ate huge amounts of memory while buffering the data and were
eventually killed by the OOM handler, so they never got to the point of
actually dumping anything.  The solution was to run pg_dump from the new
box, where it was already fixed, and connect to the old server.  That
worked just fine.
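
For the archives, the workaround amounted to something like the
following, run on the new box (hostname and path are placeholders):

    # connect to the old server over the network and dump to local disk
    pg_dump -h old-server.example.com mydatabase > /home/me/database.dump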

Thank you for the suggestion.

On a side note (Tom, Bruce, etc.), is there some way to mitigate psql's
storing of all returned rows in memory?  Perhaps a 'swap' file?  If you
connect to a 1.7G database and issue a query that returns a lot of rows,
the entire result set is held in client memory, which with such a query
is likely to trip the OOM handler and get psql killed.
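
(For hand-written queries, something like a cursor fetch would
presumably sidestep it, pulling the rows down in batches instead of all
at once; the table and cursor names here are just made up:

    BEGIN;
    DECLARE bigscan CURSOR FOR SELECT * FROM huge_table;
    FETCH 1000 FROM bigscan;   -- repeat until no rows come back
    CLOSE bigscan;
    COMMIT;

but that means remembering to do it by hand every time, which is why I'm
asking whether psql itself could spool to disk.)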

David
