Re: 7.4.6 pg_dump failed

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Marty Scholes <marty(at)outputservices(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: 7.4.6 pg_dump failed
Date: 2005-01-26 16:19:35
Message-ID: 15489.1106756375@sss.pgh.pa.us
Lists: pgsql-admin

Marty Scholes <marty(at)outputservices(dot)com> writes:
> A pg_dump of one table ran for 28:53:29.50 and produced a 30 GB dump
> before it aborted with:

> pg_dump: dumpClasses(): SQL command failed
> pg_dump: Error message from server: out of memory for query result
> pg_dump: The command was: FETCH 100 FROM _pg_dump_cursor

Even though it says "from server", this is actually an out-of-memory
problem inside pg_dump, or more specifically inside libpq.
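
Roughly what the -d code path amounts to, sketched against libpq (the
connection string, table name, and cursor name below are made up; this
is not pg_dump's actual source):

/*
 * Sketch of a cursor-based dump loop.  The key point: PQexec() buffers
 * the *entire* FETCH result -- all 100 rows, wide text values included --
 * in client memory before it returns, and that allocation is what fails
 * with "out of memory for query result".
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=mydb");      /* made-up dbname */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE c CURSOR FOR SELECT * FROM big_table"));

    for (;;)
    {
        /* All 100 rows live in this one PGresult at once. */
        PGresult *res = PQexec(conn, "FETCH 100 FROM c");

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        if (PQntuples(res) == 0)
        {
            PQclear(res);
            break;          /* cursor exhausted */
        }
        /* ... format output rows from res ... */
        PQclear(res);
    }

    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}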

> The table contains a text field that could contain several hundred MB of
> data, although always less than 2GB.

"Could contain"? What's the actual maximum field width, and how often
do very wide values occur? I don't recall the exact space allocation
algorithms inside libpq, but I'm wondering if it could choke on such a
wide row.
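
If you want the real number, it's quickest to ask the server directly;
a throwaway check along these lines (table and column names are
placeholders):

/*
 * Report the widest stored value in bytes.  octet_length() gives the
 * byte length libpq would have to allocate for that field.
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=mydb");    /* made-up dbname */
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    res = PQexec(conn, "SELECT max(octet_length(payload)) FROM big_table");

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("widest value: %s bytes\n", PQgetvalue(res, 0, 0));
    else
        fprintf(stderr, "%s", PQerrorMessage(conn));

    PQclear(res);
    PQfinish(conn);
    return 0;
}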

You might have better luck if you didn't use -d.
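
The reason omitting -d can help: without it, pg_dump moves the data with
COPY, and libpq's COPY interface (PQgetCopyData, new in 7.4) hands the
client one data row per call, so peak memory is bounded by the widest
single row rather than a 100-row FETCH batch.  A rough sketch of that
streaming pattern -- again with made-up names, not pg_dump's actual code:

/*
 * Stream a table with COPY.  Each PQgetCopyData() call returns one row
 * in a freshly allocated buffer, which the caller frees immediately, so
 * no single allocation ever has to cover more than one row.
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=mydb");    /* made-up dbname */
    PGresult *res;
    char     *row;
    int       len;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    res = PQexec(conn, "COPY big_table TO STDOUT");
    if (PQresultStatus(res) != PGRES_COPY_OUT)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }
    PQclear(res);

    /* Synchronous mode (last arg 0): block until one full row arrives. */
    while ((len = PQgetCopyData(conn, &row, 0)) > 0)
    {
        fwrite(row, 1, len, stdout);
        PQfreemem(row);
    }

    PQclear(PQgetResult(conn));     /* collect the final COPY status */
    PQfinish(conn);
    return 0;
}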

regards, tom lane
