
Re: 7.4.6 pg_dump failed

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Marty Scholes <marty(at)outputservices(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: 7.4.6 pg_dump failed
Date: 2005-01-26 16:19:35
Lists: pgsql-admin
Marty Scholes <marty(at)outputservices(dot)com> writes:
> A pg_dump of one table ran for 28:53:29.50 and produced a 30 GB dump 
> before it aborted with:

> pg_dump: dumpClasses(): SQL command failed
> pg_dump: Error message from server: out of memory for query result
> pg_dump: The command was: FETCH 100 FROM _pg_dump_cursor

Even though it says "from server", this is actually an out-of-memory
problem inside pg_dump, or more specifically inside libpq.

> The table contains a text field that could contain several hundred MB of 
> data, although always less than 2GB.

"Could contain"?  What's the actual maximum field width, and how often
do very wide values occur?  I don't recall the exact space allocation
algorithms inside libpq, but I'm wondering if it could choke on such a
wide row.

You might have better luck if you didn't use -d.
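For context: in 7.4-era pg_dump, -d meant "dump data as INSERT commands," which is the mode that drives the FETCH cursor seen in the error and makes libpq buffer each fetched row whole. Omitting -d falls back to the default COPY format, which streams row data instead of materializing it in a result set. A minimal sketch of the suggested invocation (the database and table names here are placeholders, not from the original report):

```shell
# Sketch: dump one table in the default COPY format (no -d/--inserts),
# so data is streamed via COPY rather than buffered from FETCH in libpq.
# "mydb" and "mytable" are placeholder names for illustration.
pg_dump -t mytable mydb > mytable_dump.sql
```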

			regards, tom lane

