
Re: Troubles dumping a very large table.

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Merlin Moncure" <mmoncure(at)gmail(dot)com>
Cc: "Ted Allen" <tallen(at)blackducksoftware(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Troubles dumping a very large table.
Date: 2008-12-26 20:18:50
Lists: pgsql-performance
"Merlin Moncure" <mmoncure(at)gmail(dot)com> writes:
> On Fri, Dec 26, 2008 at 12:38 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Yeah, the average expansion of bytea data in COPY format is about 3X :-(
>> So you need to get the max row length down to around 300mb.  I'm curious
>> how you got the data in to start with --- were the values assembled on
>> the server side?
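
[A quick way to sanity-check that expansion figure against real data, offered here only as a sketch and not part of the original message: "mydb", "bigtable", and "payload" are placeholder names, and encode(..., 'escape') merely approximates COPY's text output, which adds one more round of backslash escaping on top.]

    # Sketch: compare raw bytea sizes with their escape-format text sizes
    # for the widest rows. Database, table, and column names are placeholders.
    psql -d mydb -c "SELECT octet_length(payload) AS raw_bytes,
                            length(encode(payload, 'escape')) AS escaped_chars
                     FROM bigtable
                     ORDER BY octet_length(payload) DESC LIMIT 5"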

> Wouldn't binary style COPY be more forgiving in this regard?  (if so,
> the OP might have better luck running COPY BINARY)...

Yeah, if he's willing to use COPY BINARY directly.  AFAIR there is not
an option to get pg_dump to use it.  But maybe "pg_dump -s" together
with a manual dump of the table data is the right answer.  It probably
beats shoving some of the rows aside as he's doing now...
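
[A sketch of that combination, not taken from the thread itself; "mydb", "newdb", and "bigtable" are placeholder names. The schema goes through pg_dump -s, while the oversized table's data is moved separately with binary COPY.]

    # Sketch: schema-only dump plus a separate binary dump of the big table.
    # Database and table names here are placeholders.
    pg_dump -s mydb > schema.sql
    psql -d mydb -c "COPY bigtable TO STDOUT WITH BINARY" > bigtable.copy

    # Restore: recreate the schema, then feed the binary data back in.
    psql -d newdb -f schema.sql
    psql -d newdb -c "COPY bigtable FROM STDIN WITH BINARY" < bigtable.copy

[Data for the remaining tables would still need an ordinary pg_dump pass, for example one that leaves out the big table via -T/--exclude-table.]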

			regards, tom lane
