From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | "Merlin Moncure" <mmoncure(at)gmail(dot)com> |
Cc: | "Ted Allen" <tallen(at)blackducksoftware(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: Troubles dumping a very large table. |
Date: | 2008-12-26 20:18:50 |
Message-ID: | 14875.1230322730@sss.pgh.pa.us |
Lists: | pgsql-performance |
"Merlin Moncure" <mmoncure(at)gmail(dot)com> writes:
> On Fri, Dec 26, 2008 at 12:38 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Yeah, the average expansion of bytea data in COPY format is about 3X :-(
>> So you need to get the max row length down to around 300mb. I'm curious
>> how you got the data in to start with --- were the values assembled on
>> the server side?
> Wouldn't binary style COPY be more forgiving in this regard? (if so,
> the OP might have better luck running COPY BINARY)...
Yeah, if he's willing to use COPY BINARY directly. AFAIR there is not
an option to get pg_dump to use it. But maybe "pg_dump -s" together
with a manual dump of the table data is the right answer. It probably
beats shoving some of the rows aside as he's doing now...
regards, tom lane
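(A rough sketch of that schema-plus-manual-data approach; database, table, and file names are placeholders, and the exact \copy options vary a bit across versions:

    # schema only, no data
    pg_dump -s mydb > schema.sql

    # dump the big table's data in binary COPY format
    psql -d mydb -c "\copy bigtable TO 'bigtable.dat' WITH BINARY"

    # restore: recreate the schema, then load the binary data back
    psql -d newdb -f schema.sql
    psql -d newdb -c "\copy bigtable FROM 'bigtable.dat' WITH BINARY"

Since -s skips data for every table, the rest of the data would still come from a regular dump, e.g. pg_dump with the big table excluded via -T or similar.)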