From: Ted Allen <tallen(at)blackducksoftware(dot)com>
To: Merlin Moncure <mmoncure(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Troubles dumping a very large table.
Date: 2008-12-29 03:11:47
Message-ID: 49583FF3.8030105@blackducksoftware.com
Lists: pgsql-performance
I was hoping to use pg_dump and not have to do a manual dump, but if that
latest solution (moving rows >300 MB elsewhere and dealing with them
later) does not work, I'll try that.
Thanks everyone.
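A minimal sketch of the "move oversized rows aside" approach described above. The table and column names (big_table, id, payload) and the staging table are hypothetical; octet_length() is the size measure used earlier in the thread, and the statements are meant to be fed to psql rather than executed here:

```python
# Hedged sketch: generate SQL to find and offload rows whose bytea column
# exceeds the ~300 MB post-expansion safe limit discussed in this thread.
# big_table, id, and payload are hypothetical names.

THRESHOLD = 300 * 1024 * 1024  # ~300 MB

find_oversized = f"""
SELECT id, octet_length(payload) AS bytes
FROM big_table
WHERE octet_length(payload) > {THRESHOLD}
ORDER BY bytes DESC;
"""

offload_oversized = f"""
CREATE TABLE big_table_oversized AS
  SELECT * FROM big_table WHERE octet_length(payload) > {THRESHOLD};

DELETE FROM big_table WHERE octet_length(payload) > {THRESHOLD};
"""

print(find_oversized)
print(offload_oversized)
```

After pg_dump succeeds on the trimmed table, the offloaded rows can be exported separately (for example with COPY BINARY, as suggested below in the thread).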
Merlin Moncure wrote:
> On Fri, Dec 26, 2008 at 12:38 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
>> Ted Allen <tallen(at)blackducksoftware(dot)com> writes:
>>
>>> 600 MB, measured by get_octet_length on the data. If there is a better way to measure the row/cell size, please let me know, because we thought it was the >1 GB problem too. We thought we were being conservative by getting rid of the larger rows, but I guess we need to get rid of even more.
>>>
>> Yeah, the average expansion of bytea data in COPY format is about 3X :-(
>> So you need to get the max row length down to around 300mb. I'm curious
>> how you got the data in to start with --- were the values assembled on
>> the server side?
>>
>
> Wouldn't binary style COPY be more forgiving in this regard? (if so,
> the OP might have better luck running COPY BINARY)...
>
> This also goes for libpq traffic...large (>1 MB) bytea values definitely
> want to be passed using the binary switch in the protocol.
>
> merlin
>
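Tom's ~3X figure can be sanity-checked with a quick back-of-the-envelope script. This is a sketch under stated assumptions (escape-format bytea output plus COPY text mode's own backslash doubling; the exact rules the server applies may differ), not a measurement of actual server output:

```python
import os

def copy_text_escape_len(data: bytes) -> int:
    """Estimate the size of a bytea value in COPY text output.

    Assumptions: printable ASCII (0x20-0x7e) other than backslash passes
    through as one character; a backslash becomes "\\\\" in escape format
    and is doubled again by COPY text escaping (4 chars); every other byte
    becomes "\\ooo", whose backslash COPY doubles (5 chars).
    """
    total = 0
    for b in data:
        if b == 0x5C:            # backslash -> 4 output characters
            total += 4
        elif 0x20 <= b <= 0x7E:  # printable ASCII -> unchanged
            total += 1
        else:                    # non-printable -> "\\ooo", 5 characters
            total += 5
    return total

sample = os.urandom(1_000_000)   # random bytes, the worst-ish case
ratio = copy_text_escape_len(sample) / len(sample)
print(f"estimated text-mode expansion for random bytes: {ratio:.2f}x")
```

For uniformly random bytes this lands in the 3-4x range, consistent with the 3X average quoted above; COPY BINARY, by contrast, writes the raw bytes plus a few bytes of per-field framing, so it stays close to 1x, which is Merlin's point.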