Re: Troubles dumping a very large table.

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Ted Allen <tallen(at)blackducksoftware(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Troubles dumping a very large table.
Date: 2008-12-26 17:38:38
Message-ID: 13141.1230313118@sss.pgh.pa.us
Lists: pgsql-performance

Ted Allen <tallen(at)blackducksoftware(dot)com> writes:
> 600MB, measured by get_octet_length on the data. If there is a better way to measure row/cell size, please let me know, because we thought it was the >1GB problem too. We thought we were being conservative by getting rid of the larger rows, but I guess we need to get rid of even more.

Yeah, the average expansion of bytea data in COPY format is about 3X :-(
So you need to get the max row length down to around 300MB. I'm curious
how you got the data in to start with --- were the values assembled on
the server side?
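
For reference, a quick way to spot the rows that will blow past that
limit (a minimal sketch -- the table name "mytable", key column "id",
and bytea column "data" are placeholders for the actual schema):

    -- Hypothetical schema: mytable(id, data bytea).
    -- octet_length() reports the logical byte count of the value;
    -- pg_column_size() reports the stored (possibly TOAST-compressed) size.
    SELECT id,
           octet_length(data)   AS raw_bytes,
           pg_column_size(data) AS stored_bytes
    FROM mytable
    WHERE octet_length(data) > 300 * 1024 * 1024  -- ~300MB ceiling
    ORDER BY raw_bytes DESC;

Anything that query returns is a candidate for trimming before the dump.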

regards, tom lane
