
Re: Troubles dumping a very large table.

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Ted Allen <tallen(at)blackducksoftware(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Troubles dumping a very large table.
Date: 2008-12-26 17:38:38
Message-ID: 13141.1230313118@sss.pgh.pa.us
Lists: pgsql-performance
Ted Allen <tallen(at)blackducksoftware(dot)com> writes:
> 600MB, measured by octet_length() on the data. If there is a better way to measure the row/cell size, please let me know, because we thought it was the >1GB problem too. We thought we were being conservative by getting rid of the larger rows, but I guess we need to get rid of even more.
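
(For reference, a minimal sketch of that measurement: sorting by octet_length() finds the widest values. The table and column names below are placeholders, not the poster's actual schema.)

    -- Find the ten widest bytea values; "big_table", "id", and "data"
    -- are hypothetical names standing in for the real schema.
    SELECT id, octet_length(data) AS bytes
      FROM big_table
     ORDER BY octet_length(data) DESC
     LIMIT 10;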

Yeah, the average expansion of bytea data in COPY format is about 3X :-(
So you need to get the max row length down to around 300MB.  I'm curious
how you got the data in to start with --- were the values assembled on
the server side?
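
(The expansion comes from COPY's text format escaping: non-printable bytes are written out as multi-character octal sequences such as \\nnn, so raw size roughly triples on average. A rough sketch of applying that figure, again with hypothetical table and column names, to flag rows whose COPY output could still exceed the 1GB limit:)

    -- Estimate COPY text-format size at ~3X the raw byte count; 3X is
    -- an average, not a hard rule, so treat the result as a heuristic.
    -- "big_table", "id", and "data" are hypothetical names.
    SELECT id,
           octet_length(data)     AS raw_bytes,
           octet_length(data) * 3 AS est_copy_bytes
      FROM big_table
     WHERE octet_length(data) * 3 > 1073741824;  -- 1GB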

			regards, tom lane

