600 MB measured by octet_length() on data. If there is a better way to measure the row/cell size, please let me know, because we thought it was the >1 GB problem too. We thought we were being conservative by getting rid of the larger rows, but I guess we need to get rid of even more.
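For context on Tom's point below: in 8.x-era PostgreSQL, COPY in text mode renders bytea using the escape format, where each non-printable byte becomes a four-character octal escape (\nnn) and backslashes are doubled, so binary-heavy data can expand roughly fourfold in text form. A 600 MB value of mostly non-printable bytes would therefore render as well over 1 GB of text. A minimal sketch of that estimate (`escaped_text_size` is a hypothetical helper, and this ignores COPY's own additional backslash doubling, so it is a lower bound):

```python
def escaped_text_size(data: bytes) -> int:
    """Rough lower-bound size of a bytea value in 8.x escape-format
    text output: printable ASCII stays 1 char, a backslash becomes
    '\\\\' (2 chars), everything else becomes an octal escape (4 chars)."""
    size = 0
    for b in data:
        if b == 0x5C:              # backslash -> doubled
            size += 2
        elif 0x20 <= b <= 0x7E:    # printable ASCII passes through
            size += 1
        else:                      # non-printable -> \nnn
            size += 4
    return size

print(escaped_text_size(b"abc"))        # 3
print(escaped_text_size(b"\x00\xff"))   # 8
```

By this estimate, a 600 MB cell only needs to be a bit over one-third non-printable bytes before its text form crosses the 1 GB string limit.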
From: Tom Lane [tgl(at)sss(dot)pgh(dot)pa(dot)us]
Sent: Wednesday, December 24, 2008 12:49 PM
To: Ted Allen
Subject: Re: [PERFORM] Troubles dumping a very large table.
Ted Allen <tallen(at)blackducksoftware(dot)com> writes:
> during the upgrade. The trouble is, when I dump the largest table,
> which is 1.1 Tb with indexes, I keep getting the following error at the
> same point in the dump.
> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR: invalid string enlargement
> request size 1
> pg_dump: The command was: COPY public.large_table (id, data) TO stdout;
> As you can see, the table is two columns, one column is an integer, and
> the other is bytea. Each cell in the data column can be as large as
> 600mb (we had bigger rows as well but we thought they were the source of
> the trouble and moved them elsewhere to be dealt with separately.)
600mb measured how? I have a feeling the problem is that the value
exceeds 1Gb when converted to text form...
regards, tom lane