Re: pg_dump / copy bugs with "big lines" ?

From: "Daniel Verite" <daniel(at)manitou-mail(dot)org>
To: "Alvaro Herrera" <alvherre(at)2ndquadrant(dot)com>
Cc: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>,"Robert Haas" <robertmhaas(at)gmail(dot)com>,"Jim Nasby" <Jim(dot)Nasby(at)bluetreble(dot)com>,"Ronan Dunklau" <ronan(dot)dunklau(at)dalibo(dot)com>,"pgsql-hackers" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_dump / copy bugs with "big lines" ?
Date: 2016-03-16 13:02:07
Message-ID: 28a1f376-e006-4ecf-93f5-133737652c5c@mm
Lists: pgsql-hackers

Daniel Verite wrote:

> # \copy bigtext2 from '/var/tmp/bigtext.sql'
> ERROR: 54000: out of memory
> DETAIL: Cannot enlarge string buffer containing 1073741808 bytes by 8191
> more bytes.
> CONTEXT: COPY bigtext2, line 1
> LOCATION: enlargeStringInfo, stringinfo.c:278

To get past that problem, I've tried tweaking the StringInfoData
used for COPY FROM, in the same way the original patch does in CopyOneRowTo().
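
For illustration, here's roughly the kind of change I tried (just a sketch
with a made-up name, not the actual patch): a variant of enlargeStringInfo()
that grows the buffer with repalloc_huge() instead of repalloc(), so the
MaxAllocSize check no longer applies. Note that StringInfoData still keeps
len/maxlen as plain int, so ~2GB remains a hard ceiling in any case:

  #include "postgres.h"
  #include "lib/stringinfo.h"

  /* Like enlargeStringInfo(), but allowed to grow past MaxAllocSize. */
  static void
  enlargeStringInfoHuge(StringInfo str, int needed)
  {
      Size    newlen;

      /* With int len/maxlen, INT_MAX is the hard cap even for huge allocs. */
      if (needed < 0 ||
          (Size) needed + (Size) str->len + 1 > (Size) INT_MAX)
          ereport(ERROR,
                  (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                   errmsg("out of memory"),
                   errdetail("Cannot enlarge string buffer containing %d bytes by %d more bytes.",
                             str->len, needed)));

      needed += str->len + 1;         /* total space required now */

      if (needed <= str->maxlen)
          return;                     /* got enough space already */

      /* Double the buffer size until it is big enough. */
      newlen = 2 * (Size) str->maxlen;
      while ((Size) needed > newlen)
          newlen = 2 * newlen;
      if (newlen > (Size) INT_MAX)
          newlen = (Size) INT_MAX;

      /* repalloc_huge() is not subject to the MaxAllocSize limit. */
      str->data = (char *) repalloc_huge(str->data, newlen);
      str->maxlen = (int) newlen;
  }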

It turns out that COPY then fails a bit later, when building a tuple
from the big line in heap_form_tuple():

tuple = (HeapTuple) palloc0(HEAPTUPLESIZE + len);

which fails because (HEAPTUPLESIZE + len) is again rejected as an
invalid allocation size, the value being 1468006476 bytes in my test.
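
The limit it trips here is the ordinary allocator's: palloc0() rejects any
request larger than MaxAllocSize (0x3FFFFFFF, i.e. 1 gigabyte - 1), and
1468006476 is well beyond that. Getting past it would presumably mean
switching that allocation to the huge variant, something like (again just a
sketch, not a proposed change):

  tuple = (HeapTuple) palloc_extended(HEAPTUPLESIZE + len,
                                      MCXT_ALLOC_HUGE | MCXT_ALLOC_ZERO);

and then presumably the same kind of change in every other place that
allocates or copies such a tuple afterwards.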

At this point it feels like a dead end, at least for the idea that extending
StringInfoData might suffice to enable COPYing such large rows.

Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite
