Re: pg_dump / copy bugs with "big lines" ?

From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Daniel Verite <daniel(at)manitou-mail(dot)org>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Ronan Dunklau <ronan(dot)dunklau(at)dalibo(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_dump / copy bugs with "big lines" ?
Date: 2016-03-02 15:47:17
Message-ID: 20160302154717.GA422573@alvherre.pgsql
Lists: pgsql-hackers

Daniel Verite wrote:

> The cause of the crash turns out to be, in enlargeStringInfo():
>
> needed += str->len + 1; /* total space required now */
>
> needed is an int while str->len is an int64, so the addition
> overflows once the size has to grow beyond 2^31 bytes; the function
> then fails to enlarge the buffer and writes past its end.
>
> With that fixed via a local int64 copy of the variable, the backend
> no longer crashes and COPY big2 TO 'file' appears to work.

Great, thanks for debugging.
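
For onlookers, here is the failure mode in miniature. This is a
standalone sketch of the arithmetic only (the variable names just
mirror the ones above), not the actual stringinfo.c code:

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int64_t len = (int64_t) 1 << 31; /* stands in for str->len past 2^31 */
        int     needed = 16;             /* requested extra space, still 32-bit */

        /* Buggy pattern: the sum is computed in 64 bits but truncated
         * back into a 32-bit int, wrapping negative. */
        int     bad = needed + len + 1;

        /* Fix: keep the arithmetic in a 64-bit local. */
        int64_t good = (int64_t) needed + len + 1;

        printf("bad = %d, good = %lld\n", bad, (long long) good);
        return 0;
    }

On a two's-complement machine that prints bad = -2147483631 against
good = 2147483665, which matches the symptom described above: the
wrapped value keeps the buffer from being enlarged, and the subsequent
write runs past it.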

> However, getting the data to the client with \copy big2 to 'file'
> still produces this error in psql:
> lost synchronization with server: got message type "d"
> and leaves an empty file, so there are more problems to solve
> before rows of more than 2GB of text can go through.

Well, the CopyData message has an Int32 field for the message length.
I don't know the FE/BE protocol very well, but I suppose each row
corresponds to one CopyData message, or perhaps each column does.
In either case, it's not possible to go beyond 2GB without changing
the protocol ...
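
For reference, per the protocol docs the framing is a 'd' type byte,
then an Int32 length that includes its own four bytes, then the
payload, so a single message tops out just under 2^31 bytes. A rough
decoding sketch (illustrative C, not libpq's actual code):

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /*
     * Decode a CopyData header: Byte1('d'), then an Int32 length in
     * network byte order that counts itself, then the payload.
     * Returns the payload size, or -1 if the type byte isn't 'd'.
     */
    static int32_t
    copydata_payload_len(const unsigned char *buf)
    {
        uint32_t    raw;

        if (buf[0] != 'd')
            return -1;
        memcpy(&raw, buf + 1, 4);
        return (int32_t) ntohl(raw) - 4;
    }

    int
    main(void)
    {
        /* 'd', length 11 (the 4 length bytes plus 7 bytes of payload) */
        unsigned char msg[] = { 'd', 0, 0, 0, 11,
                                'b', 'i', 'g', ' ', 'r', 'o', 'w' };

        printf("payload bytes: %d\n", copydata_payload_len(msg));
        return 0;
    }

The signed length field is why the 2GB ceiling reappears at the
protocol layer even once the server-side StringInfo arithmetic is
fixed.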

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
