Re: pg_dump / copy bugs with "big lines" ?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Daniel Verite" <daniel(at)manitou-mail(dot)org>
Cc: "Alvaro Herrera" <alvherre(at)2ndquadrant(dot)com>, "Robert Haas" <robertmhaas(at)gmail(dot)com>, "Jim Nasby" <Jim(dot)Nasby(at)bluetreble(dot)com>, "Ronan Dunklau" <ronan(dot)dunklau(at)dalibo(dot)com>, "pgsql-hackers" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_dump / copy bugs with "big lines" ?
Date: 2016-03-01 21:48:43
Message-ID: 27428.1456868923@sss.pgh.pa.us
Lists: pgsql-hackers

"Daniel Verite" <daniel(at)manitou-mail(dot)org> writes:
> I've tried adding another large field to see what happens if the whole row
> exceeds 2GB, and data goes to the client rather than to a file.
> My idea was to check if the client side was OK with that much data on
> a single COPY row, but it turns out that the server is not OK anyway.

BTW, is anyone checking the other side of this, i.e. "COPY IN" with equally
wide rows? There doesn't seem to be a lot of value in supporting dump
if you can't reload ...

regards, tom lane
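[Editor's note: a minimal sketch of the round-trip test Tom is asking about, under the assumptions of Daniel's scenario. Table, column, and file names are illustrative only; each field stays just under the 1GB varlena limit so that only the combined COPY line exceeds 2GB.]

```sql
-- Hypothetical reload test; names are illustrative, not from the thread.
CREATE TABLE wide_row_test (a text, b text);

-- Each field is just under 1GB (the per-value varlena limit),
-- so the single COPY output line for the row exceeds 2GB in total.
INSERT INTO wide_row_test
  SELECT repeat('x', 1024 * 1024 * 1024 - 100),
         repeat('y', 1024 * 1024 * 1024 - 100);

-- Dump the row out, then try to load it back: the "COPY IN" side.
\copy wide_row_test TO 'wide_row.out'
TRUNCATE wide_row_test;
\copy wide_row_test FROM 'wide_row.out'
```

Whether the final `\copy ... FROM` succeeds is exactly the open question here: a dump that produces such a line is only useful if COPY IN accepts it on reload.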
