Re: COPY: row is too big

From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: Rob Sargent <robjsargent(at)gmail(dot)com>, vod vos <vodvos(at)zoho(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: COPY: row is too big
Date: 2017-01-05 18:46:59
Message-ID: 6b0a0f3e-cd0d-1927-6c4e-ba32cc24a0a3@aklaver.com
Lists: pgsql-general

On 01/05/2017 08:31 AM, Rob Sargent wrote:
>
>
> On 01/05/2017 05:44 AM, vod vos wrote:
>> I finally figured it out as follows:
>>
>> 1. modified the corresponding data type of the columns to the csv file
>>
>> 2. if null values existed, defined the data type as varchar. The null
>> values caused problems too.
>>
>> so 1100 columns work well now.
>>
>> This problem cost me three days. I have lots of csv data to COPY.
>>
>>
> Yes, you cost yourself a lot of time by not showing the original table
> definition into which you were trying to insert data.

Given that the table had 1100 columns I am not sure I wanted to see it:)

Still, the OP did give it to us in the description:

https://www.postgresql.org/message-id/15969913dd3.ea2ff58529997.7460368287916683127%40zoho.com
"I create a table with 1100 columns with data type of varchar, and hope
the COPY command will auto transfer the csv data that contains some
character and date, most of which are numeric."
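
A minimal sketch of that setup (column names hypothetical, trimmed to
three columns; the real table had 1100). Declaring every column varchar
stores even numeric fields as variable-length text, and across 1100
columns the tuple can exceed the roughly 8 kB heap-page limit that
produces the "row is too big" error, whereas matching the types to the
data packs each row tighter:

    -- all-text definition, along the lines of the original table
    CREATE TABLE wide_all_text (
        c1 varchar, c2 varchar, c3 varchar  -- ... up to c1100
    );

    -- typed definition, along the lines of the eventual fix
    CREATE TABLE wide_typed (
        c1 integer,           -- plain whole numbers
        c2 double precision,  -- measurements
        c3 varchar            -- free-form character data
    );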

In retrospect, what I should have pressed for was a more complete
description of the data. I underestimated this description:

"And some the values in the csv file contain nulls, do this null values
matter? "
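
For csv input the null question is largely a COPY option question: in
csv format an unquoted empty field is already read as NULL, and quoted
empty strings can be forced to NULL so they load into non-text columns.
A sketch, with placeholder table, column names, and path:

    -- FORCE_NULL turns quoted empty strings ("") into NULL for the
    -- listed columns, so they are accepted by integer/float columns
    COPY wide_typed (c1, c2, c3)
        FROM '/path/to/data.csv'
        WITH (FORMAT csv, HEADER true, FORCE_NULL (c1, c2));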

--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com
