Re: out of memory during COPY .. FROM

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lanyon <tom+pgsql-hackers(at)oneshoeco(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: out of memory during COPY .. FROM
Date: 2011-02-03 15:38:31
Message-ID: AANLkTikgnZT5Tpct9GiaVka6HgX+pQJSF=ofi6qyjG=Z@mail.gmail.com
Lists: pgsql-hackers

On Tue, Feb 1, 2011 at 5:32 AM, Tom Lanyon
<tom+pgsql-hackers(at)oneshoeco(dot)com> wrote:
> List,
>
> Can anyone suggest where the below error comes from, given I'm attempting to load HTTP access log data with reasonably small row and column value lengths?
>
>        logs=# COPY raw FROM '/path/to/big/log/file' DELIMITER E'\t' CSV;
>        ERROR:  out of memory
>        DETAIL:  Cannot enlarge string buffer containing 1073712650 bytes by 65536 more bytes.
>        CONTEXT:  COPY raw, line 613338983
>
> It was suggested in #postgresql that I'm hitting the 1GB MaxAllocSize, but I would have thought that limit would only constrain individual column values or whole rows. It's worth noting that this error occurred after 613 million rows (roughly 100GB of data) had already been loaded, and that I'm running this COPY after the "CREATE TABLE raw ..." in a single transaction.
>
> I've looked at line 613338983 in the file being loaded (+/- 10 rows) and can't see anything out of the ordinary.
>
> Disclaimer: I know nothing of PostgreSQL's internals, please be gentle!
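
The arithmetic is consistent with the MaxAllocSize theory: 1073712650 + 65536 = 1073778186 bytes, just over the 1GB palloc cap (MaxAllocSize is 0x3fffffff, i.e. 1073741823 bytes). Since 613 million rows loaded fine before this, that suggests something accumulating across the whole load rather than a single oversized row.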

Is there by any chance a trigger on this table? Or any foreign keys?

What version of PostgreSQL?
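
Something like this should answer both (a quick sketch; the table name "raw" is assumed from your COPY above):

        logs=# SELECT version();
        logs=# \d raw   -- psql's describe output also lists triggers and FK constraints

        -- or query the catalogs directly for foreign keys in either direction
        -- ('raw'::regclass assumes the table name from the COPY above):
        SELECT conname
        FROM pg_constraint
        WHERE contype = 'f'
          AND (conrelid = 'raw'::regclass OR confrelid = 'raw'::regclass);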

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
