Can anyone suggest where the error below comes from, given that I'm loading HTTP access log data whose rows and column values are all reasonably small?
logs=# COPY raw FROM '/path/to/big/log/file' DELIMITER E'\t' CSV;
ERROR: out of memory
DETAIL: Cannot enlarge string buffer containing 1073712650 bytes by 65536 more bytes.
CONTEXT: COPY raw, line 613338983
It was suggested in #postgresql that I'm hitting the 1GB MaxAllocSize, but I would have thought that would only be a constraint on individual column values or on whole rows. It's worth noting that the error occurs after 613 million rows (somewhere around 100GB of data) have already been loaded, and that I'm running this COPY in the same transaction as the preceding "CREATE TABLE raw ...".
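(To back up the "reasonably small" claim: assuming plain awk copes with a file this size, something like

    # report the longest line's number and length
    # (character count rather than bytes, but close enough for ASCII logs)
    awk 'length($0) > m { m = length($0); n = NR } END { print n, m }' /path/to/big/log/file

should show whether any single physical line comes anywhere near 1GB.)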
I've looked at line 613338983 in the file being loaded (+/- 10 rows) and can't see anything out of the ordinary.
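(Extracted with something along these lines, in case the exact method matters:

    # the reported line +/- 10 rows
    sed -n '613338973,613338993p' /path/to/big/log/file

)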
Disclaimer: I know nothing of PostgreSQL's internals; please be gentle!