Copy Command out of memory

From: Kevin Keith <kkeith(at)borderware(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: Copy Command out of memory
Date: 2005-12-14 23:18:07
Message-ID: 43A0A82F.5070205@borderware.com
Lists: pgsql-admin

I was trying to run a bulk data load using the COPY command on PostgreSQL 8.1.0.

After loading about 3,500,000 records it ran out of memory - presumably
because the entire load runs as one large transaction. Does the COPY
command offer anything like Oracle's SQL*Loader feature where you can
specify the number of records to load between commits, or will I have to
break the input file into smaller files?
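The workaround I have in mind, if no such option exists, looks roughly like
this (file and table names are hypothetical; the psql command is echoed
rather than run here):

```shell
# Sketch: COPY has no "commit every N rows" option, so split the input
# and run one \copy per chunk - each psql invocation commits on its own.
seq 1 1000000 > bigload.dat          # stand-in for the real data file
split -l 500000 bigload.dat chunk_   # produces chunk_aa, chunk_ab, ...
for f in chunk_*; do
  # drop the leading 'echo' to actually run the load
  echo psql -d mydb -c "\\copy mytable from '$f'"
done
```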

Alternatively, can the transaction be bypassed altogether with the COPY
command? Any failure could easily be fixed by reloading the data anyway,
since the load goes into an empty table.
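The closest thing I've found to bypassing the overhead is that, as I
understand it, 8.1 can skip WAL logging when the target table is created or
truncated in the same transaction as the COPY (assuming WAL archiving is
off). A sketch, with a hypothetical table name:

```sql
BEGIN;
TRUNCATE loadtable;                      -- truncated in this transaction
COPY loadtable FROM '/tmp/bigload.dat';  -- WAL logging can be skipped here
COMMIT;
```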

Thanks,

Kevin
