> -----Original Message-----
> From: pgsql-novice-owner(at)postgresql(dot)org
> [mailto:pgsql-novice-owner(at)postgresql(dot)org]On Behalf Of Ron Johnson
> Sent: Tuesday, May 21, 2002 4:40 PM
> To: PgSQL Novice ML
> Subject: [NOVICE] Better way to bulk-load millions of CSV records into
> Currently, I've got a python script using pyPgSQL that
> parses the CSV record, creates a string that is a big
> "INSERT INTO VALUES (...)" command, then, execute() it.
> top shows that this method uses postmaster with ~70% CPU
> utilization, and python with ~15% utilization.
> Still, it's only inserting ~190 recs/second. Is there a
> better way to do this, or am I constrained by the hardware?
> Instead of python and postmaster having to do a ton of data
> xfer over sockets, I'm wondering if there's a way to send a
> large number of csv records (4000, for example) in one big
> chunk to a stored procedure and have the engine process it
You could change your Python script to output a COPY command, which is
*much* faster than individual INSERT commands.
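
As a rough sketch of what that could look like (table and column names here are
made up for illustration): instead of building INSERT strings, convert the CSV
rows into PostgreSQL's tab-separated COPY text format and emit one
`COPY ... FROM stdin;` block, which you can then pipe through psql.

```python
import csv
import io

def csv_to_copy_sql(csv_text, table, columns):
    """Turn CSV text into a single COPY ... FROM stdin block.

    Backslashes, tabs, and newlines in the data are escaped as
    required by PostgreSQL's text COPY format.
    """
    out = io.StringIO()
    out.write("COPY %s (%s) FROM stdin;\n" % (table, ", ".join(columns)))
    for row in csv.reader(io.StringIO(csv_text)):
        escaped = [
            field.replace("\\", "\\\\")
                 .replace("\t", "\\t")
                 .replace("\n", "\\n")
            for field in row
        ]
        out.write("\t".join(escaped) + "\n")
    out.write("\\.\n")  # end-of-data marker for COPY
    return out.getvalue()

if __name__ == "__main__":
    # Hypothetical sample data and table name
    sample = "1,Alice,NY\n2,Bob,LA\n"
    print(csv_to_copy_sql(sample, "people", ["id", "name", "city"]))
```

Write the result to a file and load it with `psql -f bulkload.sql yourdb`
(or feed it straight to the server over the same connection). One COPY
moves all the rows in a single statement, so you avoid the per-INSERT
parse/plan/network overhead that's eating your CPU.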
Joel BURTON | joel(at)joelburton(dot)com | joelburton.com | aim: wjoelburton
Knowledge Management & Technology Consultant