Re: LOCK TABLE & speeding up mass data loads

From: Ron Johnson <ron(dot)l(dot)johnson(at)cox(dot)net>
To: PgSQL Performance ML <pgsql-performance(at)postgresql(dot)org>
Subject: Re: LOCK TABLE & speeding up mass data loads
Date: 2003-01-27 09:08:20
Message-ID: 1043658500.818.398.camel@haggis
Lists: pgsql-performance

On Sun, 2003-01-26 at 17:10, Curt Sampson wrote:
> On Sun, 25 Jan 2003, Ron Johnson wrote:
>
> > > Oh, and you're using COPY right?
> >
> > No. Too much data manipulation to do 1st. Also, by committing every
> > X thousand rows, if the process must be aborted there's no huge
> > rollback, and the script can then skip to the last committed row
> > and pick up from there.
>
> I don't see how the amount of data manipulation makes a difference.
> Where you now issue a BEGIN, issue a COPY instead. Where you now INSERT,
> just print the data for the columns, separated by tabs. Where you now
> issue a COMMIT, end the copy.

Yes, create an input file for COPY. Great idea.
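For concreteness, here's a rough Python sketch of that conversion
(psycopg2 and all the table/file names below are just illustrative
assumptions; any client interface that can feed COPY would do):

    import io
    import psycopg2

    conn = psycopg2.connect("dbname=test")    # hypothetical connection
    cur = conn.cursor()

    buf = io.StringIO()
    with open("raw_input.dat") as src:        # hypothetical raw input
        for raw in src:
            # Stand-in for the real data-manipulation step.
            fields = raw.rstrip("\n").split(",")
            # One tab-separated line per row; \N marks SQL NULL.
            buf.write("\t".join(f or "\\N" for f in fields) + "\n")
    buf.seek(0)

    # A single COPY replaces the whole BEGIN / INSERT loop / COMMIT.
    cur.copy_from(buf, "mytable", sep="\t", null="\\N")
    conn.commit()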

However, if I understand you correctly, then to avoid rolling back
and re-running a complete COPY (which may entail millions of rows),
I'd have to have thousands of separate input files (which would get
processed sequentially).
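
In other words, the load script itself would have to slice the input
stream into chunk-sized COPYs and commit each one. Roughly (same
assumptions and made-up names as in the sketch above):

    import io
    import psycopg2

    CHUNK = 1000                              # commit every CHUNK rows

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    def flush(lines):
        # COPY one chunk and commit it independently of the rest.
        cur.copy_from(io.StringIO("".join(lines)), "mytable",
                      sep="\t", null="\\N")
        conn.commit()

    pending = []
    with open("load.tsv") as src:             # hypothetical input file
        for line in src:
            pending.append(line)
            if len(pending) == CHUNK:
                flush(pending)
                pending = []
    if pending:                               # final partial chunk
        flush(pending)

That way an abort loses at most CHUNK rows of work, but all the
restart bookkeeping still lives in the client.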

Here's what I'd like to see:
COPY table [ ( column [, ...] ) ]
    FROM { 'filename' | stdin }
    [ [ WITH ]
          [ BINARY ]
          [ OIDS ]
          [ DELIMITER [ AS ] 'delimiter' ]
          [ NULL [ AS ] 'null string' ] ]
    [ COMMIT EVERY ... ROWS WITH LOGGING ]    <<<<<<<<<<<<<
    [ SKIP ... ROWS ]                         <<<<<<<<<<<<<

This way, if I'm loading 25M rows, I can have it commit every, say,
1000 rows, and if it pukes halfway through, then when I restart the
COPY it can SKIP past what's already been loaded and proceed apace.
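
In the meantime, the SKIP half can be approximated client-side by
persisting the committed row count and fast-forwarding past it on
restart. Extending the chunk sketch above (the state-file name is,
again, invented):

    import io
    import os
    import psycopg2

    CHUNK = 1000
    STATE = "load.offset"                     # hypothetical state file

    # How many rows earlier runs have already committed.
    done = int(open(STATE).read()) if os.path.exists(STATE) else 0

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    def flush(lines, total):
        # COPY one chunk, commit, then advance the high-water mark.
        cur.copy_from(io.StringIO("".join(lines)), "mytable",
                      sep="\t", null="\\N")
        conn.commit()
        with open(STATE, "w") as f:           # only after COMMIT succeeds
            f.write(str(total))

    pending = []
    seen = 0
    with open("load.tsv") as src:
        for line in src:
            seen += 1
            if seen <= done:                  # the SKIP ... ROWS part
                continue
            pending.append(line)
            if len(pending) == CHUNK:         # the COMMIT EVERY part
                flush(pending, seen)
                pending = []
    if pending:
        flush(pending, seen)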

--
+---------------------------------------------------------------+
| Ron Johnson, Jr. mailto:ron(dot)l(dot)johnson(at)cox(dot)net |
| Jefferson, LA USA http://members.cox.net/ron.l.johnson |
| |
| "Fear the Penguin!!" |
+---------------------------------------------------------------+
