
Re: LOCK TABLE & speeding up mass data loads

From: "Shridhar Daithankar" <shridhar_daithankar(at)persistent(dot)co(dot)in>
To: PgSQL Performance ML <pgsql-performance(at)postgresql(dot)org>
Subject: Re: LOCK TABLE & speeding up mass data loads
Date: 2003-01-27 09:45:07
Message-ID: 3E354CFB.32732.A38120C@localhost
Lists: pgsql-performance
On 27 Jan 2003 at 3:08, Ron Johnson wrote:

> Here's what I'd like to see:
> COPY table [ ( column [, ...] ) ]
>     FROM { 'filename' | stdin }
>     [ [ WITH ] 
>           [ BINARY ] 
>           [ OIDS ]
>           [ DELIMITER [ AS ] 'delimiter' ]
>           [ NULL [ AS ] 'null string' ] ]
>     [COMMIT EVERY ... ROWS WITH LOGGING]  <<<<<<<<<<<<<
>     [SKIP ... ROWS]          <<<<<<<<<<<<<
> 
> This way, if I'm loading 25M rows, I can have it commit every, say,
> 1000 rows, and if it pukes 1/2 way thru, then when I restart the 
> COPY, it can SKIP past what's already been loaded, and proceed apace.

IIRC, there is a hook in \copy (the psql command, not the PostgreSQL COPY
command) for how many transactions you would like to use. I remember having
benchmarked that and concluded that doing the copy in one transaction is the
fastest way of doing it.

Don't have a PostgreSQL installation handy, me being in Linux at the moment,
but this is definitely possible.
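
For what it's worth, the COMMIT EVERY / SKIP behaviour can already be
approximated on the client side by feeding COPY in batches from a small
script. A rough sketch with Python and psycopg2 follows; the table name
"mytable", the file "data.tsv", the connection string and the batch size are
only placeholders, and as said above a single-transaction COPY will probably
still be faster:

    # Sketch: commit every BATCH rows, and on restart skip the rows that were
    # already committed.  mytable/data.tsv/dbname are placeholders.
    import io
    import psycopg2

    BATCH = 1000      # commit every 1000 rows
    SKIP = 0          # rows already committed by an earlier, interrupted run

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    def flush(rows):
        # COPY one batch of raw tab-separated lines and commit it.
        cur.copy_expert("COPY mytable FROM STDIN", io.StringIO("".join(rows)))
        conn.commit()

    with open("data.tsv") as f:
        for _ in range(SKIP):       # skip past what is already loaded
            next(f)
        rows, total = [], SKIP
        for line in f:
            rows.append(line)
            if len(rows) == BATCH:
                flush(rows)
                total += len(rows)
                rows = []
        if rows:                    # final, partial batch
            flush(rows)
            total += len(rows)

    print("loaded up to row", total)
    conn.close()

If the load dies, note the last printed row count, set SKIP to it and rerun.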

Bye
 Shridhar

--
I still maintain the point that designing a monolithic kernel in 1991 is a
fundamental error.  Be thankful you are not my student.  You would not get a
high grade for such a design :-)
(Andrew Tanenbaum to Linus Torvalds)

