Re: LOCK TABLE & speeding up mass data loads

From: Curt Sampson <cjs(at)cynic(dot)net>
To: Ron Johnson <ron(dot)l(dot)johnson(at)cox(dot)net>
Cc: PgSQL Performance ML <pgsql-performance(at)postgresql(dot)org>
Subject: Re: LOCK TABLE & speeding up mass data loads
Date: 2003-01-27 10:23:10
Message-ID: Pine.NEB.4.51.0301271921270.393@angelic.cynic.net
Lists: pgsql-performance

On Mon, 27 Jan 2003, Ron Johnson wrote:

> > I don't see how the amount of data manipulation makes a difference.
> > Where you now issue a BEGIN, issue a COPY instead. Where you now INSERT,
> > just print the data for the columns, separated by tabs. Where you now
> > issue a COMMIT, end the copy.
>
> Yes, create an input file for COPY. Great idea.

That's not quite what I was thinking of. Don't create an input file;
just send the COPY data directly to the server (if your API supports
it). If worst comes to worst, you could open a subprocess running psql
and write to its standard input.
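
For instance, here's a rough sketch of the psql-subprocess approach in
Python. "mydb", "mytable", and generate_rows() are just placeholders
for your own database, table, and data source; any client API with COPY
support would let you do the same thing without the subprocess.

    import subprocess

    # Placeholders: adjust "mydb", "mytable", and generate_rows() to
    # your own setup.
    psql = subprocess.Popen(
        ["psql", "-d", "mydb", "-c", "COPY mytable FROM STDIN"],
        stdin=subprocess.PIPE, text=True)

    for row in generate_rows():
        # One line per row: columns separated by tabs, \N for NULL.
        psql.stdin.write("\t".join(row) + "\n")

    psql.stdin.close()   # end-of-data; the COPY commits here
    psql.wait()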

> However, if I understand you correctly, then if I want to avoid having
> to roll back and re-run a complete COPY (which may entail millions of
> rows), I'd have to have thousands of separate input files (which would
> get processed sequentially).

Right.

But you can probably commit much less often than every 1,000 rows;
committing every 10,000 or 100,000 rows would probably be more
practical.
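
A sketch of that batching, again by piping to psql (same placeholder
names as above). Each COPY runs as its own transaction, so ending one
every N rows gives you a commit point without any intermediate files:

    import subprocess

    BATCH = 10000                        # rows per COPY, hence per commit

    # Placeholders again: "mydb", "mytable", generate_rows().
    psql = subprocess.Popen(["psql", "-d", "mydb"],
                            stdin=subprocess.PIPE, text=True)

    count = 0
    for row in generate_rows():
        if count % BATCH == 0:
            if count > 0:
                psql.stdin.write("\\.\n")        # terminate previous COPY
            psql.stdin.write("COPY mytable FROM STDIN;\n")
        psql.stdin.write("\t".join(row) + "\n")
        count += 1

    if count > 0:
        psql.stdin.write("\\.\n")                # terminate the last COPY
    psql.stdin.close()
    psql.wait()

If a batch fails, you only lose (and re-run) that one batch rather than
the whole load.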

cjs
--
Curt Sampson <cjs(at)cynic(dot)net> +81 90 7737 2974 http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light. --XTC
