On Mon, 27 Jan 2003, Ron Johnson wrote:
> > I don't see how the amount of data manipulation makes a difference.
> > Where you now issue a BEGIN, issue a COPY instead. Where you now INSERT,
> > just print the data for the columns, separated by tabs. Where you now
> > issue a COMMIT, end the copy.
> Yes, create an input file for COPY. Great idea.
That's not quite what I was thinking of. Don't create an input file,
just send the commands directly to the server (if your API supports it).
If worst comes to worst, you could maybe open up a subprocess for a psql
and write to its standard input.
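The psql-subprocess fallback described above can be sketched roughly as follows. This is a hedged illustration, not code from the thread; the database name `mydb` and table `mytable` are placeholders, and it assumes `psql` is on the PATH and can authenticate non-interactively.

```python
import subprocess

# Hypothetical database and table names; adjust for your setup.
COPY_CMD = ["psql", "-d", "mydb", "-c", "COPY mytable FROM STDIN"]

def load_via_psql(lines, command=COPY_CMD):
    """Pipe tab-separated COPY rows into psql's standard input.

    Each element of `lines` should be one row in COPY text format
    (tab-separated columns, newline-terminated). Returns psql's
    exit status.
    """
    proc = subprocess.Popen(command, stdin=subprocess.PIPE, text=True)
    for line in lines:
        proc.stdin.write(line)
    proc.stdin.close()  # EOF ends the COPY
    return proc.wait()
```

A call might look like `load_via_psql(["1\tfoo\n", "2\tbar\n"])`; closing stdin is what terminates the COPY, so the subprocess sees end-of-input rather than an explicit `\.` line.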
> However, if I understand you correctly, then if I want to be able
> to avoid rolling back and re-running a complete COPY (which may
> entail millions of rows), I'd have to have thousands of separate
> input files (which would get processed sequentially).
But you can probably commit much less often than every 1,000 rows;
every 10,000 or 100,000 would probably be more practical.
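The batching idea above can be sketched with a driver-level COPY. This is an assumption-laden illustration using psycopg2's `copy_expert` (not something the thread specifies); `mytable` and the column layout are placeholders. The row-formatting helper shows the tab-separated COPY text format the earlier message describes, with `\N` for NULLs.

```python
import io

def copy_format_row(row):
    """Format one row as a line of COPY text input:
    tab-separated columns, NULL as \\N, and backslash/tab/newline
    escaped so the data can't be mistaken for delimiters."""
    out = []
    for val in row:
        if val is None:
            out.append(r"\N")
        else:
            s = str(val)
            s = (s.replace("\\", "\\\\")
                  .replace("\t", "\\t")
                  .replace("\n", "\\n"))
            out.append(s)
    return "\t".join(out) + "\n"

def load_in_batches(conn, table, rows, batch_size=100_000):
    """Stream rows to the server with COPY, committing every
    batch_size rows so a late failure only loses one batch,
    not the whole multi-million-row load."""
    cur = conn.cursor()
    buf, n = io.StringIO(), 0
    for row in rows:
        buf.write(copy_format_row(row))
        n += 1
        if n % batch_size == 0:
            buf.seek(0)
            cur.copy_expert(f"COPY {table} FROM STDIN", buf)
            conn.commit()
            buf = io.StringIO()
    if buf.tell():  # flush the final partial batch
        buf.seek(0)
        cur.copy_expert(f"COPY {table} FROM STDIN", buf)
        conn.commit()
```

With `batch_size=100_000`, a failure in row 3,000,001 rolls back only the current batch; the first thirty committed batches stay loaded, which is the trade-off being discussed above.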
Curt Sampson <cjs(at)cynic(dot)net> +81 90 7737 2974 http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light. --XTC