Re: Bulkloading using COPY - ignore duplicates?

From: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Lee Kindness <lkindness(at)csl(dot)co(dot)uk>, Peter Eisentraut <peter_e(at)gmx(dot)net>, Jim Buttafuoco <jim(at)buttafuoco(dot)net>, PostgreSQL Development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2002-01-03 03:24:26
Message-ID: 200201030324.g033OQe25713@candle.pha.pa.us
Lists: pgsql-hackers

Tom Lane wrote:
> > How about for TODO:
> > * Allow COPY to report error lines and continue; requires
> > nested transactions; optionally allow error codes to be specified
>
> Okay, that seems reasonable.

Good. Now that I think of it, nested transactions don't seem to be
required. We already allow pg_dump to dump a database using INSERTs,
and we don't wrap those INSERTs in a single transaction when we load
them:

CREATE TABLE "test" (
    "x" integer
);

INSERT INTO "test" VALUES (1);
INSERT INTO "test" VALUES (2);

Should we be wrapping these INSERTs in a transaction? Can we do COPY
with each row being its own transaction?
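
As a rough sketch of the difference (assuming the table also has a
unique index on "x", which the dump above doesn't create): if the
INSERTs are loaded outside an explicit transaction block, each
statement commits on its own, so a duplicate key only costs that one
row:

CREATE TABLE "test" (
    "x" integer,
    UNIQUE ("x")
);

INSERT INTO "test" VALUES (1);  -- commits immediately
INSERT INTO "test" VALUES (1);  -- fails with a unique violation
INSERT INTO "test" VALUES (2);  -- still loads; only the bad row is lost

COPY, being a single statement, currently aborts the whole load on the
first bad row, which is the behavior the TODO item would make optional.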

--
Bruce Momjian | http://candle.pha.pa.us
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
