Re: Bulkloading using COPY - ignore duplicates?

From: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Lee Kindness <lkindness(at)csl(dot)co(dot)uk>, Peter Eisentraut <peter_e(at)gmx(dot)net>, Jim Buttafuoco <jim(at)buttafuoco(dot)net>, PostgreSQL Development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2002-01-03 18:24:26
Message-ID: 200201031824.g03IOQN23254@candle.pha.pa.us
Lists: pgsql-hackers

Bruce Momjian wrote:
> Tom Lane wrote:
> > > How about for TODO:
> > > * Allow COPY to report error lines and continue; requires
> > > nested transactions; optionally allow error codes to be specified
> >
> > Okay, that seems reasonable.
>
> Good. Now that I think of it, nested transactions don't seem required.
> We already allow pg_dump to dump a database using INSERTs, and we don't
> put those inserts in a single transaction when we load them:
>
> CREATE TABLE "test" (
> "x" integer
> );
>
> INSERT INTO "test" VALUES (1);
> INSERT INTO "test" VALUES (2);
>
> Should we be wrapping these INSERTs in a transaction? Can we do COPY
> with each row being its own transaction?

OK, added to TODO:

o Allow COPY to report error lines and continue; optionally
allow error codes to be specified

Seems nested transactions are not required if we load each COPY line in
its own transaction, like we do with INSERT from pg_dump.
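The per-line approach described above can be sketched roughly as follows. This is only an illustration of the idea, not PostgreSQL's COPY internals: it uses Python's sqlite3 module in place of the backend, with one transaction per input line, and duplicate-key errors reported and skipped rather than aborting the whole load:

```python
import sqlite3

# Illustration only: simulate "each COPY line in its own transaction;
# report error lines and continue" using sqlite3 instead of PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (x INTEGER PRIMARY KEY)")

lines = ["1", "2", "2", "3"]   # input line 3 is a duplicate key
error_lines = []

for lineno, line in enumerate(lines, start=1):
    try:
        # The connection context manager gives one transaction per line:
        # commit on success, rollback on error.
        with conn:
            conn.execute("INSERT INTO test VALUES (?)", (int(line),))
    except sqlite3.IntegrityError:
        error_lines.append(lineno)  # report the bad line, keep loading

loaded = [row[0] for row in conn.execute("SELECT x FROM test ORDER BY x")]
print(error_lines)  # → [3]
print(loaded)       # → [1, 2, 3]
```

The trade-off the thread hints at is also visible here: per-line transactions avoid the need for nested transactions, at the cost of per-line commit overhead compared with loading the whole COPY in one transaction.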

--
Bruce Momjian | http://candle.pha.pa.us
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
