
Re: Bulkloading using COPY - ignore duplicates?

From: "Vadim Mikheev" <vmikheev(at)sectorbase(dot)com>
To: "Bruce Momjian" <pgman(at)candle(dot)pha(dot)pa(dot)us>, "Daniel Kalchev" <daniel(at)digsys(dot)bg>
Cc: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Lee Kindness" <lkindness(at)csl(dot)co(dot)uk>, "Peter Eisentraut" <peter_e(at)gmx(dot)net>, "Jim Buttafuoco" <jim(at)buttafuoco(dot)net>, "PostgreSQL Development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2002-01-04 07:47:36
Message-ID: 000001c194f4$37c84f50$ed2db841@home
Lists: pgsql-hackers
> Now, how about the same functionality for
>
> INSERT into table1 SELECT * from table2 ... WITH ERRORS;
>
> Should allow the insert to complete, even if table1 has unique indexes
> and we try to insert duplicate rows. Might save LOTS of time in
> bulkloading scripts not having to do single INSERTs.

1. I prefer Oracle's way (and others', I believe): put the statement(s) in a
PL block and define what action should be taken for which exception (error),
e.g. IGNORE for a NON_UNIQ_KEY error (see the first sketch below).

2. For an INSERT ... SELECT statement one can put DISTINCT in the SELECT's
target list (see the second sketch below).
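
To illustrate point 1, a minimal Oracle PL/SQL sketch (table and column
names here are made up for the example); a sub-block inside the loop
catches Oracle's predefined DUP_VAL_ON_INDEX exception, so duplicate rows
are skipped instead of aborting the load:

    BEGIN
        FOR r IN (SELECT * FROM table2) LOOP
            BEGIN
                -- hypothetical columns id and val on both tables
                INSERT INTO table1 (id, val) VALUES (r.id, r.val);
            EXCEPTION
                WHEN DUP_VAL_ON_INDEX THEN
                    NULL;  -- the IGNORE action: skip the duplicate row
            END;
        END LOOP;
    END;

Catching the exception inside the loop body is what gives row-level
granularity; a handler around a single INSERT ... SELECT would abort the
whole statement on the first duplicate.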
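
And for point 2, a sketch of the DISTINCT variant. DISTINCT only removes
duplicates within the source rows; the NOT EXISTS clause (assuming a
hypothetical unique key column id) additionally skips rows already present
in the target:

    INSERT INTO table1
    SELECT DISTINCT t2.*
    FROM table2 t2
    WHERE NOT EXISTS (SELECT 1 FROM table1 t1
                      WHERE t1.id = t2.id);  -- hypothetical unique key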

> Guess all this will be available in 7.3?

We'll see.

Vadim


