
Re: Bulkloading using COPY - ignore duplicates?

From: Daniel Kalchev <daniel(at)digsys(dot)bg>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
Cc: "Mikheev, Vadim" <vmikheev(at)sectorbase(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Lee Kindness <lkindness(at)csl(dot)co(dot)uk>, Peter Eisentraut <peter_e(at)gmx(dot)net>, Jim Buttafuoco <jim(at)buttafuoco(dot)net>, PostgreSQL Development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2002-01-04 07:36:01
Message-ID: 200201040736.JAA29349@dcave.digsys.bg
Lists: pgsql-hackers
>>>Bruce Momjian said:
 > Mikheev, Vadim wrote:
 > > > > Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us> writes:
 > > > > > Seems nested transactions are not required if we load
 > > > > > each COPY line in its own transaction, like we do with
 > > > > > INSERT from pg_dump.
 > > > > 
 > > > > I don't think that's an acceptable answer.  Consider
 > > > 
 > > > Oh, very good point.  "Requires nested transactions" added to TODO.
 > > 
 > > Also add performance issue with per-line-commit...
 > > 
 > > Also-II - there is more common name for required feature - savepoints.
 > 
 > OK, updated TODO to prefer savepoints term.
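
To illustrate, a rough sketch of how savepoints could let a bulk load skip a
failing row without aborting the whole transaction (hypothetical syntax, since
savepoints are not implemented yet):

    BEGIN;
    SAVEPOINT before_row;
    INSERT INTO table1 VALUES (...);     -- may fail on a duplicate key
    ROLLBACK TO SAVEPOINT before_row;    -- undo only the failed row
    -- ... continue with the next row ...
    COMMIT;

That would give per-row error recovery without paying the cost of a full
commit per line.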

Now, how about the same functionality for

INSERT into table1 SELECT * from table2 ... WITH ERRORS;

This should allow the insert to complete even if table1 has unique indexes and
we try to insert duplicate rows. It might save LOTS of time in bulkloading
scripts that otherwise have to fall back to single INSERTs.
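
In the meantime, a workaround is to filter the duplicates in the SELECT
itself with an anti-join (a sketch, assuming the unique index is on a
column named id):

    INSERT INTO table1
    SELECT * FROM table2 t2
    WHERE NOT EXISTS
          (SELECT 1 FROM table1 t1 WHERE t1.id = t2.id);

though that still aborts if table2 itself contains duplicate ids (the
NOT EXISTS only checks against rows already in table1), and it costs an
extra probe of table1 per row.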

Guess all this will be available in 7.3?

Daniel

