
Re: Bulkloading using COPY - ignore duplicates?

From: Tom Lane <tgl@sss.pgh.pa.us>
To: Lee Kindness <lkindness@csl.co.uk>
Cc: pgsql-hackers@postgresql.org
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2001-10-01 14:02:54
Message-ID: 23149.1001944974@sss.pgh.pa.us
Lists: pgsql-hackers
Lee Kindness <lkindness@csl.co.uk> writes:
> I see where you're coming from, but seriously what's the use/point of
> COPY aborting and doing a rollback if one duplicate key is found?

Error detection.  If I'm loading what I think is valid data, having the
system silently ignore certain types of errors is not acceptable ---
I'm especially not pleased at the notion of removing an error check
that's always been there because someone else thinks that would make it
more convenient for his application.

> I think it's quite reasonable to presume the input to COPY has had as
> little processing done on it as possible.

The primary and traditional use of COPY has always been to reload dumped
data.  That's why it doesn't do any fancy processing like DEFAULT
insertion, and that's why it should be quite strict about error
conditions.  In a reload scenario, any sort of problem deserves
careful investigation.

			regards, tom lane
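The workaround that preserves the error checking Tom Lane defends here is to COPY into an unconstrained staging table and deduplicate on insert into the real table. A minimal sketch for modern PostgreSQL (the `ON CONFLICT` clause needs 9.5 or later; the `navdata` table, its `id` key column, and the file path are hypothetical):

```sql
-- Stage the raw file in a table with no unique constraints,
-- so COPY itself never aborts on a duplicate key.
CREATE TEMP TABLE navdata_staging (LIKE navdata INCLUDING DEFAULTS);

COPY navdata_staging FROM '/path/to/data.txt';

-- Move rows into the real table:
--   DISTINCT ON (id) drops duplicates within the file itself,
--   ON CONFLICT skips rows whose key already exists in navdata.
INSERT INTO navdata
SELECT DISTINCT ON (id) *
FROM navdata_staging
ORDER BY id
ON CONFLICT (id) DO NOTHING;
```

On releases older than 9.5 the same pattern works by replacing `ON CONFLICT` with a `WHERE NOT EXISTS` anti-join against the target table; either way, COPY's strict error detection on the load itself is left intact.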
