Re: Bulkloading using COPY - ignore duplicates?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Lee Kindness <lkindness(at)csl(dot)co(dot)uk>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2001-10-01 14:02:54
Message-ID: 23149.1001944974@sss.pgh.pa.us
Lists: pgsql-hackers

Lee Kindness <lkindness(at)csl(dot)co(dot)uk> writes:
> I see where you're coming from, but seriously what's the use/point of
> COPY aborting and doing a rollback if one duplicate key is found?

Error detection. If I'm loading what I think is valid data, having the
system silently ignore certain types of errors is not acceptable ---
I'm especially not pleased at the notion of removing an error check
that's always been there because someone else thinks that would make it
more convenient for his application.

> I think it's quite reasonable to presume the input to COPY has had as
> little processing done on it as possible.

The primary and traditional use of COPY has always been to reload dumped
data. That's why it doesn't do any fancy processing like DEFAULT
insertion, and that's why it should be quite strict about error
conditions. In a reload scenario, any sort of problem deserves
careful investigation.

regards, tom lane
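
[Editorial note: a common workaround that keeps COPY's strict error
checking is to load the file into an unconstrained staging table and
de-duplicate in a separate step. The sketch below is illustrative only;
the table and column names (nav_fixes, shot_id) and the file path are
hypothetical, not from this thread, and the syntax assumes a reasonably
recent PostgreSQL rather than the 2001-era server being discussed.]

    BEGIN;

    -- Staging table with the same layout but no unique constraint, so
    -- COPY still rejects malformed rows but accepts duplicate keys.
    CREATE TEMP TABLE nav_fixes_stage (LIKE nav_fixes);
    COPY nav_fixes_stage FROM '/tmp/nav_fixes.dat';

    -- Insert only rows whose key is not already in the target table;
    -- duplicates within the file itself are collapsed by DISTINCT ON
    -- (which row survives is arbitrary without an ORDER BY).
    INSERT INTO nav_fixes
    SELECT DISTINCT ON (shot_id) *
      FROM nav_fixes_stage s
     WHERE NOT EXISTS (SELECT 1 FROM nav_fixes n
                        WHERE n.shot_id = s.shot_id);

    COMMIT;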
