
Re: Bulkloading using COPY - ignore duplicates?

From: "Zeugswetter Andreas SB SD" <ZeugswetterA(at)spardat(dot)at>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Lee Kindness" <lkindness(at)csl(dot)co(dot)uk>
Cc: <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2001-10-01 14:06:39
Message-ID:
Lists: pgsql-hackers
> > Would this seem a reasonable thing to do? Does anyone rely on COPY
> > FROM causing an ERROR on duplicate input?
> Yes.  This change will not be acceptable unless it's made an optional
> (and not default, IMHO, though perhaps that's negotiable) feature of
> COPY.
>
> The implementation might be rather messy too.  I don't much care for
> the notion of a routine as low-level as bt_check_unique knowing that
> the context is or is not COPY.  We might have to do some restructuring.
> > Would:
> > need to be added to the COPY command (I hope not)?
> It occurs to me that skip-the-insert might be a useful option for
> INSERTs that detect a unique-key conflict, not only for COPY.  (Cf.
> the regular discussions we see on whether to do INSERT first or
> UPDATE first when the key might already exist.)  Maybe a SET variable
> that applies to all forms of insertion would be appropriate.
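[For illustration only, not part of the original thread: the skip-the-insert semantics being proposed here are what SQLite exposes as INSERT OR IGNORE, and what PostgreSQL itself eventually added, long after this discussion, as INSERT ... ON CONFLICT DO NOTHING. A minimal sketch using Python's sqlite3 module as a stand-in:]

```python
import sqlite3

# Illustration of "skip the insert on a unique-key conflict" using
# SQLite's INSERT OR IGNORE; SQLite stands in for PostgreSQL here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

# The third row duplicates key 1 and is silently skipped rather than
# aborting the whole load with an ERROR.
rows = [(1, "a"), (2, "b"), (1, "duplicate"), (3, "c")]
conn.executemany("INSERT OR IGNORE INTO t (id, val) VALUES (?, ?)", rows)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 3
```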

Imho yes, but:
I thought the problem was that you cannot simply skip the insert,
because by that time the tuple (pointer) might already have been
successfully inserted into another index/heap, and thus this was
only sanely possible with savepoints/undo.
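[A sketch, again with SQLite standing in for PostgreSQL, of the savepoint/undo approach mentioned above: wrap each row's insert in its own savepoint, and on a unique-key conflict roll back just that row, leaving earlier rows intact.]

```python
import sqlite3

# Per-row savepoints: a conflicting row is undone individually
# instead of aborting the whole load.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

for row in [(1, "a"), (2, "b"), (1, "dup"), (3, "c")]:
    conn.execute("SAVEPOINT row_sp")
    try:
        conn.execute("INSERT INTO t (id, val) VALUES (?, ?)", row)
        conn.execute("RELEASE SAVEPOINT row_sp")
    except sqlite3.IntegrityError:
        # Undo only this row's insert; earlier rows are unaffected.
        conn.execute("ROLLBACK TO SAVEPOINT row_sp")
        conn.execute("RELEASE SAVEPOINT row_sp")
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 3
```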

An idea would probably be to at once mark the new tuple dead, and



