Re: Bulkloading using COPY - ignore duplicates?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Zeugswetter Andreas SB SD" <ZeugswetterA(at)spardat(dot)at>
Cc: "Lee Kindness" <lkindness(at)csl(dot)co(dot)uk>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2001-10-01 14:09:13
Message-ID: 23205.1001945353@sss.pgh.pa.us
Lists: pgsql-hackers

"Zeugswetter Andreas SB SD" <ZeugswetterA(at)spardat(dot)at> writes:
> I thought that the problem was that you cannot simply skip the
> insert, because at that time the tuple (pointer) might have already
> been successfully inserted into another index/heap, and thus this was
> only sanely possible with savepoints/undo.

Hmm, good point. If we don't error out the transaction then that tuple
would become good when we commit. This is nastier than it appears.
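The point above can be illustrated with a toy model (a sketch only, not PostgreSQL internals): the heap tuple is written first, then each index is updated in turn. If a unique-key violation found in a later index is simply "skipped" with no error and no undo, the heap tuple and any earlier index entries survive, and the tuple would become visible at commit.

```python
class Table:
    """Toy model: a heap plus one unique index per indexed column."""

    def __init__(self, unique_cols):
        self.heap = []                               # list of row dicts
        self.indexes = {c: {} for c in unique_cols}  # col -> {value: heap pos}

    def insert_skip_duplicates(self, row):
        """Insert, silently skipping rows that violate a unique index.

        BUG (the one discussed above): by the time the violation is
        detected, the heap tuple and any earlier index entries have
        already been written, and nothing removes them.
        """
        pos = len(self.heap)
        self.heap.append(row)            # heap insert happens first
        for col, idx in self.indexes.items():
            if row[col] in idx:
                return False             # "skip" the duplicate -- but no undo!
            idx[row[col]] = pos
        return True

t = Table(unique_cols=["a", "b"])
t.insert_skip_duplicates({"a": 1, "b": 1})
t.insert_skip_duplicates({"a": 2, "b": 1})   # duplicate on "b", "skipped"

# Only one insert "succeeded", yet two heap tuples exist, and the "a"
# index now points at the skipped one -- exactly the inconsistency that
# makes this unsafe without savepoints/undo.
print(len(t.heap))        # 2
print(t.indexes["a"])     # {1: 0, 2: 1}
print(t.indexes["b"])     # {1: 0}
```

The names here (`Table`, `insert_skip_duplicates`) are invented for illustration; real PostgreSQL resolves this class of problem transactionally rather than by skipping mid-insert.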

regards, tom lane
