Re: Bulkloading using COPY - ignore duplicates?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Zeugswetter Andreas SB SD" <ZeugswetterA(at)spardat(dot)at>
Cc: "Lee Kindness" <lkindness(at)csl(dot)co(dot)uk>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2001-10-01 14:09:13
Message-ID: 23205.1001945353@sss.pgh.pa.us
Lists: pgsql-hackers
"Zeugswetter Andreas SB SD" <ZeugswetterA(at)spardat(dot)at> writes:
> I thought the problem was that you cannot simply skip the 
> insert, because by that time the tuple (pointer) might already 
> have been successfully inserted into another index/heap, and thus 
> this was only sanely possible with savepoints/undo.

Hmm, good point.  If we don't error out the transaction, then that tuple
would become good when we commit.  This is nastier than it appears.

			regards, tom lane
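
The hazard discussed above can be sketched with a toy model (illustrative only; the names and structures below are not PostgreSQL internals). A duplicate is only detected at the unique index, by which point the tuple already sits in the heap; simply swallowing the error would leave a half-inserted tuple that "becomes good" at commit, so the partial work must be undone, savepoint-style:

```python
# Toy model of the duplicate-skip hazard (hypothetical names, not
# PostgreSQL code).  A tuple is placed in the heap first, then checked
# against the unique index -- so the duplicate is noticed only after
# the heap insertion has already succeeded.

class DuplicateKey(Exception):
    pass

heap = []             # stand-in for the table's heap
unique_index = set()  # stand-in for a unique index on column "k"

def insert(tup):
    heap.append(tup)              # step 1: heap insertion succeeds
    if tup["k"] in unique_index:  # step 2: unique index rejects duplicate
        raise DuplicateKey(tup["k"])
    unique_index.add(tup["k"])

def copy_ignoring_duplicates(rows):
    for row in rows:
        mark = len(heap)          # savepoint-style undo marker
        try:
            insert(row)
        except DuplicateKey:
            del heap[mark:]       # undo the partial heap insert;
                                  # without this, the dead tuple would
                                  # still be visible after commit

copy_ignoring_duplicates([{"k": 1}, {"k": 1}, {"k": 2}])
print([t["k"] for t in heap])  # the duplicate row leaves no trace
```

Dropping the `del heap[mark:]` line reproduces the bug Tom describes: the duplicate of key 1 would remain in the heap even though it never made it into the index.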
