
Re: Bulkloading using COPY - ignore duplicates?

From: Thomas Swan <tswan(at)olemiss(dot)edu>
To: Zeugswetter Andreas SB SD <ZeugswetterA(at)spardat(dot)at>
Cc: Lee Kindness <lkindness(at)csl(dot)co(dot)uk>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2001-10-01 14:42:56
Message-ID:
Lists: pgsql-hackers
Zeugswetter Andreas SB SD wrote:

>>IMHO, you should copy into a temporary table and then do a select 
>>distinct from it into the table that you want.
>Which would be way too slow for normal operation :-(
>We are talking about a "fast as possible" data load from a flat file
>that may have duplicates (or even data errors, but that 
>is another issue).
Then the IGNORE_DUPLICATE option would definitely be the way to go, if 
speed is the question...
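For reference, the temporary-table workaround under discussion can be sketched roughly as follows (table and column names here are hypothetical, and the file path is only a placeholder):

```sql
-- Load the flat file into a staging table first, so COPY never
-- hits the target table's unique constraint directly.
CREATE TEMP TABLE staging (id integer, val text);

COPY staging FROM '/tmp/data.txt';

-- De-duplicate while moving rows into the real table.
-- SELECT DISTINCT collapses fully identical rows; if only the key
-- column must be unique, DISTINCT ON (id) would be needed instead.
INSERT INTO target (id, val)
SELECT DISTINCT id, val FROM staging;

DROP TABLE staging;
```

As Andreas notes, this costs an extra full pass over the data (plus the temp-table write), which is why a COPY-level ignore-duplicates option is attractive for raw load speed.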

