Re: Bulkloading using COPY - ignore duplicates?

From: Thomas Swan <tswan(at)olemiss(dot)edu>
To: Zeugswetter Andreas SB SD <ZeugswetterA(at)spardat(dot)at>
Cc: Lee Kindness <lkindness(at)csl(dot)co(dot)uk>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2001-10-01 14:42:56
Message-ID: 3BB880F0.8090006@olemiss.edu
Lists: pgsql-hackers

Zeugswetter Andreas SB SD wrote:

>>IMHO, you should copy into a temporary table and then do a select
>>distinct from it into the table that you want.
>>
>
>Which would be way too slow for normal operation :-(
>We are talking about a "fast as possible" data load from a flat file
>that may have duplicates (or even data errors, but that
>is another issue).
>
>Andreas
>
Then the IGNORE_DUPLICATE option would definitely be the way to go, if
speed is the question...
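For reference, the staging-table approach quoted above can be sketched roughly as follows. This is only an illustration of the idea, not code from the thread; the table and column names (target, staging, id, val) and the file path are hypothetical:

```sql
-- Hypothetical target table with a unique key on id.
-- CREATE TABLE target (id integer PRIMARY KEY, val text);

-- Load the flat file into a temporary staging table first;
-- duplicates are tolerated here since there is no constraint.
CREATE TEMP TABLE staging (id integer, val text);
COPY staging FROM '/path/to/data.txt';

-- Then move only distinct rows into the real table.
INSERT INTO target
  SELECT DISTINCT ON (id) id, val FROM staging;
```

The extra write and sort over the staging table is exactly the overhead Andreas objects to, which is why an IGNORE_DUPLICATE option on COPY itself would be faster for large loads.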
