
Re: Bulkloading using COPY - ignore duplicates?

From: "Zeugswetter Andreas SB SD" <ZeugswetterA(at)spardat(dot)at>
To: "Thomas Swan" <tswan(at)olemiss(dot)edu>, "Lee Kindness" <lkindness(at)csl(dot)co(dot)uk>
Cc: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2001-10-01 14:39:36
Message-ID: 46C15C39FEB2C44BA555E356FBCD6FA41EB3A0@m0114.s-mxs.net
Lists: pgsql-hackers
> IMHO, you should copy into a temporary table and then do a select
> distinct from it into the table that you want.

Which would be way too slow for normal operation :-(
We are talking about a "fast as possible" data load from a flat file
that may have duplicates (or even data errors, but that is another issue).
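
(For reference, the staging approach suggested above would look something
like the sketch below; the table, columns, and file name are invented for
illustration. DISTINCT ON is a PostgreSQL extension.)

    -- hypothetical target table "items(id, val)" with a unique key on id
    CREATE TEMP TABLE items_load (id integer, val text);
    COPY items_load FROM '/tmp/items.dat';
    -- collapse duplicates within the load file before inserting
    INSERT INTO items
        SELECT DISTINCT ON (id) id, val
        FROM items_load;
    DROP TABLE items_load;

The extra scan and sort over the staging table is exactly the overhead
we want to avoid.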

Andreas
