Re: Bulkloading using COPY - ignore duplicates?

From: Patrick Welche <prlw1(at)newn(dot)cam(dot)ac(dot)uk>
To: Lee Kindness <lkindness(at)csl(dot)co(dot)uk>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2001-12-13 15:29:57
Message-ID: 20011213152957.C12426@quartz.newn.cam.ac.uk
Lists: pgsql-hackers

On Thu, Dec 13, 2001 at 01:25:11PM +0000, Lee Kindness wrote:
> That's what I'm currently doing as a workaround - a SELECT DISTINCT
> from a temporary table into the real table with the unique index on
> it. However this takes absolutely ages - say 5 seconds for the copy
> (which is the ballpark figure I'm aiming toward and can achieve with
> Ingres) plus another 30-ish seconds for the SELECT DISTINCT.
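[The temp-table workaround Lee describes might look like the following sketch; the table names, column, and file path are hypothetical, not from the thread:]

```sql
-- Hypothetical names: temp_load is an unconstrained staging table,
-- real_table carries the unique index. The SELECT DISTINCT pass is
-- the ~30 second step being complained about.
CREATE TEMP TABLE temp_load AS SELECT * FROM real_table LIMIT 0;

COPY temp_load FROM '/path/to/data.txt';   -- fast: no index to check

INSERT INTO real_table
SELECT DISTINCT * FROM temp_load;          -- slow: de-duplicates on the way in
```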

Then your column really isn't unique, so how about dropping the unique index,
importing the data, fixing the duplicates, and recreating the unique index -
just as another possible workaround ;)
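[Patrick's alternative could be sketched like this; the index name, table, key column, and file path are assumptions for illustration. Step 3 uses the PostgreSQL `ctid` system column, a common idiom for keeping one row per key:]

```sql
-- 1. Drop the unique index so COPY doesn't check each row.
DROP INDEX real_table_id_key;

-- 2. Bulk load with no constraint overhead.
COPY real_table FROM '/path/to/data.txt';

-- 3. Fix the duplicates: for each id, keep the row with the lowest ctid.
DELETE FROM real_table a
WHERE EXISTS (SELECT 1 FROM real_table b
              WHERE b.id = a.id
                AND b.ctid < a.ctid);

-- 4. Recreate the unique index over the now-clean data.
CREATE UNIQUE INDEX real_table_id_key ON real_table (id);
```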

Patrick
