Re: Bulkloading using COPY - ignore duplicates?

From: Lee Kindness <lkindness(at)csl(dot)co(dot)uk>
To: Peter Eisentraut <peter_e(at)gmx(dot)net>
Cc: Jim Buttafuoco <jim(at)buttafuoco(dot)net>, Lee Kindness <lkindness(at)csl(dot)co(dot)uk>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL Development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2001-12-17 12:48:30
Message-ID: 15389.59806.534505.201283@elsick.csl.co.uk
Lists: pgsql-hackers

Peter Eisentraut writes:
> Jim Buttafuoco writes:
> > I agree with Lee, I also like Oracle's options for a discard file, so
> > you can look at what was rejected, fix your problem and reload if
> > necessary just the rejects.
> How do you know which one is the duplicate and which one is the good one?
> More likely you will have to fix the entire thing. Anything else would
> undermine the general data model except in specific use cases.

Consider SELECT DISTINCT - which row is the 'duplicate' and which is
the good one? The database already makes that arbitrary choice there,
so COPY ignoring duplicates would be no different.
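The "ignore duplicates on bulkload" behaviour being discussed can be
sketched in a few lines. This is only an illustration, using SQLite's
INSERT OR IGNORE as a stand-in for the proposed COPY option; the table
and data are made up:

```python
import sqlite3

# Hypothetical example: a target table with a primary key, and a batch
# of input rows containing a duplicate key, as in a bulkload.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")

rows = [(1, "first"), (2, "second"), (1, "dup of 1"), (3, "third")]

# INSERT OR IGNORE silently drops rows whose key already exists. Note
# it keeps whichever copy arrived first - an arbitrary choice, which is
# exactly the "which one is the duplicate?" ambiguity raised above.
conn.executemany("INSERT OR IGNORE INTO target VALUES (?, ?)", rows)

loaded = conn.execute("SELECT id, val FROM target ORDER BY id").fetchall()
print(loaded)  # -> [(1, 'first'), (2, 'second'), (3, 'third')]
```

The rejected row (1, 'dup of 1') is the natural candidate for an
Oracle-style discard file: everything dropped by the load, written out
for later inspection.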

Lee.
