Re: Bulkloading using COPY - ignore duplicates?

From: Lee Kindness <lkindness(at)csl(dot)co(dot)uk>
To: Peter Eisentraut <peter_e(at)gmx(dot)net>
Cc: Jim Buttafuoco <jim(at)buttafuoco(dot)net>, Lee Kindness <lkindness(at)csl(dot)co(dot)uk>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL Development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2001-12-17 12:43:45
Message-ID: 15389.59521.558937.59993@elsick.csl.co.uk
Lists: pgsql-hackers

Peter Eisentraut writes:
> Jim Buttafuoco writes:
> > I agree with Lee, I also like Oracle's options for a discard file, so
> > you can look at what was rejected, fix your problem and reload if
> > necessary just the rejects.
> How do you know which one is the duplicate and which one is the good one?
> More likely you will have to fix the entire thing. Anything else would
> undermine the general data model except in specific use cases.

In the general case most data is sequential, so it is natural to assume
that the first record is the definitive one. Most database systems go
with this assumption, apart from MySQL, which gives the user a choice
between IGNORE and UPDATE...
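For illustration, a minimal sketch of that first-record-wins behaviour.
This uses SQLite's INSERT OR IGNORE purely as a stand-in, since
PostgreSQL's COPY has no such ignore-duplicates option here; the table
and data are hypothetical:

```python
import sqlite3

# In-memory table with a primary key standing in for the bulk-load target.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

# Sequential input containing a duplicate key: the third row conflicts
# with the first on id = 1.
rows = [(1, "first"), (2, "second"), (1, "dup")]

# INSERT OR IGNORE keeps the first row seen for each key and silently
# skips later duplicates -- the "first record is definitive" assumption.
conn.executemany("INSERT OR IGNORE INTO t VALUES (?, ?)", rows)

print(conn.execute("SELECT val FROM t WHERE id = 1").fetchone()[0])
```

An UPDATE-style choice, as MySQL offers, would instead overwrite the
earlier row with the later duplicate rather than discarding it.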

Lee.
