Re: Bulkloading using COPY - ignore duplicates?

From: Lee Kindness <lkindness(at)csl(dot)co(dot)uk>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Lee Kindness <lkindness(at)csl(dot)co(dot)uk>, Peter Eisentraut <peter_e(at)gmx(dot)net>, Jim Buttafuoco <jim(at)buttafuoco(dot)net>, PostgreSQL Development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2001-12-18 16:04:13
Message-ID: 15391.26877.931767.773950@elsick.csl.co.uk
Lists: pgsql-hackers

Tom Lane writes:
> Lee Kindness <lkindness(at)csl(dot)co(dot)uk> writes:
> > In an ideal world 'COPY FROM' would only be used with data output by
> > 'COPY TO' and it would be nice and sanitised. However in some fields
> > this often is not a possibility due to performance constraints!
> Of course, the more bells and whistles we add to COPY, the slower it
> will get, which rather defeats the purpose no?

Indeed, but as I've mentioned earlier in this thread, the COPY FROM
code path already checks each incoming tuple against the unique index
(if one exists); it just bombs out with an error rather than handling
the duplicate...

It wouldn't add any execution time if there were no duplicates in the
input!
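
To illustrate (a rough sketch only -- the flag name and return
convention are hypothetical, and the real code in nbtinsert.c is
more involved than this):

    /* _bt_check_unique() currently errors out on a duplicate: */
    if (duplicate_found)
        elog(ERROR, "Cannot insert a duplicate key into unique index %s",
             RelationGetRelationName(rel));

    /*
     * With a hypothetical COPY-level option, the same check could
     * instead tell the caller to silently drop the tuple:
     */
    if (duplicate_found)
    {
        if (ignore_duplicates)      /* set only for a COPY "ignore" mode */
            return false;           /* caller discards the tuple, no error */
        elog(ERROR, "Cannot insert a duplicate key into unique index %s",
             RelationGetRelationName(rel));
    }

The extra branch is only reached once a duplicate has already been
detected, so duplicate-free input pays nothing.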

regards, Lee.
