Re: Bulkloading using COPY - ignore duplicates?

From: Patrick Welche <prlw1(at)newn(dot)cam(dot)ac(dot)uk>
To: Lee Kindness <lkindness(at)csl(dot)co(dot)uk>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2001-12-13 12:31:14
Message-ID: 20011213123114.B12426@quartz.newn.cam.ac.uk
Lists: pgsql-hackers

On Mon, Oct 01, 2001 at 03:17:43PM +0100, Lee Kindness wrote:
> Tom Lane writes:
> > I'm especially not pleased at the notion of removing an error check
> > that's always been there because someone else thinks that would make it
> > more convenient for his application.
>
> Please, don't get me wrong - I don't want to come across arrogant. I'm
> simply trying to improve the 'COPY FROM' command in a situation where
> speed is a critical issue and the data is dirty... And that must be a
> relatively common scenario in industry.

Isn't that the case where you do your bulk copy into a holding table,
then clean it up, and then insert into your live system?
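The holding-table approach could be sketched roughly as below. This is a hypothetical example, not from the thread: the table names (`items`, `items_staging`), the key column `id`, and the file path are all invented, and the syntax is modern PostgreSQL rather than what was available in 2001.

```sql
-- "items" is the live table with a unique key on id;
-- "items_staging" is a throwaway holding table with no constraints.
CREATE TEMP TABLE items_staging (LIKE items);

-- Bulk load the dirty data; with no unique index on the
-- staging table, duplicates cannot abort the COPY.
COPY items_staging FROM '/tmp/items.dat';

-- Clean up: keep one row per key, skip keys already live,
-- then move the result into the real table.
INSERT INTO items
SELECT DISTINCT ON (id) *
FROM items_staging
WHERE id NOT IN (SELECT id FROM items);

DROP TABLE items_staging;
```

The point of the pattern is that the expensive, failure-prone constraint checking happens in ordinary SQL after the fast bulk load, where you control how conflicts are resolved.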

Patrick
