Re: Functions with COPY

From: Rod Taylor <pg(at)rbt(dot)ca>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Functions with COPY
Date: 2003-11-27 14:38:49
Message-ID: 1069943928.17262.92.camel@jester
Lists: pgsql-hackers

On Thu, 2003-11-27 at 09:28, Stephen Frost wrote:
> * Bruno Wolff III (bruno(at)wolff(dot)to) wrote:
> > On Thu, Nov 27, 2003 at 09:15:20 -0500,
> > Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> > > I don't believe it's possible, currently, to correctly import this
> > > data with copy. I'm not sure the date fields would even be accepted
> > > as date fields. It'd be nice if this could be made to work. From a
> > > user standpoint consider:
> >
> > You can write a filter program that reads the data and passes it off
> > to copy. Perl works pretty well for this.
>
> I already did, but it's basically a poor duplication of what the
> Postgres functions listed already do. Not what I'd consider the best
> scenario. Additionally, overall I'd expect it to be less work to have
> the conversion from text->data type done once and correctly, instead of
> running the data through a filter program to 'clean it up' for Postgres
> and then also through functions in Postgres (casts at least) to convert
> it.

How about COPYing into a TEMP TABLE in batches of 10k lines, then doing an
insert into real_table .... select .... from temp_table;
which performs the conversion?
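A minimal sketch of that approach might look like the following. The table names (`load_buffer`, `real_events`), the column list, and the `'YYYYMMDD'` date format are all made up for illustration; they are not taken from the thread.

```sql
BEGIN;

-- Hypothetical staging table matching the raw text layout of the file.
-- ON COMMIT DROP cleans it up automatically when the transaction ends.
CREATE TEMP TABLE load_buffer (
    id       text,
    happened text,   -- date field, still in its raw text form
    amount   text
) ON COMMIT DROP;

-- Bulk-load one batch of raw lines (path is illustrative).
COPY load_buffer FROM '/tmp/batch_0001.txt';

-- Convert text -> real types once, server-side, using casts and
-- built-in functions instead of a client-side filter program.
INSERT INTO real_events (id, happened, amount)
SELECT id::integer,
       to_date(happened, 'YYYYMMDD'),   -- assumed input date format
       amount::numeric
FROM load_buffer;

COMMIT;
```

The point of the two-step load is that COPY stays fast and dumb, while the INSERT ... SELECT gets the full power of PostgreSQL's conversion functions.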

You could, of course, parallelize the load so that 2 or 3 processes
execute the data import concurrently.
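One way to sketch that parallel load, assuming the temp-table conversion lives in a reusable SQL script; the file names, database name, and `load_chunk.sql` script are hypothetical, and this obviously needs a running PostgreSQL server:

```shell
#!/bin/sh
# Split the dump into fixed-size chunks (names are illustrative).
split -l 100000 bigdump.txt chunk_

# Run one psql session per chunk in the background; each session
# does its own COPY-into-temp-table plus INSERT ... SELECT conversion.
for f in chunk_*; do
    psql -d mydb -v infile="$f" -f load_chunk.sql &
done
wait   # block until every loader process has finished
```

Since each session has its own temp table, the loaders don't contend with each other except on the final target table.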
