Re: COPY Transform support

From: Sam Mason <sam(at)samason(dot)me(dot)uk>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: COPY Transform support
Date: 2008-04-04 11:41:16
Message-ID: 20080404114116.GQ6870@frubble.xen.chris-lamb.co.uk
Lists: pgsql-hackers

On Thu, Apr 03, 2008 at 09:38:42PM -0400, Tom Lane wrote:
> Sam Mason <sam(at)samason(dot)me(dot)uk> writes:
> > On Thu, Apr 03, 2008 at 03:57:38PM -0400, Tom Lane wrote:
> >> I liked the idea of allowing COPY FROM to act as a table source in a
> >> larger SELECT or INSERT...SELECT. Not at all sure what would be
> >> involved to implement that, but it seems a lot more flexible than
> >> any other approach.
>
> > I'm not sure why new syntax is needed, what's wrong with having a simple
> > set of procedures like:
> > readtsv(filename TEXT) AS SETOF RECORD
>
> Yeah, I was thinking about that too. The main stumbling block is that
> you need to somehow expose all of COPY's options for parsing an input
> line (CSV vs default mode, quote and delimiter characters, etc).

Guess why I chose a nice simple example!
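For context, a function like the proposed readtsv() returning SETOF RECORD would need a column definition list at the call site. A hypothetical sketch (readtsv() does not exist; the file path and column names are made up):

```sql
-- Hypothetical usage of the proposed readtsv(); SETOF RECORD requires
-- the caller to supply the column definitions with AS (...).
SELECT t.id, t.name
FROM readtsv('/tmp/people.tsv') AS t(id integer, name text, email text)
WHERE t.email IS NOT NULL;
```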

> It's surely doable but it might be pretty ugly compared to bespoke
> syntax.

Yes, that's an easy way to get it looking pretty.

As an alternative, how about having a datatype that stores these
parameters, e.g.:

CREATE TYPE copyoptions AS (
    delimiter TEXT,    -- should be non-empty (composite types can't enforce CHECK constraints)
    nullstr   TEXT,
    hasheader BOOLEAN,
    quote     TEXT,
    escape    TEXT
);

And have the input_function understand the current PG syntax for COPY
options. You'd then be able to do:

copyfrom('dummy.csv',$$ DELIMITER ';' CSV HEADER $$)

And the procedure would be able to pull out what it wanted from the
options.
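Put together, a query over such a function might look like this (copyfrom() and its options string are the proposal above, not an existing function; table and column names are invented):

```sql
-- Hypothetical: copyfrom() parses its second argument using the
-- existing COPY option syntax, then returns the file's rows.
INSERT INTO people (id, name, email)
SELECT t.id, t.name, t.email
FROM copyfrom('dummy.csv', $$ DELIMITER ';' CSV HEADER $$)
     AS t(id integer, name text, email text);
```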

> Another thing is that nodeFunctionScan.c is not really designed for
> enormous function result sets --- it dumps the results into a tuplestore
> whether that's needed or not. This is a performance bug that we ought
> to address anyway, but we'd really have to fix it if we want to approach
> the COPY problem this way. Just sayin'.

So you'd end up with something resembling a coroutine? When would it
be good to actually dump everything into a tuplestore as it does at the
moment?
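The difference matters most for queries that don't consume the whole result set. With the current tuplestore behaviour, a hypothetical readtsv() would parse and store the entire file before the outer plan sees a single row, even for something like:

```sql
-- nodeFunctionScan materializes the full function result first, so the
-- whole file is read before the LIMIT applies; a value-per-call
-- (coroutine-style) scan could stop after ten rows.
SELECT *
FROM readtsv('huge.tsv') AS t(a integer, b text)
LIMIT 10;
```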

It'll be fun to see how much code breaks because it relies on the
current behaviour of an SRF running to completion without other activity
happening in between!

Sam
