2009/11/25 Daniel Farina <drfarina(at)gmail(dot)com>:
> On Tue, Nov 24, 2009 at 8:45 PM, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com> wrote:
>> It depends on design. I don't think "internal" is necessary. It is
>> just the wrong design.
> Depends on how lean you want to be when doing large COPY...right now
> the cost is restricted to having to call a function pointer and a few
> branches. If you want to take SQL values, then the semantics of
> function calling over a large number of rows is probably notably more
> expensive, although I make no argument against the fact that the
> non-INTERNAL version would give a lot more people more utility.
I believe that using "internal" minimizes the necessary changes to the
COPY implementation. Using funcapi needs more work inside COPY - you
have to move some functionality out of COPY into the stream functions.
Probably the slowest operation is parsing - calling the input
functions - and that is done once per value in either design. The second
slowest operation is reading from the network - also the same in both.
So I don't see many reasons why a non-internal implementation would have
to be significantly slower than your current implementation. I am sure
it needs more work, though.
What is significant: if COPY is properly integrated with a streaming
function, then I don't need to use a tuplestore or SRF functions - COPY
reads the data directly.