2010/3/8 Takahiro Itagaki <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>:
> Takahiro Itagaki <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp> wrote:
>> > * I'm not very happy with the "Getting tuples from the foreign server"
>> > section. The present tuplestore isn't very efficient, and putting all
>> > tuples into a tuplestore adds devastating overhead. In principle,
>> > storing all the tuples doesn't match the SQL executor model, so
>> > something like a cursor is needed here.
>> Sure, but your optimization requires some extensions to the libpq
>> protocol. We could send HeapTuples in binary form if the remote and the
>> local server use the same format, but the present libpq can return tuples
>> only as text or in a libpq-specific binary form (which is not a HeapTuple).
> In addition, I believe the tuplestore is required *for performance*,
> because per-tuple cursor fetching is very slow when we retrieve tuples from
> remote servers. We should fetch tuples in reasonable-sized batches.
> If we optimize that part later, we could remove the PGresult-to-tuplestore
> conversion here. But we also need some code to avoid leaking the PGresult
> on error, because a PGresult is allocated with malloc, not palloc.
> (That is the same bug recently fixed in contrib/dblink.)
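To make the leak concrete: since a PGresult lives in malloc'ed memory, a backend error thrown while converting rows would normally longjmp past the cleanup. A minimal sketch of the dblink-style fix, guarding the conversion with PG_TRY/PG_CATCH (the function and cursor names here are hypothetical, not from the patch):

```c
/* Sketch only: assumes backend context (postgres.h) plus libpq-fe.h.
 * fetch_batch_into_tuplestore() and "med_cur" are illustrative names.
 */
static void
fetch_batch_into_tuplestore(PGconn *conn, Tuplestorestate *tstore)
{
    PGresult *res = PQexec(conn, "FETCH 1000 FROM med_cur");

    PG_TRY();
    {
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            ereport(ERROR,
                    (errmsg("remote fetch failed: %s",
                            PQerrorMessage(conn))));

        /* ... convert each row and tuplestore_putvalues() it ... */
    }
    PG_CATCH();
    {
        /* PGresult is malloc'ed, not palloc'ed: memory-context reset
         * on error will NOT free it, so we must PQclear() explicitly. */
        PQclear(res);
        PG_RE_THROW();
    }
    PG_END_TRY();

    PQclear(res);
}
```

With palloc'ed memory none of this would be needed, since aborting the transaction resets the memory context; the explicit PG_CATCH exists purely because libpq manages its own allocations.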
So, as a first step we implement it with a tuplestore on top of the present
libpq; for further improvement we would need to refactor or extend
libpq to buffer a bounded number of tuples. Or should we invent another
protocol oriented toward bulk data fetching, like the existing COPY?
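The first-step approach above (batched cursor fetches over plain libpq, rather than one round trip per tuple) could look roughly like this on the client side. The connection string, cursor name, and batch size of 1000 are illustrative assumptions, not from the patch:

```c
/* Sketch: pull rows from a remote server in batches via a cursor,
 * using only the existing libpq API. Error handling is minimal. */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* hypothetical connection parameters */
    PGconn *conn = PQconnectdb("host=remote dbname=test");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn,
        "DECLARE med_cur CURSOR FOR SELECT * FROM remote_tab"));

    for (;;)
    {
        /* one round trip per batch, not per tuple */
        PGresult *res = PQexec(conn, "FETCH 1000 FROM med_cur");
        int       ntuples;

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }

        ntuples = PQntuples(res);
        /* ... here each batch would be converted into the tuplestore ... */

        PQclear(res);           /* PGresult is malloc'ed; always free it */
        if (ntuples == 0)
            break;              /* cursor exhausted */
    }

    PQclear(PQexec(conn, "CLOSE med_cur"));
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}
```

The batch size trades memory for round trips; the point of the proposed libpq extension would be to get this kind of buffering below the text/binary conversion layer, so rows would not have to pass through a PGresult at all.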