From: "Andrey V(dot) Lepikhov" <a(dot)lepikhov(at)postgrespro(dot)ru>
To: Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>
Cc: etsuro(dot)fujita(at)gmail(dot)com, movead(dot)li(at)highgo(dot)ca, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Asynchronous Append on postgres_fdw nodes.
Date: 2020-06-17 10:01:08
Message-ID: 581520e2-1475-c614-b879-98b350220508@postgrespro.ru
Lists: pgsql-hackers
On 6/16/20 1:30 PM, Kyotaro Horiguchi wrote:
> They return 25056 rows, which is far more than 9741 rows. So remote
> join won.
>
> Of course the number of returning rows is not the only factor of the
> cost change but is the most significant factor in this case.
>
Thanks for the attention.
I see one slight flaw in this approach to asynchronous append:
AsyncAppend works only for ForeignScan subplans. If we have a
PartialAggregate, a Join, or another more complicated subplan, we can't
use the asynchronous machinery.
This may lead to a situation where a small difference in a filter
constant causes a big difference in execution time.
I imagine an Append node that can switch the current subplan from time
to time, with all ForeignScan nodes of the overall plan added to one
queue. The scan buffer can be larger than a cursor fetch size, and each
IterateForeignScan() call can initiate an asynchronous scan of another
ForeignScan node if its buffer is not full.
But these are only thoughts, not a proposal. I have no questions about
your patch right now.
--
Andrey Lepikhov
Postgres Professional
https://postgrespro.com