|From:||Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>|
|To:||Pg Hackers <pgsql-hackers(at)postgresql(dot)org>|
|Subject:||Re: WIP: [[Parallel] Shared] Hash|
Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> The basic approach is visible and simple cases are working though, so
> I am submitting this WIP work for a round of review in the current
> commitfest and hoping to get some feedback and ideas. I will post the
> patch in a follow-up email shortly...
Please find a WIP patch attached. Everything related to batch reading
is not currently in a working state, which breaks multi-batch joins,
but many single-batch cases work correctly. In an earlier version I
had multi-batch joins working, but that was before I started tackling
problems 2 and 3 listed in my earlier message. There is some error
handling and resource cleanup missing, and doubtless some cases not
handled correctly. But I thought it would be good to share this
development snapshot for discussion, so I'm posting this as is, and
will post an updated version when I've straightened out the batching
code some more.
To apply parallel-hash-v1, first apply the barrier-v3 and dsa-v4 patches, in that order.
Applying dsa-v4 on top of barrier-v3 will produce a rejected hunk in
src/backend/storage/ipc/Makefile, where both patches add their object
file to OBJS. Simply add dsa.o to OBJS manually.
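For illustration, the manual fix could look like the following sketch. The OBJS line shown here is hypothetical; the real src/backend/storage/ipc/Makefile lists other object files, but the edit is the same: add dsa.o to the OBJS list.

```shell
# Stand-in for src/backend/storage/ipc/Makefile (illustrative OBJS line only).
cat > Makefile <<'EOF'
OBJS = barrier.o ipc.o shmem.o
EOF

# The rejected hunk would have added dsa.o to OBJS; apply it by hand:
sed 's/^OBJS = /OBJS = dsa.o /' Makefile > Makefile.tmp && mv Makefile.tmp Makefile

cat Makefile
```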
Then you can apply parallel-hash-v1.patch, which is attached to this message.