| From: | Greg Stark <stark(at)mit(dot)edu> |
|---|---|
| To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
| Cc: | Simon Riggs <simon(at)2ndquadrant(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: parallel joins, and better parallel explain |
| Date: | 2015-11-30 17:01:43 |
| Message-ID: | CAM-w4HNOuoat3nxi3DPCxtB72d+OYi3-M9jkD=9bkBTzXOVqow@mail.gmail.com |
| Lists: | pgsql-hackers |
On Mon, Nov 30, 2015 at 4:52 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> Not only does this build only one copy of the hash table instead of N
> copies, but we can parallelize the hash table construction itself by
> having all workers insert in parallel, which is pretty cool.
Hm. The case where you don't want parallel building of the hash table
might be substantially simpler. You could build a hash table and then
copy it into shared memory as a single contiguous read-only data
structure optimized for lookups. I have an inkling that there are even
ways of marking the memory as being read-only and not needing cache
synchronization.
--
greg