Re: parallel joins, and better parallel explain

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Greg Stark <stark(at)mit(dot)edu>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: parallel joins, and better parallel explain
Date: 2015-11-30 17:26:21
Message-ID: CA+TgmoZJ0=opBxmSyxLyE=Cnk5rxMOQhCefuOLaRWCuBye0FvA@mail.gmail.com
Lists: pgsql-hackers

On Mon, Nov 30, 2015 at 12:01 PM, Greg Stark <stark(at)mit(dot)edu> wrote:
> On Mon, Nov 30, 2015 at 4:52 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> Not only does this build only one copy of the hash table instead of N
>> copies, but we can parallelize the hash table construction itself by
>> having all workers insert in parallel, which is pretty cool.
>
> Hm. The case where you don't want parallel building of the hash table
> might be substantially simpler. You could build a hash table and then
> copy it into shared memory as single contiguous read-only data
> structure optimized for lookups. I have an inkling that there are even
> ways of marking the memory as being read-only and not needing cache
> synchronization.

Yes, that's another approach that we could consider. I suspect it's
not really a lot better than the parallel-build case. If the inner
table is small, then it's probably best to have every worker build its
own unshared copy of the table rather than having one worker build the
table and everybody else wait, which might lead to stalls during the
build phase and additional traffic on the memory bus during the probe
phase (though, as you say, giving the kernel a hint could help in some
cases). If the inner table is big, then having everybody wait for a
single process to perform the build probably sucks.
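
For what it's worth, here is a rough, hypothetical sketch of the
approach Greg describes: build the table privately, copy the finished,
compacted structure into a shared mapping as one contiguous block, and
then mark it read-only before probing. It uses plain POSIX
mmap/mprotect rather than PostgreSQL's dynamic shared memory
machinery, and the struct layout and sizes are invented purely for
illustration.

/*
 * Hypothetical sketch only (not PostgreSQL's DSM API): one process
 * builds a hash table privately, copies the finished table into a
 * shared mapping as one contiguous block, and then marks the mapping
 * read-only before the probe phase begins.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

typedef struct
{
    int         nbuckets;
    long        buckets[];      /* flattened, lookup-only layout */
} shared_hash;

int
main(void)
{
    int         nbuckets = 1024;
    size_t      size = sizeof(shared_hash) + nbuckets * sizeof(long);
    shared_hash *scratch;
    shared_hash *shared;

    /* private scratch copy, built without any synchronization */
    scratch = malloc(size);
    if (scratch == NULL)
        return 1;
    scratch->nbuckets = nbuckets;
    for (int i = 0; i < nbuckets; i++)
        scratch->buckets[i] = -1;       /* ... insert inner tuples here ... */

    /* shared, initially writable mapping visible to all workers */
    shared = mmap(NULL, size, PROT_READ | PROT_WRITE,
                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED)
        return 1;

    /* copy only the finished, compacted table into shared memory */
    memcpy(shared, scratch, size);
    free(scratch);

    /* from here on the table is lookup-only */
    mprotect(shared, size, PROT_READ);

    printf("shared table ready: %d buckets\n", shared->nbuckets);
    return 0;
}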

But it's not impossible that there could be cases when it trumps every
other strategy. For example, if you're going to be doing a huge
number of probes, you could try building the hash table with several
different hash functions until you find one that is collision-free or
nearly so, and then use that one. The extra effort spent during the
build phase might speed up the probe phase enough to win. You can't
do that sort of thing so easily in a parallel build. Even apart from
that, if you build the hash table locally and then copy it into shared
memory afterwards, you can free up any extra memory and keep only the
minimum you really need, which could be beneficial in some cases. I'm
just not sure that's appealing enough
to justify carrying a third system for building hash tables for hash
joins.
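
As a toy illustration of the seed-hunting idea above (this is not
anything from the patch), the sketch below tries a handful of hash
seeds and keeps the one that produces the fewest bucket collisions,
stopping early if it finds a collision-free one. The hash function,
key set, and table sizes are all placeholders.

/*
 * Hypothetical sketch: spend extra effort at build time looking for a
 * hash seed with few or no bucket collisions, hoping it pays off over
 * a huge number of probes.
 */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <string.h>

#define NKEYS   1000
#define NBUCKET 4096            /* power of two, larger than NKEYS */
#define NSEEDS  16              /* how many candidate seeds to try */

/* toy 64-bit mixer; a real build would use the join's hash function */
static uint64_t
hash_u64(uint64_t key, uint64_t seed)
{
    uint64_t    h = key ^ seed;

    h ^= h >> 33;
    h *= UINT64_C(0xff51afd7ed558ccd);
    h ^= h >> 33;
    h *= UINT64_C(0xc4ceb9fe1a85ec53);
    h ^= h >> 33;
    return h;
}

/* count keys that land in an already-occupied bucket for this seed */
static int
count_collisions(const uint64_t *keys, int nkeys, uint64_t seed)
{
    static uint8_t occupied[NBUCKET];
    int         collisions = 0;

    memset(occupied, 0, sizeof(occupied));
    for (int i = 0; i < nkeys; i++)
    {
        uint64_t    b = hash_u64(keys[i], seed) & (NBUCKET - 1);

        if (occupied[b])
            collisions++;
        else
            occupied[b] = 1;
    }
    return collisions;
}

int
main(void)
{
    uint64_t    keys[NKEYS];
    uint64_t    best_seed = 0;
    int         best_collisions = NKEYS + 1;

    /* stand-in for the inner side's join keys */
    for (int i = 0; i < NKEYS; i++)
        keys[i] = (uint64_t) i * 7919 + 3;

    for (uint64_t seed = 1; seed <= NSEEDS; seed++)
    {
        int         c = count_collisions(keys, NKEYS, seed);

        if (c < best_collisions)
        {
            best_collisions = c;
            best_seed = seed;
        }
        if (c == 0)
            break;              /* collision-free: stop looking */
    }

    printf("seed %" PRIu64 " gives %d collisions for %d keys\n",
           best_seed, best_collisions, NKEYS);
    return 0;
}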

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
