Re: parallel joins, and better parallel explain

From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: parallel joins, and better parallel explain
Date: 2015-12-23 07:34:16
Message-ID: CAFiTN-uq8pymV3pmvUEGPXGiWuVA-oR+10ZhD9-58QW3TjB-gA@mail.gmail.com

On Tue, Dec 22, 2015 at 8:30 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> On Tue, Dec 22, 2015 at 4:14 AM, Dilip Kumar <dilipbalaut(at)gmail(dot)com>
> wrote:
> > On Fri, Dec 18, 2015 at 8:47 PM, Robert wrote:
> >>> Yes, you are right: create_gather_path() sets parallel_safe to false
> >>> unconditionally, but whenever we are building a non-partial path we
> >>> should carry the parallel_safe state forward to its parent, and it
> >>> seems that part is missing here.
> >
> >>Ah, right. Woops. I can't exactly replicate your results, but I've
> >>attempted to fix this in a systematic way in the new version attached
> >>here (parallel-join-v3.patch).
> >
> > I have tested with the latest patch, and the problem is solved.
> >
> > During my testing I observed one more behaviour in the hash join:
> > parallel hash join was taking more time than normal hash join.
>
> I think the gather-reader-order patch will fix this. Here's a test
> with all three patches.
>
>
Yeah, right. After applying all three patches this problem is fixed;
parallel hash join is now faster than normal hash join.
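
For anyone skimming the thread, here is the propagation rule we are
talking about, as a tiny standalone C model. This is purely
illustrative, with made-up names (Path here is a toy struct, not
PostgreSQL's, and make_join_path/make_gather_path are invented for the
example); it is not the planner code or the patch itself. The rule: a
path built on top of other paths can be parallel-safe only if all of
its child paths are, while a Gather path marks itself
parallel_safe = false unconditionally, since a Gather cannot run
underneath another Gather.

#include <stdbool.h>
#include <stdio.h>

typedef struct Path
{
    const char *desc;           /* human-readable label */
    bool        parallel_safe;  /* safe to run inside a parallel worker? */
} Path;

/* A join path is parallel-safe only if both of its inputs are. */
static Path
make_join_path(const char *desc, const Path *outer, const Path *inner)
{
    Path p = {desc, outer->parallel_safe && inner->parallel_safe};
    return p;
}

/* A Gather path is never parallel-safe, whatever its child says. */
static Path
make_gather_path(const char *desc, const Path *subpath)
{
    Path p = {desc, false};
    (void) subpath;             /* the child's flag is deliberately ignored */
    return p;
}

int
main(void)
{
    Path t1 = {"Seq Scan on t1", true};
    Path t2 = {"Seq Scan on t2", true};
    Path g  = make_gather_path("Gather over t1", &t1);

    /* parallel_safe=1: both inputs are parallel-safe */
    Path j1 = make_join_path("t1 join t2", &t1, &t2);
    /* parallel_safe=0: the Gather input poisons the join */
    Path j2 = make_join_path("gather(t1) join t2", &g, &t2);

    printf("%-22s parallel_safe=%d\n", j1.desc, j1.parallel_safe);
    printf("%-22s parallel_safe=%d\n", j2.desc, j2.parallel_safe);
    return 0;
}

Running this prints parallel_safe=1 for the plain join and 0 for the
join over the Gather; it is that second state which was not being
carried forward to the parent before the fix.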

I have tested one more case, which Amit mentioned; in that case the
parallel plan (parallel degree >= 3) is still slow. In the normal case
the planner selects a Hash Join, but with parallel degree >= 3 it
selects a parallel Nested Loop join, which makes the query costlier.

CREATE TABLE t1(c1, c2) AS SELECT g, repeat('x', 5) FROM generate_series(1, 10000000) g;
CREATE TABLE t2(c1, c2) AS SELECT g, repeat('x', 5) FROM generate_series(1, 3000000) g;
ANALYZE t1;
ANALYZE t2;

postgres=# set max_parallel_degree=0;
SET
postgres=# Explain Analyze SELECT count(*) FROM t1 JOIN t2 ON t1.c1 = t2.c1 AND t1.c1 BETWEEN 100 AND 200;
                                                        QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=223208.93..223208.94 rows=1 width=0) (actual time=2148.840..2148.841 rows=1 loops=1)
   ->  Hash Join  (cost=204052.91..223208.92 rows=1 width=0) (actual time=1925.309..2148.812 rows=101 loops=1)
         Hash Cond: (t2.c1 = t1.c1)
         ->  Seq Scan on t2  (cost=0.00..15406.00 rows=1000000 width=4) (actual time=0.025..104.028 rows=1000000 loops=1)
         ->  Hash  (cost=204052.90..204052.90 rows=1 width=4) (actual time=1925.219..1925.219 rows=101 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 12kB
               ->  Seq Scan on t1  (cost=0.00..204052.90 rows=1 width=4) (actual time=0.029..1925.196 rows=101 loops=1)
                     Filter: ((c1 >= 100) AND (c1 <= 200))
                     Rows Removed by Filter: 9999899
 Planning time: 0.470 ms
 Execution time: 2148.928 ms
(11 rows)

postgres=# set max_parallel_degree=3;
SET
postgres=# Explain Analyze SELECT count(*) FROM t1 JOIN t2 ON t1.c1 = t2.c1 AND t1.c1 BETWEEN 100 AND 200;
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=78278.36..78278.37 rows=1 width=0) (actual time=19944.113..19944.113 rows=1 loops=1)
   ->  Gather  (cost=1000.00..78278.36 rows=1 width=0) (actual time=0.682..19943.928 rows=101 loops=1)
         Number of Workers: 3
         ->  Nested Loop  (cost=0.00..77278.26 rows=1 width=0) (actual time=690.633..6556.201 rows=25 loops=4)
               Join Filter: (t1.c1 = t2.c1)
               Rows Removed by Join Filter: 25249975
               ->  Parallel Seq Scan on t1  (cost=0.00..58300.83 rows=0 width=4) (actual time=619.198..619.262 rows=25 loops=4)
                     Filter: ((c1 >= 100) AND (c1 <= 200))
                     Rows Removed by Filter: 2499975
               ->  Seq Scan on t2  (cost=0.00..15406.00 rows=1000000 width=4) (actual time=0.008..105.757 rows=1000000 loops=101)
 Planning time: 0.206 ms
 Execution time: 19944.748 ms
(12 rows)
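
The plan above also shows where the time goes: the inner Seq Scan on t2
reports loops=101, i.e. all 1,000,000 rows of t2 are rescanned once per
surviving outer row, so the join filter has to process roughly
101 * 1,000,000 = 101,000,000 rows in total (consistent with Rows
Removed by Join Filter: 25,249,975 per participant times loops=4). The
planner is presumably drawn to this plan because the Parallel Seq Scan
on t1 is estimated at rows=0, which makes a nested loop over it look
nearly free.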

postgres=# set max_parallel_degree=1;
SET
postgres=# Explain Analyze SELECT count(*) FROM t1 JOIN t2 ON t1.c1 = t2.c1 AND t1.c1 BETWEEN 100 AND 200;
                                                              QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=156191.39..156191.40 rows=1 width=0) (actual time=1336.401..1336.401 rows=1 loops=1)
   ->  Hash Join  (cost=137035.38..156191.39 rows=1 width=0) (actual time=1110.562..1336.386 rows=101 loops=1)
         Hash Cond: (t2.c1 = t1.c1)
         ->  Seq Scan on t2  (cost=0.00..15406.00 rows=1000000 width=4) (actual time=0.025..101.659 rows=1000000 loops=1)
         ->  Hash  (cost=137035.37..137035.37 rows=1 width=4) (actual time=1110.486..1110.486 rows=101 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 12kB
               ->  Gather  (cost=1000.00..137035.37 rows=1 width=4) (actual time=0.493..1110.445 rows=101 loops=1)
                     Number of Workers: 1
                     ->  Parallel Seq Scan on t1  (cost=0.00..136035.27 rows=1 width=4) (actual time=553.212..1107.992 rows=50 loops=2)
                           Filter: ((c1 >= 100) AND (c1 <= 200))
                           Rows Removed by Filter: 4999950
 Planning time: 0.211 ms
 Execution time: 1336.618 ms
(13 rows)

postgres=# set max_parallel_degree=2;
SET
postgres=# Explain Analyze SELECT count(*) FROM t1 JOIN t2 ON t1.c1 = t2.c1 AND t1.c1 BETWEEN 100 AND 200;
                                                             QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=101777.29..101777.29 rows=1 width=0) (actual time=1014.506..1014.507 rows=1 loops=1)
   ->  Hash Join  (cost=82621.27..101777.28 rows=1 width=0) (actual time=796.628..1014.493 rows=101 loops=1)
         Hash Cond: (t2.c1 = t1.c1)
         ->  Seq Scan on t2  (cost=0.00..15406.00 rows=1000000 width=4) (actual time=0.023..99.313 rows=1000000 loops=1)
         ->  Hash  (cost=82621.26..82621.26 rows=1 width=4) (actual time=796.552..796.552 rows=101 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 12kB
               ->  Gather  (cost=1000.00..82621.26 rows=1 width=4) (actual time=0.435..796.499 rows=101 loops=1)
                     Number of Workers: 2
                     ->  Parallel Seq Scan on t1  (cost=0.00..81621.16 rows=0 width=4) (actual time=528.052..793.243 rows=34 loops=3)
                           Filter: ((c1 >= 100) AND (c1 <= 200))
                           Rows Removed by Filter: 3333300
 Planning time: 0.200 ms
 Execution time: 1014.672 ms
(13 rows)
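
To summarize the four runs above:

max_parallel_degree | join plan chosen        | execution time
--------------------+-------------------------+----------------
 0                  | Hash Join               |  2148.928 ms
 1                  | Hash Join + Gather(t1)  |  1336.618 ms
 2                  | Hash Join + Gather(t1)  |  1014.672 ms
 3                  | Gather + Nested Loop    | 19944.748 ms

So parallelism helps as long as the planner sticks with the hash join,
and the regression appears exactly when it switches to the nested loop.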

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
