Query running for very long time (server hanged) with parallel append

From: Rajkumar Raghuwanshi <rajkumar(dot)raghuwanshi(at)enterprisedb(dot)com>
To: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Query running for very long time (server hanged) with parallel append
Date: 2018-02-01 10:29:06
Message-ID: CAKcux6kfXvOgz5WwE7Pc+pW+OpW-+nvcu9ybJF+jvq+nA87J+g@mail.gmail.com
Lists: pgsql-hackers

Hi,

I am hitting what looks like a server hang with the settings given below.
The issue may show up after running the query once or several times (it is
random), and the terminal does not come back even after cancelling the query.
The query and its EXPLAIN output are given below.

SET enable_hashjoin TO off;
SET enable_nestloop TO off;
SET enable_seqscan TO off;
SET parallel_setup_cost=0;
SET parallel_tuple_cost=0;
SET max_parallel_workers_per_gather=4;
SET enable_parallel_append = on;
SET enable_partition_wise_join TO true;
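As an optional sanity check (not needed for the repro), the relevant planner
settings can be confirmed in the session with something like:

SELECT name, setting
FROM pg_settings
WHERE name IN ('enable_parallel_append',
               'enable_partition_wise_join',
               'enable_hashjoin',
               'enable_nestloop',
               'enable_seqscan',
               'max_parallel_workers_per_gather');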

CREATE TABLE plt1 (a int, b int, c text) PARTITION BY LIST(c);
CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN ('0000', '0003',
'0004', '0010');
CREATE TABLE plt1_p2 PARTITION OF plt1 FOR VALUES IN ('0001', '0005',
'0002', '0009');
CREATE TABLE plt1_p3 PARTITION OF plt1 FOR VALUES IN ('0006', '0007',
'0008', '0011');
INSERT INTO plt1 SELECT i, i, to_char(i%12, 'FM0000') FROM
generate_series(0, 599, 2) i;

CREATE TABLE plt2 (a int, b int, c text) PARTITION BY LIST(c);
CREATE TABLE plt2_p1 PARTITION OF plt2 FOR VALUES IN ('0000', '0003',
'0004', '0010');
CREATE TABLE plt2_p2 PARTITION OF plt2 FOR VALUES IN ('0001', '0005',
'0002', '0009');
CREATE TABLE plt2_p3 PARTITION OF plt2 FOR VALUES IN ('0006', '0007',
'0008', '0011');
INSERT INTO plt2 SELECT i, i, to_char(i%12, 'FM0000') FROM
generate_series(0, 599, 3) i;

CREATE INDEX iplt1_p1_c on plt1_p1(c);
CREATE INDEX iplt1_p2_c on plt1_p2(c);
CREATE INDEX iplt1_p3_c on plt1_p3(c);
CREATE INDEX iplt2_p1_c on plt2_p1(c);
CREATE INDEX iplt2_p2_c on plt2_p2(c);
CREATE INDEX iplt2_p3_c on plt2_p3(c);

ANALYZE plt1;
ANALYZE plt2;
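Optionally (again not needed for the repro), the spread of the generated rows
across the list partitions can be checked with something like:

SELECT tableoid::regclass AS partition, count(*) AS row_count
FROM plt1
GROUP BY 1
ORDER BY 1;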

EXPLAIN (COSTS OFF) SELECT DISTINCT t1.c,count(*) FROM plt1 t1 LEFT JOIN
LATERAL (SELECT t2.c AS t2c, t3.c AS t3c, least(t1.c,t2.c,t3.c) FROM plt1
t2 JOIN plt2 t3 ON (t2.c = t3.c)) ss ON t1.c = ss.t2c WHERE t1.a % 25 = 0
GROUP BY 1 ORDER BY 1,2;
                                                     QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
 Unique
   ->  Sort
         Sort Key: t1.c, (count(*))
         ->  Finalize GroupAggregate
               Group Key: t1.c
               ->  Sort
                     Sort Key: t1.c
                     ->  Gather
                           Workers Planned: 2
                           ->  Partial HashAggregate
                                 Group Key: t1.c
                                 ->  Parallel Append
                                       ->  Merge Right Join
                                             Merge Cond: (t2.c = t1.c)
                                             ->  Merge Join
                                                   Merge Cond: (t3.c = t2.c)
                                                   ->  Index Only Scan using iplt2_p1_c on plt2_p1 t3
                                                   ->  Materialize
                                                         ->  Index Only Scan using iplt1_p1_c on plt1_p1 t2
                                             ->  Materialize
                                                   ->  Index Scan using iplt1_p1_c on plt1_p1 t1
                                                         Filter: ((a % 25) = 0)
                                       ->  Merge Left Join
                                             Merge Cond: (t1_2.c = t2_2.c)
                                             ->  Parallel Index Scan using iplt1_p3_c on plt1_p3 t1_2
                                                   Filter: ((a % 25) = 0)
                                             ->  Materialize
                                                   ->  Merge Join
                                                         Merge Cond: (t3_2.c = t2_2.c)
                                                         ->  Index Only Scan using iplt2_p3_c on plt2_p3 t3_2
                                                         ->  Materialize
                                                               ->  Index Only Scan using iplt1_p3_c on plt1_p3 t2_2
                                       ->  Merge Left Join
                                             Merge Cond: (t1_1.c = t2_1.c)
                                             ->  Parallel Index Scan using iplt1_p2_c on plt1_p2 t1_1
                                                   Filter: ((a % 25) = 0)
                                             ->  Materialize
                                                   ->  Merge Join
                                                         Merge Cond: (t2_1.c = t3_1.c)
                                                         ->  Index Only Scan using iplt1_p2_c on plt1_p2 t2_1
                                                         ->  Index Only Scan using iplt2_p2_c on plt2_p2 t3_1
(41 rows)

SELECT DISTINCT t1.c,count(*) FROM plt1 t1 LEFT JOIN LATERAL (SELECT t2.c
AS t2c, t3.c AS t3c, least(t1.c,t2.c,t3.c) FROM plt1 t2 JOIN plt2 t3 ON
(t2.c = t3.c)) ss ON t1.c = ss.t2c WHERE t1.a % 25 = 0 GROUP BY 1 ORDER BY
1,2;
.
.
.
"hanged".

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
