Re: strange slow query - lost lot of time somewhere

From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: David Rowley <dgrowleyml(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: strange slow query - lost lot of time somewhere
Date: 2022-05-03 04:09:13
Message-ID: CAFj8pRDdSsLja+-KiVZyWx9HQVuwnrs2e6-4+K_jGH=uGsPm6A@mail.gmail.com
Lists: pgsql-hackers

On Mon, May 2, 2022 at 11:48 PM David Rowley <dgrowleyml(at)gmail(dot)com> wrote:

> On Mon, 2 May 2022 at 21:00, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
> wrote:
> > I found a query that is significantly slower with more memory
>
> Can you clarify what you mean here? More memory was installed on the
> machine? or work_mem was increased? or?
>
> > plan 1 - fast https://explain.depesz.com/s/XM1f
> >
> > plan 2 - slow https://explain.depesz.com/s/2rBw
>
> If it was work_mem you increased, it seems strange that the plan would
> switch over to using a Nested Loop / Memoize plan. Only 91 rows are
> estimated on the outer side of the join. It's hard to imagine that
> work_mem was so low that the Memoize costing code thought there would
> ever be cache evictions.
>
> > Strange - the time to the last row is +/- the same, but execution time is 10x worse
> >
> > It looks like slow cleanup of the execution environment
>
> Can you also show EXPLAIN for the Memoize plan without ANALYZE?
>

yes - it is strange - it is slow even without any execution
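
(measured in psql with \timing on; what follows is a plain EXPLAIN with no
ANALYZE, so the query itself never runs - roughly this, with the application
query elided:

  \timing on
  EXPLAIN SELECT ... ;
)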

                                                            QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
 Nested Loop Anti Join  (cost=59.62..3168.15 rows=1 width=16)
   ->  Nested Loop Anti Join  (cost=59.34..3165.24 rows=1 width=16)
         ->  Nested Loop Semi Join  (cost=58.48..3133.09 rows=1 width=16)
               ->  Bitmap Heap Scan on item itembo0_  (cost=57.34..2708.22 rows=11 width=16)
                     Recheck Cond: ((ending_time IS NULL) OR ((status_id = ANY ('{1,4,5}'::bigint[])) AND (CURRENT_TIMESTAMP < ending_time) AND (starting_time <= CURRENT_TIMESTAMP)))
                     Filter: ((to_expose_date IS NULL) AND (status_id = ANY ('{1,4,5}'::bigint[])) AND (starting_time <= CURRENT_TIMESTAMP) AND ((ending_time IS NULL) OR (CURRENT_TIMESTAMP < ending_time)))
                     ->  BitmapOr  (cost=57.34..57.34 rows=1751 width=0)
                           ->  Bitmap Index Scan on stehule354  (cost=0.00..2.08 rows=1 width=0)
                                 Index Cond: (ending_time IS NULL)
                           ->  Bitmap Index Scan on stehule1010  (cost=0.00..55.26 rows=1751 width=0)
                                 Index Cond: ((status_id = ANY ('{1,4,5}'::bigint[])) AND (ending_time > CURRENT_TIMESTAMP) AND (starting_time <= CURRENT_TIMESTAMP))
               ->  Nested Loop  (cost=1.14..37.71 rows=91 width=8)
                     ->  Index Only Scan using uq_isi_itemid_itemimageid on item_share_image itemsharei2__1  (cost=0.57..3.80 rows=91 width=16)
                           Index Cond: (item_id = itembo0_.id)
                     ->  Memoize  (cost=0.57..2.09 rows=1 width=8)
                           Cache Key: itemsharei2__1.item_image_id
                           Cache Mode: logical
                           ->  Index Only Scan using pk_item_image on item_image itemimageb3__1  (cost=0.56..2.08 rows=1 width=8)
                                 Index Cond: (id = itemsharei2__1.item_image_id)
         ->  Nested Loop  (cost=0.85..32.14 rows=1 width=8)
               ->  Index Only Scan using uq_isi_itemid_itemimageid on item_share_image itemsharei2_  (cost=0.57..3.80 rows=91 width=16)
                     Index Cond: (item_id = itembo0_.id)
               ->  Memoize  (cost=0.29..1.72 rows=1 width=8)
                     Cache Key: itemsharei2_.item_image_id
                     Cache Mode: logical
                     ->  Index Only Scan using stehule3001 on item_image itemimageb3_  (cost=0.28..1.71 rows=1 width=8)
                           Index Cond: (id = itemsharei2_.item_image_id)
   ->  Index Only Scan using ixfk_ima_itemid on item_missing_attribute itemmissin1_  (cost=0.28..1.66 rows=1 width=8)
         Index Cond: (item_id = itembo0_.id)
(29 rows)

Time: 420.392 ms
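
(side note: plain EXPLAIN can also report its own planning time without
running anything, via the SUMMARY option - a sketch, same elided query:

  EXPLAIN (SUMMARY ON) SELECT ... ;   -- prints "Planning Time" even without ANALYZE
)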

> Does the slowness present every time that plan is executed?
>

yes, it looks that way

>
> Can you show the EXPLAIN ANALYZE of the nested loop plan with
> enable_memoize = off? You may need to disable hash and merge join.
>
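
(the plan below is with memoize disabled for the session - roughly like this,
as a sketch; the SELECT is the same application query, elided here:

  SET enable_memoize = off;
  EXPLAIN (ANALYZE) SELECT ... ;
)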

                                                            QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
 Nested Loop Anti Join  (cost=1093.22..4488.89 rows=1 width=16) (actual time=5.723..60.470 rows=13 loops=1)
   ->  Nested Loop Anti Join  (cost=1092.94..4485.97 rows=1 width=16) (actual time=5.165..60.368 rows=41 loops=1)
         ->  Gather  (cost=1001.70..4391.26 rows=1 width=16) (actual time=1.909..56.913 rows=41 loops=1)
               Workers Planned: 2
               Workers Launched: 2
               ->  Nested Loop Semi Join  (cost=1.70..3391.16 rows=1 width=16) (actual time=22.032..39.253 rows=14 loops=3)
                     ->  Parallel Index Only Scan using stehule1010 on item itembo0_  (cost=0.57..2422.83 rows=5 width=16) (actual time=21.785..38.851 rows=14 loops=3)
                           Index Cond: ((status_id = ANY ('{1,4,5}'::bigint[])) AND (starting_time <= CURRENT_TIMESTAMP))
                           Filter: ((to_expose_date IS NULL) AND ((ending_time IS NULL) OR (CURRENT_TIMESTAMP < ending_time)))
                           Rows Removed by Filter: 1589
                           Heap Fetches: 21
                     ->  Nested Loop  (cost=1.13..192.76 rows=91 width=8) (actual time=0.029..0.029 rows=1 loops=41)
                           ->  Index Only Scan using uq_isi_itemid_itemimageid on item_share_image itemsharei2__1  (cost=0.57..3.80 rows=91 width=16) (actual time=0.015..0.015 rows=1 loops=41)
                                 Index Cond: (item_id = itembo0_.id)
                                 Heap Fetches: 2
                           ->  Index Only Scan using pk_item_image on item_image itemimageb3__1  (cost=0.56..2.08 rows=1 width=8) (actual time=0.013..0.013 rows=1 loops=41)
                                 Index Cond: (id = itemsharei2__1.item_image_id)
                                 Heap Fetches: 2
         ->  Hash Join  (cost=91.24..94.71 rows=1 width=8) (actual time=0.084..0.084 rows=0 loops=41)
               Hash Cond: (itemsharei2_.item_image_id = itemimageb3_.id)
               ->  Index Only Scan using uq_isi_itemid_itemimageid on item_share_image itemsharei2_  (cost=0.57..3.80 rows=91 width=16) (actual time=0.003..0.004 rows=6 loops=41)
                     Index Cond: (item_id = itembo0_.id)
                     Heap Fetches: 2
               ->  Hash  (cost=67.41..67.41 rows=1861 width=8) (actual time=3.213..3.214 rows=3950 loops=1)
                     Buckets: 4096 (originally 2048)  Batches: 1 (originally 1)  Memory Usage: 187kB
                     ->  Index Only Scan using stehule3001 on item_image itemimageb3_  (cost=0.28..67.41 rows=1861 width=8) (actual time=0.029..2.479 rows=3950 loops=1)
                           Heap Fetches: 2203
   ->  Index Only Scan using ixfk_ima_itemid on item_missing_attribute itemmissin1_  (cost=0.28..1.66 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=41)
         Index Cond: (item_id = itembo0_.id)
         Heap Fetches: 0
 Planning Time: 1.471 ms
 Execution Time: 60.570 ms
(32 rows)

Time: 62.982 ms

>
> David
>
