Re: strange slow query - lost lot of time somewhere

From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
Cc: Jakub Wartak <Jakub(dot)Wartak(at)tomtom(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: strange slow query - lost lot of time somewhere
Date: 2022-05-06 09:27:57
Message-ID: CAApHDvrQoR4NknjOif+zC6N=CRw1rh83kptztBPgOK_i9tjHdw@mail.gmail.com
Lists: pgsql-hackers

On Fri, 6 May 2022 at 20:04, David Rowley <dgrowleyml(at)gmail(dot)com> wrote:
> Thanks for the report. I think I now see the problem. Looking at
> [1], it seems that's a bushy plan. That's fine, but less common than a
> left deep plan.

On second thoughts, it does not need to be a bushy plan for the outer
side of the nested loop to be parameterized by some higher-level
nested loop. There's an example of a plan like this in the regression
tests.

regression=# explain (analyze, costs off, summary off)
regression-# select * from tenk1 t1 left join
regression-# (tenk1 t2 join tenk1 t3 on t2.thousand = t3.unique2)
regression-# on t1.hundred = t2.hundred and t1.ten = t3.ten
regression-# where t1.unique1 = 1;
                                                QUERY PLAN
---------------------------------------------------------------------------------------------------------
 Nested Loop Left Join (actual time=0.258..0.487 rows=20 loops=1)
   ->  Index Scan using tenk1_unique1 on tenk1 t1 (actual time=0.049..0.049 rows=1 loops=1)
         Index Cond: (unique1 = 1)
   ->  Nested Loop (actual time=0.204..0.419 rows=20 loops=1)
         Join Filter: (t1.ten = t3.ten)
         Rows Removed by Join Filter: 80
         ->  Bitmap Heap Scan on tenk1 t2 (actual time=0.064..0.194 rows=100 loops=1)
               Recheck Cond: (t1.hundred = hundred)
               Heap Blocks: exact=86
               ->  Bitmap Index Scan on tenk1_hundred (actual time=0.036..0.036 rows=100 loops=1)
                     Index Cond: (hundred = t1.hundred)
         ->  Memoize (actual time=0.001..0.001 rows=1 loops=100)
               Cache Key: t2.thousand
               Cache Mode: logical
               Hits: 90  Misses: 10  Evictions: 0  Overflows: 0  Memory Usage: 4kB
               ->  Index Scan using tenk1_unique2 on tenk1 t3 (actual time=0.009..0.009 rows=1 loops=10)
                     Index Cond: (unique2 = t2.thousand)
(17 rows)

Debugging this, I see that the Memoize plan won because it was passing
10000 as the number of calls when it should have been passing 100. The
Memoize node's loops=100 agrees with that. Fixing the estimate to
correctly pass 100 gets rid of the Memoize node.
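
For anyone following along without the sources open, the "calls"
estimate in question is the final argument that get_memoize_path()
passes to create_memoize_path() in
src/backend/optimizer/path/joinpath.c. Roughly (an abbreviated sketch
of that call site, with the argument names approximated; this is not
the attached patch):

    /*
     * Sketch only: get_memoize_path() builds the MemoizePath and hands
     * create_memoize_path() the expected number of times the cache
     * will be probed.
     */
    return (Path *) create_memoize_path(root,
                                        innerrel,
                                        inner_path,
                                        param_exprs,
                                        hash_operators,
                                        extra->inner_unique,
                                        binary_mode,
                                        calls); /* should be ~100 here, not 10000 */

In the plan above, 10000 is the outer relation's total row count, while
the parameterized bitmap scan that was actually chosen as the outer
side only produces 100 rows per rescan, so the estimate needs to come
from the chosen outer path rather than the relation as a whole.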

I've attached a patch to fix this. I'll look at it in more detail after the weekend.

I'm very tempted to change the EXPLAIN output, at least in master, to
display the initial and final (maximum) hash table sizes. Would anyone
object to that?
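
To make that concrete, a purely illustrative mock-up (the field name,
placement and numbers below are invented, not settled output) might
extend the Memoize section of the example above along these lines:

         ->  Memoize (actual time=0.001..0.001 rows=1 loops=100)
               Cache Key: t2.thousand
               Cache Mode: logical
               Hash Buckets: 16 (originally 16)
               Hits: 90  Misses: 10  Evictions: 0  Overflows: 0  Memory Usage: 4kB

i.e. the final and initial bucket counts, similar in spirit to the
"Buckets: N (originally M)" detail that Hash nodes already report.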

David

Attachment Content-Type Size
fix_incorrect_calls_estimate_in_memoize.patch text/plain 2.9 KB
