From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: xiaohongjun(at)stu(dot)xidian(dot)edu(dot)cn
Cc: pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #18935: The optimiser's choice of sort doubles the execution time.
Date: 2025-05-19 15:21:27
Message-ID: 409752.1747668087@sss.pgh.pa.us
Lists: pgsql-bugs

PG Bug reporting form <noreply(at)postgresql(dot)org> writes:
> database4=# explain analyze SELECT t0.c0 FROM t0 INNER JOIN t1* ON
> ((t1.c0)=(((t1.c0)-(((((t1.c0)*('(-795716537,-245904803]'::int4range)))-(range_merge(t1.c0,
> t0.c0))))))) GROUP BY t0.c0;
[ planner incorrectly prefers sort/group over hashed grouping ]
I don't think there's much to be done about this. The core of the
problem is that the estimate of the number of rows coming into the
grouping step is off by more than two orders of magnitude:
> -> Nested Loop (cost=0.00..363.13 rows=70 width=13) (actual
> time=0.055..8.431 rows=12688 loops=1)
There's little point in complaining that the cost of the grouping
is off by a factor of two when there's such a large error in its
input information.
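For reference (not in the original reply, just arithmetic on the EXPLAIN output quoted above), the size of the misestimate can be checked in any psql session:

```sql
-- Planner estimated 70 rows into the grouping step; 12688 actually arrived.
SELECT 12688.0 / 70 AS estimation_error_factor;  -- roughly 181x, i.e. more than two orders of magnitude
```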
So the main thing that would have to be done is improving the
selectivity estimate for
> Join Filter: (t1.c0 = (t1.c0 - ((t1.c0 *
> '[-795716536,-245904802)'::int4range) - range_merge(t1.c0, t0.c0))))
If this condition weren't so obviously random junk generated by
a fuzzer, maybe people would be motivated to try to improve that.
But as it stands, there's neither a clear path to improving it
nor a lot of motivation to try.
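(An editorial aside, not part of the reply above: if one wants to confirm that hashed grouping really would win for a given query, PostgreSQL's standard planner GUCs allow a session-local experiment. This is a diagnostic sketch only; the join condition is elided and stands in for the original query's filter.)

```sql
-- Session-local experiment: forbid sort-based plans so the planner
-- must pick hashed grouping, then compare EXPLAIN ANALYZE timings.
SET enable_sort = off;
EXPLAIN ANALYZE
SELECT t0.c0 FROM t0 INNER JOIN t1 ON (...) GROUP BY t0.c0;
RESET enable_sort;
```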
regards, tom lane