| From: | Sergey Naumov <sknaumov(at)gmail(dot)com> |
|---|---|
| To: | David Rowley <dgrowleyml(at)gmail(dot)com> |
| Cc: | pgsql-bugs(at)lists(dot)postgresql(dot)org |
| Subject: | Re: BUG #19332: Sudden 330x performance degradation of SELECT amid INSERTs |
| Date: | 2025-12-04 10:19:52 |
| Message-ID: | CAH3pVZO_awD7p-FQ-32XD47eY+Jzu6j=y95NAjbiuMq8+HOgng@mail.gmail.com |
| Lists: | pgsql-bugs |
Hello again.
I've collected the slow and fast query plans, and it looks like that after the
data is cleaned up, PostgreSQL no longer knows which table is big and which
is small. When data generation runs in one big transaction, rows from that
uncommitted transaction already affect the SELECT queries, but VACUUM cannot
see the uncommitted data to adjust the statistics, so the query planner can
come up with a suboptimal plan.
Still, the query itself contains a hint about which table should be filtered
first: there is a WHERE clause that keeps just one row from that table. The
planner nevertheless decides to join another (very big) table first, and
performance degrades by orders of magnitude.
To me this looks like a flaw in the query planner's logic: having no
statistics about the tables' contents, it ignores the WHERE clause that
indicates which table should be processed first. I'm not sure whether this
should be treated as a performance issue or as a bug.
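For what it's worth, a possible workaround is sketched below. All table and
column names are illustrative (not from the actual workload), but the idea
relies on the documented fact that ANALYZE run inside the loading transaction
sees that transaction's own uncommitted rows, so the statistics are refreshed
before the SELECT runs:

```sql
BEGIN;
-- bulk data generation (details elided)
INSERT INTO big_table (small_id, payload)
SELECT s.id, 'x' FROM small_table s, generate_series(1, 100000);

-- Refresh statistics: ANALYZE uses this transaction's snapshot, so it
-- counts the rows inserted above even though they are not yet committed.
ANALYZE big_table;

-- The selective WHERE clause should now drive the join order.
EXPLAIN
SELECT *
FROM small_table s
JOIN big_table b ON b.small_id = s.id
WHERE s.id = 42;
COMMIT;
```

This does not address the underlying planner behavior, but it may confirm
whether stale statistics are the cause of the slow plan.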
Query plans are attached as PEV2 standalone HTML pages.
Thanks,
Sergey.
| Attachment | Content-Type | Size |
|---|---|---|
| fast.html | text/html | 7.5 KB |
| slow.html | text/html | 7.7 KB |