From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: Marco Boeringa <marco(at)boeringa(dot)demon(dot)nl>
Cc: Andres Freund <andres(at)anarazel(dot)de>, pgsql-bugs(at)lists(dot)postgresql(dot)org, Thom Brown <thom(at)linux(dot)com>
Subject: Re: Potential "AIO / io workers" inter-worker locking issue in PG18?
Date: 2025-10-20 01:33:35
Message-ID: CAApHDvpHXvCiWADyyLmAzp+YLczBNy8rQxR0yP3gZ=noQgQhVg@mail.gmail.com
Lists: pgsql-bugs
On Mon, 20 Oct 2025 at 09:37, Marco Boeringa <marco(at)boeringa(dot)demon(dot)nl> wrote:
> I am not sure why in this nested loop, two index scans on essentially
> the same key and table are executed. You can compare this bad plan with
The query contains a self-join, so that's why you're seeing the index
scanned twice in the query plan. If that's not needed, then you should
remove it from the query. If objectid is unique for this table then I
don't see why you need to join the table again to access the very same
row that you're updating. Just put those function calls in the
UPDATE's SET clause.
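A rough sketch of that rewrite (the table and objectid column are taken from the plan below; the function call and SET target are placeholders, since the actual query wasn't shown):

```sql
-- Self-join form: joins the table back to itself on a unique key,
-- fetching the very same row that is being updated.
UPDATE osm.landcover_grassy_small_scale_2_ply AS t1
SET way = some_func(t2.way)          -- some_func is hypothetical
FROM osm.landcover_grassy_small_scale_2_ply AS t2
WHERE t1.objectid = t2.objectid;

-- If objectid is unique, t2 can only ever match t1's own row, so the
-- join is redundant. Apply the function directly to the target row:
UPDATE osm.landcover_grassy_small_scale_2_ply
SET way = some_func(way);
```

The second form lets the planner scan the table once instead of probing the index for every row.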
(We do have self-join elimination in v18, but I see that it's a bit overly strict about which duplicate relations it considers when one of them is the query's result relation. That could likely be improved so it still looks for duplicate relations including the result relation, but just never considers removing that one, only the other duplicate(s).)
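For illustration, here's the shape of query the v18 feature targets (a minimal sketch, assuming a table with a unique join key; the table name is made up):

```sql
CREATE TABLE t (objectid int PRIMARY KEY, way text);

-- The join on a unique key means t2 always matches t1's own row, so v18's
-- planner can remove the redundant self-join (this behaviour is controlled
-- by the enable_self_join_elimination setting):
EXPLAIN SELECT t1.*
FROM t t1
JOIN t t2 ON t1.objectid = t2.objectid;
```

In the UPDATE case above, the duplicate isn't currently removed because one of the relations is the query's result relation.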
> *** ACTUAL BAD PLAN AS CAPTURED BY auto_explain ***:
> -> Index Scan using
> landcover_grassy_small_scale_2_ply_pkey on
> osm.landcover_grassy_small_scale_2_ply t1 (cost=0.38..2.39 rows=1
> width=310) (actual time=5.176..462.613 rows=222396.00 loops=1)
> Output: t1.way, t1.ctid, t1.objectid
> Index Searches: 1
> Buffers: shared hit=66411
This table must have been VACUUMed or ANALYZEd either when it was
empty or when it contained 1 row. There's no predicate here, so that
estimate, aside from clamping to 1, comes directly from
pg_class.reltuples. A new table or truncated table would never
estimate 1 row as the planner always plays it safe when there are no
statistics generated yet and assumes 10 pages worth of rows. I can't
think of any specific reason why v18 behaves differently from v17 on
this... Maybe you've gotten unlucky with an autovacuum timing issue and it's running at a slightly different time than in v17, perhaps because it completed the autovacuum of another table slightly more quickly than v17 did. v18 can perform asynchronous reads for vacuum; maybe that could mean more vacuum_cost_page_hits and fewer vacuum_cost_page_misses when calculating the cost against vacuum_cost_limit.
Or, perhaps you're doing something like performing a manual VACUUM
after the tables have had all of their rows deleted?
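One way to check where that rows=1 estimate is coming from is to look at the statistics in pg_class directly (table name taken from the plan above):

```sql
-- With no predicate, the planner's row estimate comes straight from
-- pg_class.reltuples (clamped to a minimum of 1).
SELECT relpages, reltuples
FROM pg_class
WHERE oid = 'osm.landcover_grassy_small_scale_2_ply'::regclass;

-- reltuples of 0 or 1 while the table actually holds ~222k rows would
-- confirm VACUUM/ANALYZE ran when the table was (nearly) empty.
-- Refreshing the statistics should fix the plan:
ANALYZE osm.landcover_grassy_small_scale_2_ply;
```

If reltuples looks sane after an ANALYZE, the nested loop plan should go away on its own.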
David