From: Melanie Plageman <melanieplageman(at)gmail(dot)com>
To: John Naylor <johncnaylorls(at)gmail(dot)com>
Cc: Peter Geoghegan <pg(at)bowt(dot)ie>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Andres Freund <andres(at)anarazel(dot)de>, Noah Misch <noah(at)leadboat(dot)com>
Subject: Re: Vacuum ERRORs out considering freezing dead tuples from before OldestXmin
Date: 2025-06-18 01:04:57
Message-ID: CAAKRu_a7AGVQm4vup8WRr8TNrv96byMpa=_2hTq_t_G1ArA5vQ@mail.gmail.com
Lists: pgsql-hackers
On Tue, Jun 17, 2025 at 3:23 AM John Naylor <johncnaylorls(at)gmail(dot)com> wrote:
>
> On Mon, Jun 16, 2025 at 9:58 PM Melanie Plageman
> <melanieplageman(at)gmail(dot)com> wrote:
> >
> > The test in the attached patch seems to do the job on both 32-bit and
> > 64-bit builds when tested.
>
> Great!
>
> +log_recovery_conflict_waits = true
>
> I don't see this on pg16 -- if this is good to have, maybe it's worth
> calling out in the commit message as a difference?
Yeah, I'm not 100% sure how useful it is: with hot_standby_feedback on
(which we need for the standby to hold the horizon back while it is
connected), we shouldn't really see recovery conflicts. I think I added
it at some point while debugging issues with the test. Perhaps I'll
remove it now, since it may be more confusing than helpful.
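For anyone following along, the relevant part of the setup looks roughly
like this (a simplified sketch, not the test verbatim; node names and
the backup name are illustrative):

use PostgreSQL::Test::Cluster;

my $node_primary = PostgreSQL::Test::Cluster->new('primary');
$node_primary->init(allows_streaming => 1);
$node_primary->start;
$node_primary->backup('my_backup');

my $node_replica = PostgreSQL::Test::Cluster->new('standby');
$node_replica->init_from_backup($node_primary, 'my_backup',
    has_streaming => 1);

# The feedback loop is what holds the primary's horizon back while the
# standby is connected -- which is why recovery conflicts (and thus
# log_recovery_conflict_waits) shouldn't normally come into play.
$node_replica->append_conf('postgresql.conf', 'hot_standby_feedback = on');
$node_replica->start;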
> +# The TIDStore which vacuum uses to store dead items is optimized for its
> +# target system. On a 32-bit system, our example requires around 9000 rows to
> +# have enough dead items spread out across enough pages to fill the TIDStore
> +# and trigger a second round of index vacuuming. We could get away with fewer
> +# rows on 64-bit systems, but it doesn't seem worth the special case.
>
> Minor quibble: I wouldn't say it is deliberately optimized (at least
> not on local memory) -- it's more of a consequence of pointer-sizes
> and the somewhat arbitrary choice to set the slab block sizes to hold
> about X number of chunks. For v19, it might be good to hard-code the
> block sizes to reduce the possibility of difference and allow a
> smaller table.
Thanks for the clarification. I'll clean this comment up in the next
version I post after resolving the issue I mention below.
> +my $nrows = 9000;
>
> Running the queries in isolation on an -m32 build shows 5 index scans,
> and I found that 4000 rows runs 3 index scans, both with and without
> asserts. Of course, I'm only using the normal 8kB block sizes. In any
> case, 9000 is already a lot less than 200000, so we can go with that
> for v17 and v18.
What's odd is that locally I now need at least 8000 tuples to get more
than one pass of index vacuuming with a 64-bit assert build -- more than
you're reporting, and more than I remember needing for 64-bit builds
when I tested this last year with your patch applied.

What's even odder is that I tested on a 32-bit build as well
(-Dc_args='-m32' -Dc_link_args='-m32' --auto-features=disabled), and it
doesn't need more than 8000 tuples to get more than one pass of index
vacuuming either.

So, currently, with nrows == 8000 I get two passes of index vacuuming on
both 32-bit and 64-bit builds.
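For reference, the isolated check I've been running looks roughly like
this (continuing from a primary set up as above; the table shape is
modeled on the test, but treat the names and constants as illustrative):

my $nrows = 8000;

# A low fillfactor spreads the dead items across more heap pages, which
# is what fills the TIDStore.
$node_primary->safe_psql('postgres', qq[
    CREATE TABLE vac_horizon_floor_table (col1 int)
        WITH (autovacuum_enabled = false, fillfactor = 10);
    INSERT INTO vac_horizon_floor_table
        SELECT generate_series(1, $nrows);
    CREATE INDEX ON vac_horizon_floor_table (col1);
    DELETE FROM vac_horizon_floor_table;
]);

# With maintenance_work_mem at its 1MB minimum, the dead items should
# overflow the TIDStore and force more than one round of index
# vacuuming.
my ($ret, $stdout, $stderr) = $node_primary->psql('postgres', qq[
    SET maintenance_work_mem = '1MB';
    VACUUM (VERBOSE) vac_horizon_floor_table;
]);

# VACUUM (VERBOSE) reports its pass count on stderr, e.g. "index scans: 2".
my ($passes) = $stderr =~ /index scans: (\d+)/;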
I can't tell what I'm doing wrong. Could you give your full build
details? There's no chance that you made a change to the TIDStore that
would make it possible for any configuration to have the same size
TIDStore on 32-bit and 64-bit builds, right?
- Melanie