From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Melanie Plageman <melanieplageman(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Kirill Reshke <reshkekirill(at)gmail(dot)com>, Andrey Borodin <x4mmm(at)yandex-team(dot)ru>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>
Subject: Re: eliminate xl_heap_visible to reduce WAL (and eventually set VM on-access)
Date: 2025-09-09 14:00:04
Message-ID: CA+Tgmob05A07mtzeUGwxQKU9KZSf4BhJU9CXgcy4Pe3ZHxZrcw@mail.gmail.com
Lists: pgsql-hackers
On Mon, Sep 8, 2025 at 6:29 PM Melanie Plageman
<melanieplageman(at)gmail(dot)com> wrote:
> But, I think you're right that maintaining the order of operations
> proposed in transam/README is more important. As such, in attached
> v11, I've modified this patch and the other patches where I replace
> visibilitymap_set() with visibilitymap_set_vmbits() to exclusively
> lock the vmbuffer before the critical section.
> visibilitymap_set_vmbits() asserts that we have the vmbuffer
> exclusively locked, so we should be good.
That sounds good. I think it is OK to keep some of the odd things that
we're currently doing if they're hard to eliminate, but if they're not
really needed then I'd rather see us standardize the code. I feel (and
I think you may agree, based on other conversations that we've had)
that the visibility map code is somewhat oddly structured, and I'd
like to see us push the amount of oddness down rather than up, if we
can reasonably do so without breaking everything.
> The only difference is I replaced the phrase "LSN interlock" with
> "being dropped or truncated later in recovery" -- which is more
> specific and, I thought, more clear. Without this comment, it took me
> some time to understand the scenarios that might lead us to skip
> updating the heap block. heap_xlog_visible() has cause to describe
> this situation in an earlier comment -- which is why I think the LSN
> interlock comment is less confusing there.
>
> Anyway, I'm open to changing the comment. I could:
> 1) copy-paste the same comment as heap_xlog_visible()
> 2) refer to the comment in heap_xlog_visible() (comment seemed a bit
> short for that)
> 3) diverge the comments further by improving the new comment in
> heap_xlog_multi_insert() in some way
> 4) something else?
IMHO, copying and pasting comments is not great, and comments with
identical intent and divergent wording are also not great. The former
is not great because having a whole bunch of copies of the same
comment, especially if it's a block comment rather than a 1-liner,
uses up a bunch of space and creates a maintenance hazard in the sense
that future updates might not get propagated to all copies. The latter
is not great because it makes it hard to grep for other instances that
should be adjusted when you adjust one, and also because if one
version really is better than the other, then ideally we'd like to have
the good version everywhere. Of course, there's some tension between
these two goals. In this particular case, thinking a little harder
about your proposed change, it seems to me that "LSN interlock" is
more clear about what the immediate test is that would cause us to
skip updating the heap page, and "being dropped or truncated later in
recovery" is more clear about what the larger state of the world that
would lead to that situation is. But whatever preference anyone might
have about which way to go with that choice, it is hard to see why the
preference should go one way in one case and the other way in another
case. Therefore, I favor an approach that leads either to an identical
comment in both places, or to one comment referring to the other.
> > The second paragraph does not convince me at all. I see no reason to
> > believe that this is safe, or that it is a good idea. The code in
> > xlog_heap_visible() thinks its OK to unlock and relock the page to
> > make visibilitymap_set() happy, which is cringy but probably safe for
> > lack of concurrent writers, but skipping locking altogether seems
> > deeply unwise.
>
> Actually in master, heap_xlog_visible() has no lock on the heap page
> when it calls visibilitymap_set(). It releases that lock before
> recording the freespace in the FSM and doesn't take it again.
>
> It does unlock and relock the VM page -- because visibilitymap_set()
> expects to take the lock on the VM.
>
> I agree that not holding the heap lock while updating the VM is
> unsatisfying. We can't hold it while doing the IO to read in the VM
> block in XLogReadBufferForRedoExtended(). So, we could take it again
> before calling visibilitymap_set(). But we don't always have the heap
> buffer. I suspect this is partially why heap_xlog_visible()
> unconditionally passes InvalidBuffer to visibilitymap_set() as the
> heap buffer and has special case handling for recovery when we don't
> have the heap buffer.
You know, I wasn't thinking carefully enough about the distinction
between the heap page and the visibility map page here. I thought you
were saying that you were modifying a page without a lock on that
page, but you aren't: you're saying you're modifying a page without a
lock on another page to which it is related. The former seems
disastrous, but the latter might be OK. However, I'm sort of confused
about what the comment is trying to say to justify that:
+ * It is only okay to set the VM bits without holding the heap page lock
+ * because we can expect no other writers of this page.
It is not exactly clear to me whether "this page" here refers to the
heap page or the VM page. If it means the heap page, why should that
be so if we haven't got any kind of lock? If it means the VM page,
then why is the heap page even relevant?
--
Robert Haas
EDB: http://www.enterprisedb.com