Re: [BUG + PATCH] DSA pagemap out-of-bounds in make_new_segment odd-sized path

From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Sami Imseih <samimseih(at)gmail(dot)com>
Cc: paul(dot)bunn(at)icloud(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [BUG + PATCH] DSA pagemap out-of-bounds in make_new_segment odd-sized path
Date: 2026-03-05 22:49:39
Message-ID: aaoIg049JvfqWBXN@paquier.xyz
Lists: pgsql-hackers

On Thu, Mar 05, 2026 at 04:25:57PM -0600, Sami Imseih wrote:
> With the above assert in place, the usable_pages of 879 as you have
> in test_dsa.c crashes due to the assertion failure.

Yep, the assertion looks useful to have in place.

> As far as the fix, I do agree with what you have in 0001-, except I am
> not so sure about the "+1 for rounding".
> Can we just do ceiling division?

Ceiling division is more precise in the end, and it is easier to
verify, so I'd tend to stick with your approach.

> + /*
> +  * We must also account for pagemap entries needed to cover the
> +  * metadata pages themselves.  The pagemap must track all pages in
> +  * the segment, including the pages occupied by metadata.
> +  */
> + metadata_bytes +=
> +     ((metadata_bytes + (FPM_PAGE_SIZE - sizeof(dsa_pointer)) - 1) /
> +      (FPM_PAGE_SIZE - sizeof(dsa_pointer))) *
> +     sizeof(dsa_pointer);
>
> I don't think we should add a test_dsa, but I do think it was useful
> to verify the issue.

I think that a test would actually be nice to have in test_dsa, but
the proposed one lacks flexibility. How about changing it to a
function that takes as input the values you'd want to test for
pagemap_start, usable_pages and the offset? The point is to run a
dsa_allocate() and then check the sanity of the resulting offset,
hence a simple test_dsa_allocate() as function name? The test module
exists down to v17, which should be enough to check things across
platforms moving forward.

> This sounds like it should be backpatched, but we'll see what a
> committer thinks.

This should be backpatched. If you had told me that the allocation
was oversized, then an optimization may have been possible on HEAD,
especially if the overestimation was large. A small reduction in
size may not even be worth worrying about in some cases, as we tend
to oversize some shmem areas as well. An undersized calculation is a
very different story: memory corruptions are not cool at all,
especially on stable branches, and they lead to unpredictable
behaviors.
--
Michael
