| From: | Maxim Orlov <orlovmg(at)gmail(dot)com> |
|---|---|
| To: | Heikki Linnakangas <hlinnaka(at)iki(dot)fi> |
| Cc: | wenhui qiu <qiuwenhuifx(at)gmail(dot)com>, Alexander Korotkov <aekorotkov(at)gmail(dot)com>, Postgres hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: POC: make mxidoff 64 bits |
| Date: | 2025-11-14 15:40:50 |
| Message-ID: | CACG=ezYUJSvnuxntkURNWo_1vZ+AtmcQfqd_h6WgDzGaudfw+Q@mail.gmail.com |
| Lists: | pgsql-hackers |
On Wed, 12 Nov 2025 at 16:00, Heikki Linnakangas <hlinnaka(at)iki(dot)fi> wrote:
>
> I added an
> inlined fast path to SlruReadSwitchPage and SlruWriteSwitchPage to
> eliminate the function call overhead of those in the common case that no
> page switch is needed. With that, the 100 million mxid test case I used
> went from 1.2 s to 0.9 s. We could optimize this further but I think
> this is good enough.
>
I agree with you.
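Just for the record, the shape I have in mind is roughly the following. This is
only a sketch with made-up names (SlruReadState, SlruReadSwitchPageSlow), not
the actual patch code: the inline wrapper compares the requested page with the
one already loaded and only calls the out-of-line slow path when a real switch
is needed.

```c
/*
 * A minimal sketch of the inlined fast path idea, with made-up names
 * (SlruReadState, SlruReadSwitchPageSlow); not the actual patch code.
 */
#include <stdint.h>

typedef struct SlruReadState
{
    int64_t     cur_pageno;     /* page currently loaded, or -1 if none */
    /* ... file descriptor, page buffer, etc. would live here ... */
} SlruReadState;

/* Out-of-line slow path: would close/open segment files and read the page. */
static void
SlruReadSwitchPageSlow(SlruReadState *state, int64_t pageno)
{
    /* placeholder for the real segment switch and page read */
    state->cur_pageno = pageno;
}

/*
 * Inlined fast path: in the common case the caller asks for the page that is
 * already loaded, so we return immediately and avoid the function call; only
 * a real page switch falls through to the slow path.
 */
static inline void
SlruReadSwitchPage(SlruReadState *state, int64_t pageno)
{
    if (state->cur_pageno == pageno)
        return;
    SlruReadSwitchPageSlow(state, pageno);
}
```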
> - I added an SlruFileName() helper function to slru_io.c, and support
> for reading SLRUs with long_segment_names==true. It's not needed
> currently, but it seemed like a weird omission. AllocSlruRead() actually
> left 'long_segment_names' uninitialized which is error-prone. We
> could've just documented it, but it seems just as easy to support it.
>
Yeah, I didn't particularly like that place either. But I decided it would be
overkill to do it just for the sake of symmetry, and that it would raise
questions. It turned out much better this way.
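For anyone following along, the in-core slru.c already has a helper of this
shape (4 hex digits for short segment names, 15 for long ones), and I assume
the slru_io.c one mirrors it. A rough sketch, not the patch code:

```c
/*
 * Sketch of an SlruFileName-style helper, modeled on the existing
 * SlruFileName() in src/backend/access/transam/slru.c; the slru_io.c
 * variant in the patch may differ in detail.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAXPGPATH 1024

static int
SlruFileName(const char *dir, char *path, int64_t segno,
             bool long_segment_names)
{
    if (long_segment_names)
    {
        /* 64-bit segment numbers: 15 zero-padded hex digits */
        return snprintf(path, MAXPGPATH, "%s/%015llX", dir,
                        (long long) segno);
    }

    /* legacy short names: at least 4 hex digits */
    return snprintf(path, MAXPGPATH, "%s/%04X", dir, (unsigned int) segno);
}
```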
> I kept all the new test cases for now. We need to decide which ones are
> worth keeping, and polish and speed up the ones we decide to keep.
>
I can think of two test cases here.
A) Upgrade from a "new cluster":
* create a cluster with mxoff just below the 32-bit overflow boundary
* consume around 2k mxacts (1k before the 32-bit overflow and 1k after)
* run pg_upgrade
* check that the upgraded cluster is working
* check the data invariant
B) Same as A), but for an "old cluster" using the oldinstall env.
On Thu, 13 Nov 2025 at 19:04, Heikki Linnakangas <hlinnaka(at)iki(dot)fi> wrote:
>
> Here's a new patch version that addresses the above issue. I resurrected
> MultiXactMemberFreezeThreshold(), using the same logic as before, just
> using pretty arbitrary thresholds of 1 and 2 billion offsets instead of
> the safe/danger thresholds derived from MaxMultiOffset. That gives
> roughly the same behavior wrt. calculating effective freeze age as before.
>
Yes, I think it's okay for now. This reflects the existing logic well.
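For the archives, this is roughly how I read that interpolation: a linear
scale-down of the effective freeze age between the 1 and 2 billion offset
thresholds. A sketch with my own names and constants, not the patch itself:

```c
/*
 * Sketch of how I read the resurrected MultiXactMemberFreezeThreshold()
 * logic: linear interpolation between the 1 and 2 billion offset thresholds
 * mentioned above. Names and details here are mine, not the patch's.
 */
#include <stdint.h>

#define MEMBER_SAFE_THRESHOLD   INT64_C(1000000000)   /* 1 billion offsets */
#define MEMBER_DANGER_THRESHOLD INT64_C(2000000000)   /* 2 billion offsets */

static int64_t
EffectiveMultiXactFreezeMaxAge(int64_t members, int64_t multixacts,
                               int64_t freeze_max_age)
{
    double      fraction;
    int64_t     victim;
    int64_t     result;

    /* Below the safe threshold there is no pressure: use the GUC as-is. */
    if (members <= MEMBER_SAFE_THRESHOLD)
        return freeze_max_age;

    /*
     * Between the two thresholds, shrink the effective freeze age linearly,
     * so that at the danger threshold we aim to freeze all multixacts
     * (effective age 0).
     */
    fraction = (double) (members - MEMBER_SAFE_THRESHOLD) /
        (double) (MEMBER_DANGER_THRESHOLD - MEMBER_SAFE_THRESHOLD);
    victim = (int64_t) (multixacts * fraction);

    if (victim >= multixacts)
        return 0;
    result = multixacts - victim;
    return result < freeze_max_age ? result : freeze_max_age;
}
```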
I wonder what an alternative solution might be. Could we make "vacuum freeze"
also truncate pg_multixact segments?
In any case, this can be discussed later.
--
Best regards,
Maxim Orlov.