Re: Add progressive backoff to XactLockTableWait functions

From: Xuneng Zhou <xunengzhou(at)gmail(dot)com>
To: Fujii Masao <masao(dot)fujii(at)oss(dot)nttdata(dot)com>, Andres Freund <andres(at)anarazel(dot)de>
Cc: Kevin K Biju <kevinkbiju(at)gmail(dot)com>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Add progressive backoff to XactLockTableWait functions
Date: 2025-07-16 15:57:33
Message-ID: CABPTF7Wbp7MRPGsqd9NA4GbcSzUcNz1ymgWfir=Yf+N0oDRbjA@mail.gmail.com
Lists: pgsql-hackers

Hi all,

I spent some extra time walking the code to see where
XactLockTableWait() actually fires.
A condensed recap:

1) Current call-paths

A. Logical walsender (XLogSendLogical → … → SnapBuildWaitSnapshot) on
a cascading standby

B. SQL slot functions:
   pg_logical_slot_get_changes[_peek]
   create_logical_replication_slot
   pg_sync_replication_slots
   pg_replication_slot_advance
   binary_upgrade_logical_slot_has_caught_up
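
Both paths converge on SnapBuildWaitSnapshot() during initial snapshot
building. A condensed sketch of the loop in snapbuild.c (error checks
and logging elided) shows where the per-xid waits happen:

    static void
    SnapBuildWaitSnapshot(xl_running_xacts *running, TransactionId cutoff)
    {
        for (int off = 0; off < running->xcnt; off++)
        {
            TransactionId xid = running->xids[off];

            /* nothing to wait for if the xid already finished */
            if (!TransactionIdIsInProgress(xid))
                continue;

            /* block on a single xid; the polling happens in here */
            XactLockTableWait(xid, NULL, NULL, XLTW_None);
        }
    }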

2) How many backends and XIDs in practice

A. Logical walsenders on a cascading standby
One per replication connection, capped by max_wal_senders
(default 10); busy hubs might run 10–40.

B. Logical slot creation is infrequent and bounded by
max_replication_slots (default 10);
the other functions are not called often either.

C. Wait pattern
During a snapshot build, SnapBuildWaitSnapshot() waits for one xid at
a time (see the sketch under 1).

So, under today's workloads, both the number of waited-for xids and
the number of concurrent waiters stay modest.
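
Given those modest counts, keeping the polling in XactLockTableWait()
but backing it off progressively seems proportionate. As a minimal
sketch of the idea (not the patch itself; the doubling factor and the
1 s cap are illustrative assumptions, and the subtransaction handling
in lmgr.c is elided):

    LOCKTAG     tag;
    long        sleep_us = 1000L;           /* today's fixed 1 ms interval */
    const long  max_sleep_us = 1000000L;    /* hypothetical 1 s cap */

    for (;;)
    {
        SET_LOCKTAG_TRANSACTION(tag, xid);
        (void) LockAcquire(&tag, ShareLock, false, false);
        LockRelease(&tag, ShareLock, false);

        /* done once the xid is no longer in progress */
        if (!TransactionIdIsInProgress(xid))
            break;

        /* sleep, then double the interval up to the cap */
        pg_usleep(sleep_us);
        sleep_us = Min(sleep_us * 2, max_sleep_us);
    }

As I understand it, the fixed pg_usleep(1000L) mainly matters on
standbys, where the waited-for xid holds no lock in the local lock
table and the loop degenerates into polling; on a primary the
LockAcquire() call simply blocks until the xid commits or aborts.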

3) Future growth

Some future features could multiply the number of concurrent waiters,
but I don't have enough knowledge to predict what those workloads
would look like.

Feedback welcome.

Best,
Xuneng
