From: Tomas Vondra <tomas(at)vondra(dot)me>
To: Ashutosh Bapat <ashutosh(dot)bapat(dot)oss(at)gmail(dot)com>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org, Jack Ng <Jack(dot)Ng(at)huawei(dot)com>, Ni Ku <jakkuniku(at)gmail(dot)com>
Subject: Re: Changing shared_buffers without restart
Date: 2025-07-04 00:06:16
Message-ID: a162df8e-fc90-4645-822b-93e7c9c94608@vondra.me
Lists: pgsql-hackers
Hi Ashutosh and Dmitry,
I took a look at this patch, because it's somewhat related to the NUMA
patch series I posted a couple days ago, and I've been wondering if
it makes some of the NUMA stuff harder or simpler.
I don't think it makes a big difference (for the NUMA stuff). My main
question was when would we adjust the "NUMA location" of parts of memory
to keep stuff balanced, but this patch series already needs to update
some of these structs (like the freelists), so those places would be
updated to be NUMA-aware. Some of the changes could be made lazily,
to minimize the amount of time when activity is stopped (like shifting
the buffers to different NUMA nodes). It'd be harder if we wanted to
resize e.g. PGPROC, but that's not the case. So I think this is fine.
I agree it'd be useful to be able to resize shared buffers without
having to restart the instance (which is obviously very disruptive). So
it'd be great if we can make this work reliably, with reasonable
trade-offs (both in overhead on the backends, and in the risks and
complexity introduced by the feature).
I'm far from an expert on mmap() and similar low-level stuff, but the
current approach (reserving a big chunk of shared memory and slicing
it by mmap() into smaller segments) seems reasonable.
But I'm getting a bit lost in how exactly this interacts with things
like overcommit, system memory accounting / OOM killer and this sort of
stuff. I went through the thread and it seems to me the reserve+map
approach works OK in this regard (and the messages on linux-mm seem to
confirm this). But this information is scattered over many messages and
it's hard to say for sure, because some of this might be relevant for
an earlier approach, or a subtly different variant of it.
A similar question is portability. The comments and commit messages
seem to suggest most of this is linux-specific, and other platforms just
don't have these capabilities. But there's a bunch of messages (mostly
by Thomas Munro) that hint FreeBSD might be capable of this too, even if
to some limited extent. And possibly even Windows/EXEC_BACKEND, although
that seems much trickier.
FWIW I think it's perfectly fine to only support resizing on selected
platforms, especially considering Linux is the most widely used system
for running Postgres. We still need to be able to build/run on other
systems, of course. And maybe it'd be good to be able to disable this
even on Linux, if that eliminates some overhead and/or risks for people
who don't need the feature. Just a thought.
Anyway, my main point is that this information is important, but very
scattered over the thread. It's a bit foolish to expect everyone who
wants to do a review to read the whole thread (which will inevitably
grow longer over time), and assemble all these pieces again and again,
following all the changes in the design etc. Few people will get over
that hurdle, IMHO.
So I think it'd be very helpful to write a README, explaining the
current design/approach, and summarizing all these aspects in a single
place. Including things like portability, interaction with the OS
accounting, OOM killer, this kind of stuff. Some of this stuff may be
already mentioned in code comments, but it's hard to find those.
Especially worth documenting are the states the processes need to go
through (using the barriers), and the transitions between them (i.e.
what is allowed in each phase, what blocks can be visible, etc.).
I'll go over some higher-level items first, and then over some comments
for individual patches.
1) no user docs
There are no user .sgml docs, and maybe it's time to write some,
explaining how to use this thing - how to configure it, how to trigger
the resizing, etc. It took me a while to realize I need to do ALTER
SYSTEM + pg_reload_conf() to kick this off.
It should also document the user-visible limitations, e.g. what activity
is blocked during the resizing, etc.
2) pending GUC changes
I'm somewhat skeptical about the GUC approach. I don't think it was
designed with this kind of use case in mind, and so I think it's quite
likely it won't be able to handle it well.
For example, there's almost no validation of the values, so how do you
ensure the new value makes sense? Because if it doesn't, it can easily
crash the system (I've seen such crashes repeatedly, I'll get to that).
Sure, you may do ALTER SYSTEM to set shared_buffers to nonsense and it
won't start after restart/reboot, but crashing an instance is maybe a
little bit more annoying.
Let's say we did the ALTER SYSTEM + pg_reload_conf(), and it gets stuck
waiting on something (can't evict a buffer or something). How do you
cancel it, when the change is already written to the .auto.conf file?
Can you simply do ALTER SYSTEM + pg_reload_conf() again?
It also seems a bit strange that the "switch" gets to be driven by a
randomly selected backend (unless I'm misunderstanding this bit). It
seems to be true for the buffer eviction during shrinking, at least.
Perhaps this should be a separate utility command, or maybe even just
a new ALTER SYSTEM variant? Or even just a function, similar to what
the "online checksums" patch did, possibly combined with a bgworker
(but probably not needed, there are no db-specific tasks to do).
3) max_available_memory
Speaking of GUCs, I dislike how max_available_memory works. It seems a
bit backwards to me. Normally we specify shared_buffers (and some other
parameters), and the system calculates the amount of shared memory
needed. But here a GUC directly determines the total limit?
I think the GUC should specify the maximum shared_buffers we want to
allow, and then we'd work out the total to pre-allocate? Considering
we're only allowing to resize shared_buffers, that should be pretty
trivial. Yes, it might happen that the "total limit" happens to exceed
the available memory or something, but we already have the problem
with shared_buffers. Seems fine if we explain this in the docs, and
perhaps print the calculated memory limit on start.
In any case, we should not allow setting a value that ends up
overflowing the internal reserved space. It's true we don't have a good
way to do such checks for GUCs, but it's a bit silly to crash because of
hitting some non-obvious internal limit that we don't necessarily even
know about.
Maybe this is a reason why GUC hooks are not a good way to set this.
4) SHMEM_RESIZE_RATIO
The SHMEM_RESIZE_RATIO thing seems a bit strange too. There's no way
these ratios can make sense. For example, BLCKSZ is 8192 but the buffer
descriptor is 64B. That's 128x difference, but the ratios says 0.6 and
0.1, so 6x. Sure, we'll actually allocate only the memory we need, and
the rest is only "reserved".
However, that just makes the max_available_memory a bit misleading,
because you can't ever use it. You can use the 60% for shared buffers
(which is not mentioned anywhere, and good luck not overflowing that,
as it's never checked), but those smaller regions are guaranteed to be
mostly unused. Unfortunate.
And it's not just a matter of fixing those ratios, because then someone
rebuilds with 32kB blocks and you're in the same situation.
Moreover, all of the above is for mappings sized based on NBuffers. But
if we allocate 10% for MAIN_SHMEM_SEGMENT, won't that be a problem the
moment someone increases max_connections, max_locks_per_transaction
and possibly some other stuff?
5) no tests
I mentioned no "user docs", but the patch has 0 tests too. Which seems
a bit strange for a patch of this age.
A really serious part of the patch series seems to be the coordination
of processes when going through the phases, enforced by the barriers.
This seems like a perfect match for testing using injection points, and
I know we did something like this in the online checksums patch, which
needs to coordinate processes in a similar way.
But even just a simple TAP test that does a bunch of (random?) resizes
while running a pgbench seem better than no tests. (That's what I did
manually, and it crashed right away.)
There's a lot more stuff to test here, I think. Idle sessions with
buffers pinned by open cursors, multiple backends doing ALTER SYSTEM
+ pg_reload_conf concurrently, other kinds of failures.
6) SIGBUS failures
As mentioned, I did some simple tests with shrink/resize with a pgbench
in the background, and it almost immediately crashed for me :-( With a
SIGBUS, which I think is fairly rare on x86 (definitely much less common
than e.g. SIGSEGV).
An example backtrace attached.
7) EXEC_BACKEND, FreeBSD
We clearly need to keep this working on systems without the necessary
bits (so likely EXEC_BACKEND, FreeBSD etc.). But the builds currently
fail in both cases, it seems.
I think it's fine to not support resizing on every platform (otherwise
we'd never get it done), but it still needs to build. It would be good
to not have two very different code versions, one for resizing and one
without it, though. I wonder if we can just have the "no-resize" case
use the same structs (with the segments/mappings, ...) and all that,
but skip the space reservation.
8) monitoring
So, let's say I start a resize of shared buffers. How will I know what
it's currently doing, how much longer it might take, what it's waiting
for, etc.? I think it'd be good to have progress monitoring, through
the regular system view (e.g. pg_stat_shmem_resize_progress?).
10) what to do about stuck resize?
AFAICS the resize can get stuck for various reasons, e.g. because it
can't evict pinned buffers, possibly indefinitely. Not great, it's not
clear to me if there's a way out (canceling the resize) after a timeout,
or something like that? Not great to start an "online resize" only to
get stuck with all activity blocked for an indefinite amount of time,
and then have to restart anyway.
Seems related to Thomas' message [2], but AFAICS the patch does not do
anything about this yet, right? What's the plan here?
11) preparatory actions?
Even if it doesn't get stuck, some of the actions can take a while, like
evicting dirty buffers before shrinking, etc. This is similar to what
happens on restart, when the shutdown checkpoint can take a while, while
the system is (partly) unavailable.
The common mitigation is to do an explicit checkpoint right before the
restart, to make the shutdown checkpoint cheap. Could we do something
similar for the shrinking, e.g. flush buffers from the part to be
removed before actually starting the resize?
12) does this affect e.g. fork() costs?
I wonder if this affects the cost of fork() in some undesirable way.
Could it make fork() measurably more expensive?
13) resize "state" is all over the place
For me, a big hurdle when reasoning about the resizing correctness is
that there's quite a lot of distinct pieces tracking what the current
"state" is. I mean, there's:
- ShmemCtrl->NSharedBuffers
- NBuffers
- NBuffersOld
- NBuffersPending
- ... (I'm sure I missed something)
There's no cohesive description of how this fits together; it seems a
bit "ad hoc". Could be correct, but I find it hard to reason about.
14) interesting messages from the thread
While reading through the thread, I noticed a couple messages that I
think are still relevant:
- I see Peter E posted some review in 2024/11 [3], but it seems his
comments were mostly ignored. I agree with most of them.
- Robert mentioned a couple interesting failure scenarios in [4], not
sure if all of this was handled. He however assumes pointers would
not be stable (and that's something we should not allow, and the
current approach works OK in this regard, I think). He also outlines
how it'd happen in phases - this would be useful for the design README
I think. It also reminds me the "phases" in the checksums patch.
- Robert asked [5] if Linux might abruptly break this, but I find that
unlikely. We'd point out we rely on this, and they'd likely rethink.
This would be made safer if this was specified by POSIX - taking that
away once implemented seems way harder than for custom extensions.
It's likely they'd not take away the feature without an alternative
way to achieve the same effect, I think (yes, harder to maintain).
Tom suggests [7] this is not in POSIX.
- Matthias mentioned [6] similar flags on other operating systems. Could
some of those be used to implement the same resizing?
- Andres had an interesting comment about how overcommit interacts with
MAP_NORESERVE. AFAIK it means we need the flag to not break overcommit
accounting. There are also some comments from the linux-mm people [9].
- There seem to be some issues with releasing memory backing a mapping
with hugetlb [10]. With an fd (and truncating the file) this seems to
release the memory, but that's linux-specific? Then again, most of this
stuff is specific to linux anyway, so is that a problem? With this it
should work even for hugetlb ...
- It seems FreeBSD has MFD_HUGETLB [11], so maybe we could use this and
make the hugetlb stuff work just like on Linux? Unclear. Also, I
thought the mfd stuff is linux-specific ... or am I confused?
- Andres objected to any approach without pointer stability, and I agree
with that. Assuming we can figure out such a solution, of course.
- Thomas asked [13] why we need to stop all the backends, instead of
just waiting for them to acknowledge the new (smaller) NBuffers value
and then let them continue. I also don't quite see why this should
not work, and it'd limit the disruption when we have to wait for
eviction of buffers pinned by paused cursors, etc.
Now, some comments about the individual patches (some of this may be a
bit redundant with the earlier points):
v5-0001-Process-config-reload-in-AIO-workers.patch
1) Hmmm, so which other workers may need such explicit handling? Do all
other processes participate in procsignal stuff, or does anything
need explicit handling?
v5-0002-Introduce-pending-flag-for-GUC-assign-hooks.patch
No additional comments, see the points about resizing through a GUC
callback with pending flag vs. a separate utility command, monitoring
and so on.
v5-0003-Introduce-pss_barrierReceivedGeneration.patch
1) Do we actually need this? Isn't it enough to just have two barriers?
Or a barrier + condition variable, or something like that.
2) The comment talks about "coordinated way" when processing messages,
but it's not very clear to me. It should explain what is needed and
not possible with the current barrier code.
3) This very much reminds me what the online checksums patch needed to
do, and we managed to do it using plain barriers. So why does this
need this new thing? (No opinion on whether it's correct.)
v5-0004-Allow-to-use-multiple-shared-memory-mappings.patch
1) "int shmem_segment" - wouldn't it be better to have a separate enum
for this? I mean, we'll have a predefined list of segments, right?
2) typedef struct AnonymousMapping would deserve some comment
3) ANON_MAPPINGS - Probably should be MAX_ANON_MAPPINGS? But we'll know
how many we have, so why not allocate exactly the right number?
Or even just an array of structs, like in similar cases?
4) static int next_free_segment = 0;
We exactly know what segments we'll create and in which order, no? So
why do we even bother with this next_free_segment thing? Can't we
simply declare an array of AnonymousMapping elements, with all the
elements, and then just walk it and calculate the sizes/pointers?
5) I'm a bit confused about the segment/mapping difference. The patch
seems to randomly mix those, or maybe I'm just confused. I mean,
we are creating just shmem segment, and the pieces are mappings,
right? So why do we index them by "shmem_segment"?
Also, consider
CreateAnonymousSegment(AnonymousMapping *mapping)
so is that creating a segment or mapping? Or what's the difference?
Or are we creating multiple segments, and I missed that? Or are there
different "segment" concepts, or what?
6) There should probably be some sort of API wrapping the mappings, so
that the various places don't need to mess with next_free_segments
directly, etc. Perhaps PGSharedMemoryCreate() shouldn't do this, and
should just pass size to CreateAnonymousSegment(), and that finding
empty slot in Mappings, etc.? Not sure that'll work, but it's a bit
error-prone if a struct is modified from multiple places like this.
7) We should remember which segments got to use huge pages and which
did not. And we should make it optional for each segment. Although,
maybe I'm just confused about the "segment" definition - if we only
have one, that's where huge pages are applied.
If we could have multiple segments with different settings (whatever
"segment" means here), I'm not sure what we'd report when some segments
get to use huge pages and others don't. Either because we don't want
huge pages for some segments, or because we happen to run out of the
available huge pages.
8) It seems PGSharedMemoryDetach got some significant changes, but the
comment was not modified at all. I'd guess that means the comment is
perhaps stale, or maybe there's something we should mention.
9) I doubt the Assert on GetConfigOption needs to be repeated for all
segments (in CreateSharedMemoryAndSemaphores).
10) Why do we have the Mapping and Segments indexed in different ways?
I mean, Mappings seem to be filled in FIFO (just grab the next free
slot), while Segments are indexed by segment ID.
11) Actually, what's the difference between the contents of Mappings
and Segments? Isn't that the same thing, indexed in the same way?
Or could it be unified? Or are they conceptually different thing?
12) I believe we'll have a predefined list of segments, with fixed IDs,
so why not just have a MAX of those IDs as the capacity?
13) Would it be good to have some checks on shmem_segment values? That
it's valid with respect to defined segments, etc. An assert, maybe?
What about some asserts on the Mapping/Segment elements? To check
that the element is sensible, and that the arrays "match" (if we
need both).
14) Some of the lines got pretty long, e.g. in pg_get_shmem_allocations.
I suggest we define some macros to make this shorter, or something
like that.
15) I'd maybe rename ShmemSegment to PGShmemSegment, for consistency
with PGShmemHeader?
16) Is MAIN_SHMEM_SEGMENT something we want to expose in a public header
file? Seems very much like an internal thing, people should access
it only through APIs ...
v5-0005-Address-space-reservation-for-shared-memory.patch
1) Shouldn't reserved_offset and huge_pages_on really be in the segment
info? Or maybe even in mapping info? (again, maybe I'm confused
about what these structs store)
2) The CreateSharedMemoryAndSemaphores comment is rather light on what
it does, considering it now reserves space and then carves it into
segments.
3) So ReserveAnonymousMemory is what makes decisions about huge pages,
for the whole reserved space / all segments in it. That's a bit
unfortunate with respect to the desirability of some segments
benefiting from huge pages and others not. Maybe we should have two
"reserved" areas, one with huge pages, one without?
I guess we don't want too many segments, because that might make
fork() more expensive, etc. Just guessing, though. Also, how would
this work with threading?
4) Any particular reason to define max_available_memory as
GUC_UNIT_BLOCKS and not GUC_UNIT_MB? Of course, if we change this
to have "max shared buffers limit" then it'd make sense to use
blocks, but "total limit" is not in blocks.
5) The general approach seems sound to me, but I'm not expert on this.
I wonder how portable this behavior is. I mean, will it work on other
Unix systems / Windows? Is it POSIX or Linux extension?
6) It might be a good idea to have Assert procedures to check mappings
and segments (that it doesn't overflow reserved space, etc.). It
took me ages to realize I can change shared_buffers to >60% of the
limit, it'll happily oblige and then just crash with OOM when
calling mprotect().
v5-0006-Introduce-multiple-shmem-segments-for-shared-buff.patch
1) I suspect the SHMEM_RESIZE_RATIO is the wrong direction, because it
entirely ignores relationships between the parts. See the earlier
comment about this.
2) In fact, what happens if the user tries to resize to a value that is
too large for one of the segments? How would the system know before
starting the resize (and failing)?
3) It seems wrong to modify the BufferManagerShmemSize like this. It's
probably better to have a "...SegmentSize" function for individual
segments, and let BufferManagerShmemSize() to still return a sum of
all segments.
4) I think MaxAvailableMemory is the wrong abstraction, because that's
not what people specify. See earlier comment.
5) Let's say we change the shared memory size (ALTER SYSTEM), trigger
the config reload (pg_reload_conf). But then we find that we can't
actually shrink the buffers, for some unpredictable reason (e.g.
there's pinned buffers). How do we "undo" the change? We can't
really undo the ALTER SYSTEM, that's already written in the .conf
and we don't know the old value, IIRC. Is it reasonable to start
killing backends from the assign_hook or something? Seems weird.
v5-0007-Allow-to-resize-shared-memory-without-restart.patch
1) Why would AdjustShmemSize be needed? Isn't that a sign of a bug
somewhere in the resizing?
2) Isn't the pg_memory_barrier() in CoordinateShmemResize a bit weird?
Why is it needed, exactly? If it's to flush stuff for processes
consuming EmitProcSignalBarrier, isn't that too late? What if a
process consumes the barrier between the emit and the memory barrier?
3) WaitOnShmemBarrier seem a bit under-documented.
4) Is this actually adding buffers to the freelist? I see buf_init only
links the new buffers by setting freeNext, but where are the new
buffers added to the existing freelist?
5) The issue with a new backend seeing an old NBuffers value reminds me
of the "support enabling checksums online" thread, where we ran into
similar race conditions. See message [1], the part about race #2
(the other race might be relevant too, not sure). It's been a while,
but I think our conclusion in that thread was that the "best" fix
would be to change the order of steps in InitPostgres(), i.e. setup
the ProcSignal stuff first, and only then "copy" the NBuffers value.
And handle the possibility that we receive a "duplicate" barriers.
6) In fact, the online checksums thread seems like a possible source of
inspiration for some of the issues, because it needs to do similar
stuff (e.g. make sure all backends follow steps in a synchronized
way, etc.). And it didn't need new types of Barrier to do that.
7) Also, this seems like a perfect match for testing using injection
points. In fact, there's not a single test in the whole patch series.
Or a single line of .sgml docs, for that matter. It took me a while
to realize I'm supposed to change the size by ALTER SYSTEM + reload
the config.
v5-0008-Support-shrinking-shared-buffers.patch
1) Why is ShmemCtrl->evictor_pid reset in AnonymousShmemResize? Isn't
there a place starting it and waiting for it to complete? Why
shouldn't it do EvictExtraBuffers itself?
2) Isn't the change to BufferManagerShmemInit wrong? How do we know the
last buffer is still at the end of the freelist? Seems unlikely.
3) Seems a bit strange to do it from a random backend. Shouldn't it
be the responsibility of a process like checkpointer/bgwriter, or
maybe a dedicated dynamic bgworker? Can we even rely on a backend
to be available?
4) Unsolved issues with buffers pinned for a long time. Could be an
issue if the buffer is pinned indefinitely (e.g. cursor in idle
connection), and the resizing blocks some activity (new connections
or stuff like that).
5) Funny that "AI suggests" something, but doesn't the block fail to
reset nextVictimBuffer of the clocksweep? It may point to a buffer
we're removing, and it'll be invalid, no?
6) It's not clear to me in what situations this triggers (in the call
to BufferManagerShmemInit)
if (FirstBufferToInit < NBuffers) ...
v5-0009-Reinitialize-StrategyControl-after-resizing-buffe.patch
1) IMHO this should be included in the earlier resize/shrink patches,
I don't see a reason to keep it separate (assuming this is the
correct way, and the "init" is not).
2) Doesn't StrategyPurgeFreeList already do some of this for the case
of shrinking memory?
3) Not great adding a bunch of static variables to bufmgr.c. Why do we
need to make "everything" static global? Isn't it enough to make
only the "valid" flag global? The rest can stay local, no?
If everything needs to be global for some reason, could we at least
make it a struct, to group the fields, not just separate random
variables? And maybe at the top, not half-way through the file?
4) Isn't the name BgBufferSyncAdjust misleading? It's not adjusting
anything, it's just invalidating the info about past runs.
5) I don't quite understand why BufferSync needs to do the dance with
delay_shmem_resize. I mean, we certainly should not run BufferSync
from the code that resizes buffers, right? Certainly not after the
eviction, from the part that actually rebuilds shmem structs etc.
So perhaps something could trigger resize while we're running the
BufferSync()? Isn't that a bit strange? If this flag is needed, it
seems more like a band-aid for some issue in the architecture.
6) Also, why should it be fine to get into a situation where some of
the buffers might not be valid during shrinking? I mean, why is this
check (pg_atomic_read_u32(&ShmemCtrl->NSharedBuffers) != NBuffers)
needed at all? It seems better to ensure we never get into "sync" in
a way that might leave some of the buffers invalid. Seems way too
low-level to care about whether a resize is happening.
7) I don't understand the new condition for "Execute the LRU scan".
Won't this stop LRU scan even in cases when we want it to happen?
Don't we want to scan the buffers in the remaining part (after
shrinking), for example? Also, we already checked this shmem flag at
the beginning of the function - sure, it could change (if some other
process modifies it), but does that make sense? Wouldn't it cause
problems if it can change at an arbitrary point while running the
BufferSync? IMHO just another sign it may not make sense to allow
this, i.e. buffer sync should not run during the "actual" resize.
v5-0010-Additional-validation-for-buffer-in-the-ring.patch
1) So the problem is we might create a ring before shrinking shared
buffers, and then GetBufferFromRing will see bogus buffers? OK, but
we should be more careful with these checks, otherwise we'll miss
real issues when we incorrectly get an invalid buffer. Can't the
backends do this only when they for sure know we did shrink the
shared buffers? Or maybe even handle that during the barrier?
2) IMHO a sign that the "transitions" between different NBuffers
values may not be defined clearly enough, and we're allowing stuff to
happen in the "blurry" area. I think that's likely to cause bugs (it
did cause issues for the online checksums patch, I think).
[1] https://www.postgresql.org/message-id/3372a09c-d1f6-4974-ad60-eec15ee0c734%40vondra.me
[3] https://www.postgresql.org/message-id/12add41a-7625-4639-a394-a5563e349322%40eisentraut.org
[7] https://www.postgresql.org/message-id/397218.1732844567%40sss.pgh.pa.us
[9] https://lore.kernel.org/linux-mm/pr7zggtdgjqjwyrfqzusih2suofszxvlfxdptbo2smneixkp7i(at)nrmtbhemy3is/
[12] https://www.postgresql.org/message-id/94B56B9C-025A-463F-BC57-DF5B15B8E808%40anarazel.de
--
Tomas Vondra
Attachment: resize-crash.txt (text/plain, 4.5 KB)