PostgreSQL Weekly News - March 28, 2021

From: PWN via PostgreSQL Announce <announce-noreply(at)postgresql(dot)org>
To: PostgreSQL Announce <pgsql-announce(at)lists(dot)postgresql(dot)org>
Subject: PostgreSQL Weekly News - March 28, 2021
Date: 2021-03-29 09:24:38
Lists: pgsql-announce

# PostgreSQL Weekly News - March 28, 2021

Person of the week:

# PostgreSQL Product News

pspg 4.5.0, a pager designed for PostgreSQL, released.

pgAdmin4 5.1, a web- and native GUI control center for PostgreSQL, released.

# PostgreSQL Jobs for March


# PostgreSQL in the News

Planet PostgreSQL:

PostgreSQL Weekly News is brought to you this week by David Fetter

Submit news and announcements by Sunday at 3:00pm PST8PDT to david(at)fetter(dot)org(dot)

# Applied Patches

Andrew Dunstan pushed:

- Don't run recover crash_temp_files test in Windows perl. This reverts commit
677271a3a125e294b33b891669f594a2c8cb36ce. "Unbreak recovery test on Windows"
The test hangs on Windows, and attempts to remedy the problem have proved
fragile at best. So we simply disable the test on Windows perl. (Msys perl
seems perfectly happy). Discussion:

- Allow for installation-aware instances of PostgresNode. Currently instances of
PostgresNode find their Postgres executables in the PATH of the caller. This
modification allows for instances that know the installation path they are
supposed to use, and the module adjusts the environment of methods that call
Postgres executables appropriately. This facility is activated by passing the
installation path to the constructor:
    my $node = PostgresNode->get_new_node('mynode',
        installation_path => '/path/to/installation');
This makes a number of things substantially easier, including:
  . testing third party modules
  . testing different versions of postgres together
  . testing different builds of postgres together
Discussion:
Reviewed-By: Alvaro Herrera, Michael Paquier, Dagfinn Ilmari Mannsåker

Tom Lane pushed:

- Make compression.sql regression test independent of default. This test will
fail in "make installcheck" if the installation's default_toast_compression
setting is not 'pglz'. Make it robust against that situation. Dilip Kumar

- Bring configure support for LZ4 up to snuff. It's not okay to just shove the
pkg_config results right into our build flags, for a couple different reasons:
  * This fails to maintain the separation between CPPFLAGS and CFLAGS, as well
    as that between LDFLAGS and LIBS. (The CPPFLAGS angle is, I believe, the
    reason for warning messages reported when building with MacPorts' liblz4.)
  * If pkg_config emits anything other than -I/-D/-L/-l switches, it's highly
    unlikely that we want to absorb those. That'd be more likely to break the
    build than do anything helpful. (Even the -D case is questionable; but
    we're doing that for libxml2, so I kept it.)
Also, it's not okay to skip doing an AC_CHECK_LIB probe, as evidenced by recent
build failure on topminnow; that should have been caught at configure time.
Model fixes for this on configure's libxml2 support. It appears that somebody
overlooked an autoheader run, too. Discussion:

- Fix assorted silliness in ATExecSetCompression(). It's not okay to scribble
directly on a syscache entry. Nor to continue accessing said entry after
releasing it. Also get rid of not-used local variables. Per valgrind

- Remove useless configure probe for <lz4/lz4.h>. This seems to have been just
copied-and-pasted from some other header checks. But our C code is entirely
unprepared to support such a header name, so it's only wasting cycles to look
for it. If we did need to support it, some #ifdefs would be required. (A
quick trawl at finds some packages that reference
lz4/lz4.h; but they use *only* that spelling, and appear to be intending to
reference their own copy rather than a system-level installation of liblz4.
There's no evidence of freestanding installations that require this spelling.)

- Mostly-cosmetic adjustments of TOAST-related macros. The authors of bbe0a81db
hadn't quite got the idea that macros named like SOMETHING_4B_C were only
meant for internal endianness-related details in postgres.h. Choose more
legible names for macros that are intended to be used elsewhere. Rearrange
postgres.h a bit to clarify the separation between those internal macros and
ones intended for wider use. Also, avoid using the term "rawsize" for true
decompressed size; we've used "extsize" for that, because "rawsize" generally
denotes total Datum size including header. This choice seemed particularly
unfortunate in tests that were comparing one of these meanings to the other.
This patch includes a couple of not-purely-cosmetic changes: be sure that the
shifts aligning compression methods are unsigned (not critical today, but will
be when compression method 2 exists), and fix broken definition of
whose callers worked only accidentally. Discussion:

- Short-circuit slice requests that are for more than the object's size.
substring(), and perhaps other callers, are not careful to pass a slice length
that is no more than the datum's true size. Since
toast_decompress_datum_slice's children will palloc the requested slice
length, this can waste memory. Also, close study of the liblz4 documentation
suggests that it is dependent on the caller to not ask for more than the
correct amount of decompressed data; this squares with observed misbehavior
with liblz4 1.8.3. Avoid these problems by switching to the normal
full-decompression code path if the slice request is >= datum's decompressed
size. Tom Lane and Dilip Kumar Discussion:
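The decision described above can be sketched as follows. The dictionary-based datum and the decompress/decompress_slice callbacks are illustrative stand-ins for the C-level code, not PostgreSQL's actual API:

```python
# Sketch of the slice-request short-circuit. If the caller asks for at least
# the full decompressed size, fall back to the normal full-decompression path:
# the slice path would palloc the (oversized) requested length, wasting memory,
# and liblz4 misbehaves when asked for more decompressed data than exists.
def decompress_datum_slice(datum, slice_length, decompress, decompress_slice):
    """Return the first slice_length bytes of a compressed datum."""
    if slice_length >= datum["rawsize"]:
        return decompress(datum)               # full-decompression path
    return decompress_slice(datum, slice_length)
```

A caller requesting 100 bytes of a 10-byte datum would thus be routed to full decompression instead of an oversized slice request.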

- Avoid possible crash while finishing up a heap rewrite. end_heap_rewrite was
not careful to ensure that the target relation is open at the smgr level
before performing its final smgrimmedsync. In ordinary cases this is no
problem, because it would have been opened earlier during the rewrite.
However a crash can be reproduced by re-clustering an empty table with
CLOBBER_CACHE_ALWAYS enabled. Although that exact scenario does not crash in
v13, I think that's a chance result of unrelated planner changes, and the
problem is likely still reachable with other test cases. The true proximate
cause of this failure is commit c6b92041d, which replaced a call to heap_sync
(which was careful about opening smgr) with a direct call to smgrimmedsync.
Hence, back-patch to v13. Amul Sul, per report from Neha Sharma; cosmetic
changes and test case by me. Discussion:

- Fix psql's \connect command some more. Jasen Betts reported yet another
unintended side effect of commit 85c54287a: reconnecting with "\c
service=whatever" did not have the expected results. The reason is that
starting from the output of PQconndefaults() effectively allows environment
variables (such as PGPORT) to override entries in the service file, whereas
the normal priority is the other way around. Not using PQconndefaults at all
would require yet a third main code path in do_connect's parameter setup, so I
don't really want to fix it that way. But we can have the logic effectively
ignore all the default values for just a couple more lines of code. This
patch doesn't change the behavior for "\c -reuse-previous=on
service=whatever". That remains significantly different from before
85c54287a, because many more parameters will be re-used, and thus not be
possible for service entries to replace. But I think this is (mostly?)
intentional. In any case, since libpq does not report where it got parameter
values from, it's hard to do differently. Per bug #16936 from Jasen Betts.
As with the previous patches, back-patch to all supported branches. (9.5 is
unfortunately now out of support, so this won't get fixed there.) Discussion:

Peter Geoghegan pushed:

- Recycle nbtree pages deleted during same VACUUM. Maintain a simple array of
metadata about pages that were deleted during nbtree VACUUM's current
btvacuumscan() call. Use this metadata at the end of btvacuumscan() to
attempt to place newly deleted pages in the FSM without further delay. It
might not yet be safe to place any of the pages in the FSM by then (they may
not be deemed recyclable), but we have little to lose and plenty to gain by
trying. In practice there is a very good chance that this will work out when
vacuuming larger indexes, where scanning the index naturally takes quite a
while. This commit doesn't change the page recycling invariants; it merely
improves the efficiency of page recycling within the confines of the existing
design. Recycle safety is a part of nbtree's implementation of what Lanin &
Shasha call "the drain technique". The design happens to use transaction IDs
(they're stored in deleted pages), but that in itself doesn't align the cutoff
for recycle safety to any of the XID-based cutoffs used by VACUUM (e.g.,
OldestXmin). All that matters is whether or not _other_ backends might be
able to observe various inconsistencies in the tree structure (that they
cannot just detect and recover from by moving right). Recycle safety is
purely a question of maintaining the consistency (or the apparent consistency)
of a physical data structure. Note that running a simple serial test case
involving a large range DELETE followed by a VACUUM VERBOSE will probably show
that any newly deleted nbtree pages are not yet reusable/recyclable. This is
expected in the absence of even one concurrent XID assignment. It is an old
implementation restriction. In practice it's unlikely to be the thing that
makes recycling remain unsafe, at least with larger indexes, where recycling
newly deleted pages during the same VACUUM actually matters. An important
high-level goal of this commit (as well as related recent commits e5d8a999 and
9f3665fb) is to make expensive deferred cleanup operations in index AMs rare
in general. If index vacuuming frequently depends on the next VACUUM
operation finishing off work that the current operation started, then the
general behavior of index vacuuming is hard to predict. This is relevant to
ongoing work that adds a vacuumlazy.c mechanism to skip index vacuuming in
certain cases. Anything that makes the real world behavior of index vacuuming
simpler and more linear will also make top-down modeling in vacuumlazy.c more
robust. Author: Peter Geoghegan <pg(at)bowt(dot)ie> Reviewed-By: Masahiko Sawada
<sawada(dot)mshk(at)gmail(dot)com> Discussion:

- nbtree VACUUM: Cope with buggy opclasses. Teach nbtree VACUUM to press on with
vacuuming in the event of a page deletion attempt that fails to "re-find" a
downlink for its child/target page. There is no good reason to treat this as
an irrecoverable error. But there is a good reason not to: pressing on at
this point removes any question of VACUUM not making progress solely due to
misbehavior from user-defined operator class code. Discussion:

Michaël Paquier pushed:

- Fix timeline assignment in checkpoints with 2PC transactions. Any transactions
found as still prepared by a checkpoint have their state data read from the
WAL records generated by PREPARE TRANSACTION before being moved into their new
location within pg_twophase/. While reading such records, the WAL reader uses
the callback read_local_xlog_page() to read a page, that is shared across
various parts of the system. This callback, since 1148e22a, has introduced an
update of ThisTimeLineID when reading a record while in recovery, which is
potentially helpful in the context of cascading WAL senders. This update of
ThisTimeLineID interacts badly with the checkpointer if a promotion happens
while some 2PC data is read from its record, as, by changing ThisTimeLineID,
any follow-up WAL records would be written to a timeline older than the
promoted one. This results in consistency issues: for instance, a subsequent
server restart could fail to find a valid checkpoint record, resulting in a
PANIC. This commit changes the code reading the
2PC data to reset the timeline once the 2PC record has been read, to prevent
messing up the static state of the checkpointer. It would be tempting to
do the same thing directly in read_local_xlog_page(). However, based on the
discussion that has led to 1148e22a, users may rely on the updates of
ThisTimeLineID when a WAL record page is read in recovery, so changing this
callback could break some cases that are working currently. A TAP test
reproducing the issue is added, relying on a PITR to precisely trigger a
promotion with a prepared transaction still tracked. Per discussion with
Heikki Linnakangas, Kyotaro Horiguchi, Fujii Masao and myself. Author:
Soumyadeep Chakraborty, Jimmy Yih, Kevin Yeap Discussion:
Backpatch-through: 10

- Simplify TAP tests of kerberos with expected log file contents. The TAP tests
of kerberos rely on the logs generated by the backend to check various
connection scenarios. In order to make sure that a given test does not
overlap with the log contents generated by a previous test, the test suite
relied on logic involving the logging collector and a rotation of the log
files, together with a wait phase, to ensure the uniqueness of the generated
log. Parsing the
log contents for expected patterns is a problem that has been solved in a
simpler way by PostgresNode::issues_sql_like() where the log file is truncated
before checking for the contents generated, with the backend sending its
output to a log file given by pg_ctl instead. This commit switches the
kerberos test suite to use such a method, removing any wait phase and
simplifying the whole logic, resulting in less code. If a failure happens in
the tests, the contents of the logs are still shown to the user at the moment
of the failure thanks to like(), so this has no impact on debugging
capabilities. I bumped into this issue while reviewing a different patch
set aiming at extending the kerberos test suite to check for multiple log
patterns instead of just one. Author: Michael Paquier Reviewed-by: Stephen
Frost, Bharath Rupireddy Discussion:
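The truncate-then-match pattern the test suite switches to can be sketched like this. The helper name is illustrative; the real logic lives in PostgresNode::issues_sql_like(), written in Perl:

```python
# Sketch of the truncate-then-match pattern: truncate the log file, run the
# action, then look for the expected pattern. Truncating first guarantees that
# whatever we match was produced by this action, with no wait/rotation dance
# needed to separate it from earlier tests' output.
def run_and_check_log(log_path, action, expected):
    """Return True if running action() produces expected in the log."""
    with open(log_path, "w"):        # truncate the log file
        pass
    action()
    with open(log_path) as f:
        return expected in f.read()
```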

- Fix new TAP test for 2PC transactions and PITRs on Windows. The test added by
595b9cb forgot that on Windows it is necessary to set up pg_hba.conf (see
PostgresNode::set_replication_conf) with a specific entry or base backups
fail. Any node that needs to support replication just has to pass down
allows_streaming at initialization. This updates the test to do so, and
simplifies things a bit while at it. Per buildfarm member fairywren. Any
Windows hosts
running this test would have failed, and I have reproduced the problem as
well. Backpatch-through: 10

- Fix concurrency issues with WAL segment recycling on Windows. This commit is
mostly a revert of aaa3aed, that switched the routine doing the internal
renaming of recycled WAL segments to use on Windows a combination of
CreateHardLinkA() plus unlink() instead of rename(). As reported by several
users of Postgres 13, this is causing concurrency issues when manipulating WAL
segments, mostly in the shape of the following error: LOG: could not rename
file "pg_wal/000000XX000000YY000000ZZ": Permission denied This moves back to
a logic where a single rename() (well, pgrename() for Windows) is used. This
issue has proved to be hard to hit when I tested it, facing it only once with
an archive_command that was not able to do its work, so it is
environment-sensitive. The reporters of this issue have been able to confirm
that the situation improved once we switched back to a single rename(). In
order to check things, I have provided to the reporters a patched build based
on 13.2 with aaa3aed reverted, to test if the error goes away, and an
unpatched build of 13.2 to test if the error still showed up (just to make
sure that I did not mess up my build process). Extra thanks to Fujii Masao
for pointing out what looked like the culprit commit, and to all the reporters
for taking the time to test what I have sent them. Reported-by: Andrus, Guy
Burgess, Yaroslav Pashinsky, Thomas Trenz Reviewed-by: Tom Lane, Andres Freund
Backpatch-through: 13

- Add per-index stats information in verbose logs of autovacuum. Once a
relation's autovacuum is completed, the logs include more information about
this relation's state if the threshold of log_autovacuum_min_duration (or its
relation option) is reached, including, for example, statistics of the VACUUM
operation for the relation and of WAL and system usage. This commit adds
more information about the statistics of the relation's indexes, with one line
of logs generated for each index. The index stats were already calculated,
but not yet printed in the context of autovacuum. While at it, some
refactoring is done to keep track of the index statistics directly within
LVRelStats, simplifying some routines related to parallel VACUUMs. Author:
Masahiko Sawada Reviewed-by: Michael Paquier, Euler Taveira Discussion:

- Reword slightly logs generated for index stats in autovacuum. Using "remain"
is confusing, as it implies that the index file can shrink. Instead, use "in
total". Per discussion with Peter Geoghegan. Discussion:

- Sanitize the term "combo CID" in code comments. Combo CIDs were referred to
in code comments using different terms across various places in the code, so
unify the term used with what is currently in use in some of the READMEs.
Author: "Hou, Zhijie" Discussion:

Noah Misch pushed:

- Make a test endure log_error_verbosity=verbose. Back-patch to v13, which
introduced the test code in question.

- Merge similar algorithms into roles_is_member_of(). The next commit would have
complicated two or three algorithms, so take this opportunity to consolidate.
No functional changes. Reviewed by John Naylor. Discussion:

- Add "pg_database_owner" default role. Membership consists, implicitly, of the
current database owner. Expect use in template databases. Once
pg_database_owner has rights within a template, each owner of a database
instantiated from that template will exercise those rights. Reviewed by John
Naylor. Discussion:

Fujii Masao pushed:

- pgbench: Improve error-handling in \sleep command. This commit improves
pgbench \sleep command so that it handles the following three cases more
properly. (1) When only one argument was specified in \sleep command and
it's not a number, pgbench previously reported a confusing error message
like "unrecognized time unit, must be us, ms or s". This commit fixes this
so that a more appropriate error message like "invalid sleep time, must be an
integer" is reported. (2) When two arguments were specified in \sleep command
and the first argument was not a number, previously pgbench treated
that argument as the sleep time 0. No error was reported in this case.
This commit fixes this so that an error is thrown in this case. (3) When
a variable was specified as the first argument in \sleep command and the
variable stored non-digit value, previously pgbench treated that argument
as the sleep time 0. No error was reported in this case. This commit fixes
this so that an error is thrown in this case. Author: Kota Miyake
Reviewed-by: Hayato Kuroda, Alvaro Herrera, Fujii Masao Discussion:
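The three cases above boil down to rejecting any non-numeric sleep time instead of silently treating it as 0. A sketch in Python (pgbench itself is C; the helper name and exact error strings are illustrative):

```python
# Sketch of the stricter \sleep argument checking: a non-numeric time,
# whether a literal or a variable's value, now raises an error rather than
# being treated as a sleep of 0.
def parse_sleep_args(arg, unit=None):
    """Validate \\sleep arguments; return (time, unit) or raise ValueError."""
    if not arg.lstrip("+-").isdigit():
        raise ValueError("invalid sleep time, must be an integer")
    if unit is not None and unit not in ("us", "ms", "s"):
        raise ValueError("unrecognized time unit, must be us, ms or s")
    return int(arg), unit or "s"
```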

- pg_waldump: Fix bug in per-record statistics. pg_waldump --stats=record
identifies a record by a combination of the RmgrId and the four bits of the
xl_info field of the record. But XACT records use the first bit of those four
bits for an optional flag variable, and the following three bits for the
opcode to identify a record. So previously the same type of XACT record could
have different four bits (three bits are the same but the first one bit is
different), and which could cause pg_waldump --stats=record to show two lines
of per-record statistics for the same XACT record. This is a bug. This commit
changes pg_waldump --stats=record so that it processes only XACT record
differently, i.e., filters the opcode out of xl_info and uses a combination of
the RmgrId and those three bits as the identifier of a record, only for XACT
record. For other records, the four bits of the xl_info field are still used.
Back-patch to all supported branches. Author: Kyotaro Horiguchi Reviewed-by:
Shinya Kato, Fujii Masao Discussion:
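The fixed identifier logic can be sketched as follows. The mask values mirror PostgreSQL's XLOG_XACT_OPMASK (0x70) and the four rmgr-defined bits of xl_info (0xF0), but treat the sketch as illustrative rather than the actual pg_waldump code:

```python
XLOG_XACT_OPMASK = 0x70    # the three opcode bits of an XACT record
XLR_RMGR_INFO_MASK = 0xF0  # all four rmgr-defined bits of xl_info

def record_id(rmgr, xl_info):
    """Identify a record type for per-record statistics.

    For XACT records, mask out the optional-flag bit so that the same opcode
    always maps to one identifier; for other rmgrs keep all four bits.
    """
    if rmgr == "Transaction":
        return (rmgr, xl_info & XLOG_XACT_OPMASK)
    return (rmgr, xl_info & XLR_RMGR_INFO_MASK)
```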

- Change the type of WalReceiverWaitStart wait event from Client to IPC.
Previously the type of this wait event was Client. But while this wait event
is being reported, the walreceiver process is waiting for the startup process
to set initial data for streaming replication. It's not waiting for any activity
on a socket connected to a user application or walsender. So this commit
changes the type for WalReceiverWaitStart wait event to IPC. Author: Fujii
Masao Reviewed-by: Kyotaro Horiguchi Discussion:

- Log when GetNewOidWithIndex() fails to find unused OID many times.
GetNewOidWithIndex() generates a new OID one by one until it finds one not in
the relation. If there are very long runs of consecutive existing OIDs,
GetNewOidWithIndex() needs to iterate many times in the loop to find an unused
OID. Since a TOAST table can have a large number of entries and there can be
such long runs of OIDs, there are cases where it takes many iterations to find
a new OID not in the TOAST table. Furthermore, if all (i.e., 2^32) OIDs are
already used, GetNewOidWithIndex() enters something like a busy loop and
repeats the iterations until at least one OID is marked as unused. There are
some reported troubles caused by a large number of iterations in
GetNewOidWithIndex(). For example, when inserting a billion records into a
table, all the backends doing that insertion hung with 100% CPU usage at some
point. Previously there was no easy way to detect that GetNewOidWithIndex()
had failed to find an unused OID many times. So, for example, a full gdb
backtrace of the hung backends had to be taken in order to investigate that
trouble. This is inconvenient and may not be possible in some production
environments. To provide an easy way to detect this, this commit makes
GetNewOidWithIndex() log when it has iterated more than
GETNEWOID_LOG_THRESHOLD times without finding an OID unused in the relation.
It also makes it repeat logging at exponentially increasing intervals until
the interval reaches GETNEWOID_LOG_MAX_INTERVAL, after which it logs every
GETNEWOID_LOG_MAX_INTERVAL iterations until an unused OID is found. Those
macro variables exist so as not to fill up the server log with similar
messages. In the discussion on pgsql-hackers, there was another idea: report
the large number of iterations in GetNewOidWithIndex() via a wait event. But
GetNewOidWithIndex() traverses indexes to find an unused OID, which does I/O,
acquires locks, etc., and those operations would overwrite the wait event and
reset it to nothing once done. So that idea doesn't work well, and we didn't
adopt it.
Author: Tomohiro Hiramitsu Reviewed-by: Tatsuhito Kasahara, Kyotaro Horiguchi,
Tom Lane, Fujii Masao Discussion:
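The exponentially backed-off logging can be sketched as follows. The threshold and cap values here are small illustrative numbers, not the actual values of GETNEWOID_LOG_THRESHOLD and GETNEWOID_LOG_MAX_INTERVAL:

```python
# Sketch of exponentially backed-off logging in the spirit of the
# GetNewOidWithIndex() change: log at a threshold, double the interval until
# it reaches a cap, then log at the fixed cap interval thereafter.
LOG_THRESHOLD = 8       # first log after this many iterations (illustrative)
LOG_MAX_INTERVAL = 64   # cap on the logging interval (illustrative)

def iterations_that_log(total_iterations):
    """Return the iteration counts at which a log message would be emitted."""
    logged = []
    next_log = LOG_THRESHOLD
    for it in range(1, total_iterations + 1):
        if it == next_log:
            logged.append(it)
            if next_log * 2 <= LOG_MAX_INTERVAL:
                next_log *= 2           # exponential phase
            else:
                next_log += LOG_MAX_INTERVAL  # fixed-interval phase
    return logged
```

With these values, a run of 200 fruitless iterations logs at 8, 16, 32, 64, then every 64 iterations, keeping the log readable while still surfacing the problem.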

- Rename wait event WalrcvExit to WalReceiverExit. Commit de829ddf23 added wait
event WalrcvExit. But its name is not consistent with other wait events like
WalReceiverMain or WalReceiverWaitStart, etc. So this commit renames
WalrcvExit to WalReceiverExit. Author: Fujii Masao Reviewed-by: Thomas Munro

- Improve connection denied error message during recovery. Previously when an
archive recovery or a standby was starting and reached the consistent recovery
state but hot_standby was configured to off, the error message when a client
connected was "the database system is starting up", which was needlessly
confusing and not really all that accurate either. This commit improves the
connection denied error message during recovery, as follows, so that the users
immediately know that their servers are configured to deny those connections.
  - If hot_standby is disabled, the error message "the database system is not
    accepting connections" and the detail message "Hot standby mode is
    disabled." are output when clients connect while an archive recovery or a
    standby is running.
  - If hot_standby is enabled, the error message "the database system is not
    yet accepting connections" and the detail message "Consistent recovery
    state has not been yet reached." are output when clients connect until the
    consistent recovery state is reached and postmaster starts accepting read
    only connections.
This commit doesn't change
the connection denied error message of "the database system is starting up"
during normal server startup and crash recovery. Because it's still suitable
for those situations. Author: James Coleman Reviewed-by: Alvaro Herrera,
Andres Freund, David Zhang, Tom Lane, Fujii Masao Discussion:

- Fix bug in WAL replay of COMMIT_TS_SETTS record. Previously the WAL replay of
COMMIT_TS_SETTS record called TransactionTreeSetCommitTsData() with the
argument write_xlog=true, which generated and wrote new COMMIT_TS_SETTS
record. This is not acceptable because it happens during recovery. This
commit fixes the WAL replay of COMMIT_TS_SETTS record so that it calls
TransactionTreeSetCommitTsData() with write_xlog=false and doesn't generate
new WAL during recovery. Back-patch to all supported branches. Reported-by:
lx zou <zoulx1982(at)163(dot)com> Author: Fujii Masao Reviewed-by: Alvaro Herrera

Robert Haas pushed:

- More code cleanup for configurable TOAST compression. Remove unused macro. Fix
confusion about whether a TOAST compression method is identified by an OID or
a char. Justin Pryzby Discussion:

- docs: Fix omissions related to configurable TOAST compression. Previously, the
default_toast_compression GUC was not documented, and neither was pg_dump's
new --no-toast-compression option. Justin Pryzby and Robert Haas Discussion:

- Error on invalid TOAST compression in CREATE or ALTER TABLE. The previous
coding treated an invalid compression method name as equivalent to the
default, which is certainly not right. Justin Pryzby Discussion:

- Improve pg_amcheck's TAP test. Disable autovacuum, because we
don't want it to run against intentionally corrupted tables. Also, before
corrupting the tables, run pg_amcheck and ensure that it passes. Otherwise, if
something unexpected happens when we check the corrupted tables, it's not so
clear whether it would have also happened before we corrupted them. Mark
Dilger Discussion:

- Tidy up more loose ends related to configurable TOAST compression. Change the
default_toast_compression GUC to be an enum rather than a string. Earlier,
uncommitted versions of the patch supported using CREATE ACCESS METHOD to add
new compression methods to a running system, but that idea was dropped before
commit. So, we can simplify the GUC handling as well, which has the nice side
effect of improving the error messages. While updating the documentation to
reflect the new GUC type, also move it back to the right place in the list. I
moved this while revising what became commit
24f0e395ac5892cd12e8914646fe921fac5ba23d, but apparently the intended ordering
is "alphabetical" rather than "whatever Robert thinks looks nice." Rejigger
things to avoid having access/toast_compression.h depend on utils/guc.h, so
that we don't end up with every file that includes it also depending on
something largely unrelated. Move a few inline functions back into the C
source file partly to help reduce dependencies and partly just to avoid
clutter. A few very minor cosmetic fixes. Original patch by Justin Pryzby,
but very heavily edited by me, and reverse reviewed by him and also reviewed
by Tom Lane. Discussion:

- Fix interaction of TOAST compression with expression indexes. Before, trying
to compress a value for insertion into an expression index would crash. Dilip
Kumar, with some editing by me. Report by Jaime Casanova. Discussion:

Tomáš Vondra pushed:

- Move bsearch_arg to src/port. Until now the bsearch_arg function was used only
in extended statistics code, so it was defined in that code. But we already
have qsort_arg in src/port, so let's move it next to it.
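The point of bsearch_arg (like qsort_arg) is that the comparator receives an extra caller-supplied context argument, avoiding globals. A Python rendering of the idea; the real function is C, so names and shapes here are illustrative:

```python
# bsearch with a caller-supplied context argument passed to the comparator,
# in the spirit of src/port's bsearch_arg/qsort_arg.
def bsearch_arg(key, arr, cmp, arg):
    """Return an element of sorted arr comparing equal to key, else None."""
    lo, hi = 0, len(arr)
    while lo < hi:
        mid = (lo + hi) // 2
        c = cmp(key, arr[mid], arg)
        if c == 0:
            return arr[mid]
        if c < 0:
            hi = mid
        else:
            lo = mid + 1
    return None

# Example comparator: the context argument names which field to compare on.
def cmp_field(key, elem, field):
    b = elem[field]
    return (key > b) - (key < b)
```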

- Pass all scan keys to BRIN consistent function at once. This commit changes
how we pass scan keys to BRIN consistent function. Instead of passing them one
by one, we now pass all scan keys for a given attribute at once. That makes
the consistent function a bit more complex, as it has to loop through the
keys, but it does allow more elaborate opclasses that can use multiple keys to
eliminate ranges much more effectively. The existing BRIN opclasses (minmax,
inclusion) don't really benefit from this change. The primary purpose is to
allow future opclasses to benefit from seeing all keys at once. This does
change the BRIN API, because the signature of the consistent function changes
(a new parameter with number of scan keys). So this breaks existing opclasses,
and will require supporting two variants of the code for different PostgreSQL
versions. We've considered supporting two variants of the consistent function,
but we've decided not to do that. Firstly, there's another patch that moves
handling of NULL values from the opclass, which means the opclasses need to be
updated anyway. Secondly, we're not aware of any out-of-core BRIN opclasses,
so it does not seem worth the extra complexity. Bump catversion, because of
pg_proc changes. Author: Tomas Vondra <tomas(dot)vondra(at)postgresql(dot)org>
Reviewed-by: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org> Reviewed-by: Mark Dilger
<hornschnorter(at)gmail(dot)com> Reviewed-by: Alexander Korotkov
<aekorotkov(at)gmail(dot)com> Reviewed-by: John Naylor <john(dot)naylor(at)enterprisedb(dot)com>
Reviewed-by: Nikita Glukhov <n(dot)gluhov(at)postgrespro(dot)ru> Discussion:

- Move IS [NOT] NULL handling from BRIN support functions. The handling of IS
[NOT] NULL clauses is independent of an opclass, and most of the code was
exactly the same in both minmax and inclusion. So instead move the code from
support procedures to the AM. This simplifies the code - especially the
support procedures - quite a bit, as they don't need to care about NULL values
and flags at all. It also means the IS [NOT] NULL clauses can be evaluated
without invoking the support procedure. Author: Tomas Vondra
<tomas(dot)vondra(at)postgresql(dot)org> Author: Nikita Glukhov <n(dot)gluhov(at)postgrespro(dot)ru>
Reviewed-by: Nikita Glukhov <n(dot)gluhov(at)postgrespro(dot)ru> Reviewed-by: Mark Dilger
<hornschnorter(at)gmail(dot)com> Reviewed-by: Alexander Korotkov
<aekorotkov(at)gmail(dot)com> Reviewed-by: Masahiko Sawada
<masahiko(dot)sawada(at)enterprisedb(dot)com> Reviewed-by: John Naylor
<john(dot)naylor(at)enterprisedb(dot)com> Discussion:

- Optimize allocations in bringetbitmap. The bringetbitmap function allocates
memory for various purposes, which may be quite expensive, depending on the
number of scan keys. Instead of allocating them separately, allocate one big
chunk of memory and carve it into smaller pieces as needed - all the pieces
have the same lifespan, and it saves quite a bit of CPU and memory overhead.
Author: Tomas Vondra <tomas(dot)vondra(at)postgresql(dot)org> Reviewed-by: Alvaro Herrera
<alvherre(at)alvh(dot)no-ip(dot)org> Reviewed-by: Mark Dilger <hornschnorter(at)gmail(dot)com>
Reviewed-by: Alexander Korotkov <aekorotkov(at)gmail(dot)com> Reviewed-by: Masahiko
Sawada <masahiko(dot)sawada(at)enterprisedb(dot)com> Reviewed-by: John Naylor
<john(dot)naylor(at)enterprisedb(dot)com> Discussion:

- Use correct spelling of statistics kind. A couple error messages and comments
used 'statistic kind', not the correct 'statistics kind'. Fix and backpatch
all the way back to 10, where extended statistics were introduced.
Backpatch-through: 10

- Convert Typ from array to list in bootstrap. It's a bit easier and more
convenient to free and reload a List, compared to a plain array. This will be
helpful when allowing catalogs to contain composite types. Author: Justin
Pryzby Reviewed-by: Dean Rasheed, Tomas Vondra Discussion:

- Allow composite types in catalog bootstrap. When resolving types during
catalog bootstrap, try to reload the pg_type contents if a type is not found.
That allows catalogs to contain composite types, e.g. row types for other
catalogs. Author: Justin Pryzby Reviewed-by: Dean Rasheed, Tomas Vondra

- Remove unnecessary pg_amproc BRIN minmax entries. The BRIN minmax opclasses
included amproc entries with mismatching left and right types, but those
happen to be unnecessary. The opclasses only need cross-type operators, not
cross-type support procedures. Discovered when trying to define equivalent
BRIN operator families in an extension. Catversion bump, because of pg_amproc
changes. Author: Tomas Vondra Reviewed-by: Alvaro Herrera Discussion:

- Support the old signature of BRIN consistent function. Commit a1c649d889
changed the signature of the BRIN consistent function by adding a new required
parameter. Treating the parameter as optional, which would make the change
backwards incompatible, was rejected with the justification that there are
few out-of-core extensions, so it's not worth making the code more complex,
and it's better to deal with that in the extension. But after
further thought, that would be rather problematic, because pg_upgrade simply
dumps catalog contents and the same version of an extension needs to work on
both PostgreSQL versions. Supporting both variants of the consistent function
(with 3 or 4 arguments) makes that possible. The signature is not the only
thing that changed, as commit 72ccf55cb9 moved handling of IS [NOT] NULL keys
from the support procedures. But this change is backward compatible - handling
the keys in the extension is unnecessary, but harmless. The consistent function
will do a bit of unnecessary work, but it should be very cheap. This also
undoes most of the changes to the existing opclasses (minmax and inclusion),
making them use the old signature again. This should make backpatching
simpler. Catversion bump, because of changes in pg_amproc. Author: Tomas
Vondra <tomas(dot)vondra(at)postgresql(dot)org> Author: Nikita Glukhov
<n(dot)gluhov(at)postgrespro(dot)ru> Reviewed-by: Mark Dilger <hornschnorter(at)gmail(dot)com>
Reviewed-by: Alexander Korotkov <aekorotkov(at)gmail(dot)com> Reviewed-by: Masahiko
Sawada <masahiko(dot)sawada(at)enterprisedb(dot)com> Reviewed-by: John Naylor
<john(dot)naylor(at)enterprisedb(dot)com> Discussion:

- BRIN bloom indexes. Adds a BRIN opclass using a Bloom filter to summarize the
range. Indexes using the new opclasses allow only equality queries (similar to
hash indexes), but that works fine for data like UUID, MAC addresses etc. for
which range queries are not very common. This also means the indexes work for
data that is not well correlated to physical location within the table, or
perhaps even entirely random (which is a common issue with existing BRIN
minmax opclasses). It's possible to specify opclass parameters with the usual
Bloom filter parameters, i.e. the desired false-positive rate and the expected
number of distinct values per page range. CREATE TABLE t (a int); CREATE
INDEX ON t USING brin (a int4_bloom_ops(false_positive_rate = 0.05,
n_distinct_per_range = 100)); The opclasses do not operate on the indexed
values directly, but compute a 32-bit hash first, and the Bloom filter is
built on the hash value. Collisions should not be a huge issue though, as the
number of distinct values in a page range is usually fairly small. Bump
catversion, due to various catalog changes. Author: Tomas Vondra
<tomas(dot)vondra(at)postgresql(dot)org> Reviewed-by: Alvaro Herrera
<alvherre(at)alvh(dot)no-ip(dot)org> Reviewed-by: Alexander Korotkov
<aekorotkov(at)gmail(dot)com> Reviewed-by: Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>
Reviewed-by: Nico Williams <nico(at)cryptonector(dot)com> Reviewed-by: John Naylor
<john(dot)naylor(at)enterprisedb(dot)com> Discussion:

- BRIN minmax-multi indexes. Adds BRIN opclasses similar to the existing minmax,
except that instead of summarizing the page range into a single [min,max]
range, the summary consists of multiple ranges and/or points, allowing gaps.
This allows more efficient handling of data with poor correlation to physical
location within the table and/or outlier values, for which the regular minmax
opclasses tend to work poorly. It's possible to specify the number of values
kept for each page range, either as a single point or an interval boundary,
e.g. int4_minmax_multi_ops(values_per_range=16). When building the summary, the
values are combined into intervals with the goal to minimize the "covering"
(sum of interval lengths), using a support procedure computing distance
between two values. Bump catversion, due to various catalog changes. Author:
Tomas Vondra <tomas(dot)vondra(at)postgresql(dot)org> Reviewed-by: Alvaro Herrera
<alvherre(at)alvh(dot)no-ip(dot)org> Reviewed-by: Alexander Korotkov
<aekorotkov(at)gmail(dot)com> Reviewed-by: Sokolov Yura <y(dot)sokolov(at)postgrespro(dot)ru>
Reviewed-by: John Naylor <john(dot)naylor(at)enterprisedb(dot)com> Discussion:
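
The usage pattern mirrors the other BRIN opclasses; a minimal sketch using the
values_per_range opclass parameter named above (table and column names are
illustrative):

```sql
-- Summaries keep at most 16 values (points or interval bounds) per page range:
CREATE TABLE t (a int);
CREATE INDEX ON t USING brin (a int4_minmax_multi_ops(values_per_range = 16));
```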

- Fix alignment in BRIN minmax-multi deserialization. The deserialization failed
to ensure correct alignment, as it assumed it can simply point into the
serialized value. The serialization however ignores alignment and copies just
the significant bytes in order to make the result as small as possible. This
caused failures on systems that are sensitive to misaligned addresses, like
sparc, or with address sanitizer enabled. Fixed by copying the serialized
data to ensure proper alignment. While at it, fix an issue with serialization
on big endian machines, using the same store_att_byval/fetch_att trick as
extended statistics. Discussion:

- Fix ndistinct estimates with system attributes. When estimating the number of
groups using extended statistics, the code was discarding information about
system attributes. This led to a strange situation where `SELECT 1 FROM t
GROUP BY ctid;` could have produced a higher estimate (equal to
pg_class.reltuples) than `SELECT 1 FROM t GROUP BY a, b, ctid;` with
extended statistics on (a,b). Fixed by retaining information about the system
attribute. Backpatch all the way to 10, where extended statistics were
introduced. Author: Tomas Vondra Backpatch-through: 10

- Reduce duration of stats_ext regression tests. The regression tests of
extended statistics were taking a fair amount of time, due to using fairly
large data sets with a couple thousand rows. So far this was fine, but with
tests for statistics on expressions the duration would get a bit excessive.
So reduce the size of some of the tests that will be used to test expressions,
to keep the duration under control. Done in a separate commit before adding
the statistics on expressions, to make it clear which estimates are expected
to change. Author: Tomas Vondra Discussion:

- Extended statistics on expressions. Allow defining extended statistics on
expressions, not just on simple column references. With this commit,
expressions are supported by all existing extended statistics kinds, improving
the same types of estimates. A simple example may look like this: CREATE
TABLE t (a int); CREATE STATISTICS s ON mod(a,10), mod(a,20) FROM t;
ANALYZE t; The collected statistics are useful e.g. to estimate queries with
those expressions in WHERE or GROUP BY clauses: `SELECT * FROM t WHERE
mod(a,10) = 0 AND mod(a,20) = 0;` or `SELECT 1 FROM t GROUP BY mod(a,10),
mod(a,20);` This introduces new internal statistics kind 'e' (expressions)
which is built automatically when the statistics object definition includes
any expressions. This represents single-expression statistics, as if there was
an expression index (but without the index maintenance overhead). The
statistics are stored in pg_statistic_ext_data as an array of composite types,
which is possible thanks to 79f6a942bd. CREATE STATISTICS allows building
statistics on a single expression, in which case it's not possible to specify
statistics kinds. A new system view pg_stats_ext_exprs can be used to display
expression statistics, similarly to the pg_stats and pg_stats_ext views.
ALTER TABLE ... ALTER COLUMN ... TYPE now treats extended statistics the same
way it treats indexes, i.e. it drops and recreates them.
This means all statistics are reset, and we no longer try to preserve at least
the functional dependencies. This should not be a major issue in practice, as
the functional dependencies actually rely on per-column statistics, which were
always reset anyway. Author: Tomas Vondra Reviewed-by: Justin Pryzby, Dean
Rasheed, Zhihong Yu Discussion:
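
Continuing the example above, the new view can be used to inspect the
per-expression statistics once they are built; a sketch (assuming the
pg_stats_ext_exprs columns follow the pg_stats naming):

```sql
CREATE TABLE t (a int);
CREATE STATISTICS s ON mod(a,10), mod(a,20) FROM t;
INSERT INTO t SELECT i FROM generate_series(1, 1000) i;
ANALYZE t;

-- one row per expression in the statistics object
SELECT statistics_name, expr, n_distinct
FROM pg_stats_ext_exprs
WHERE tablename = 't';
```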

- Stabilize stats_ext test with other collations. The tests used string
concatenation to test statistics on expressions, but that made the tests
locale-dependent, e.g. because the ordering of '11' and '1X' depends on the
collation. This affected both the estimated and actual row counts, breaking
some of the tests. Fixed by replacing the string concatenation with an upper()
function call, so that the text values contain only digits. Discussion:

Bruce Momjian pushed:

- Add macro RelationIsPermanent() to report relation permanence. Previously, to
check relation permanence, the Relation's Form_pg_class structure member
relpersistence was compared to the value RELPERSISTENCE_PERMANENT ("p"). This
commit adds the macro RelationIsPermanent() and uses it in appropriate places
to simplify the code. This matches other `RelationIs*` macros. This macro will
be used in more places in future cluster file encryption patches. Discussion:

Amit Kapila pushed:

- Fix dangling pointer reference in stream_cleanup_files. We can't access the
entry after it is removed from dynahash. Author: Peter Smith Discussion:

- Revert "Enable parallel SELECT for "INSERT INTO ... SELECT ...".". To allow
inserts in parallel-mode this feature has to ensure that all the constraints,
triggers, etc. are parallel-safe for the partition hierarchy which is costly
and we need to find a better way to do that. Additionally, we could have used
existing cached information in some cases like indexes, domains, etc. to
determine the parallel-safety. List of commits reverted, in reverse
chronological order: ed62d3737c Doc: Update description for parallel insert
reloption. c8f78b6161 Add a new GUC and a reloption to enable inserts in
parallel-mode. c5be48f092 Improve FK trigger parallel-safety check added by
05c8482f7f. e2cda3c20a Fix use of relcache TriggerDesc field introduced by
commit 05c8482f7f. e4e87a32cc Fix valgrind issue in commit 05c8482f7f.
05c8482f7f Enable parallel SELECT for "INSERT INTO ... SELECT ...".

Peter Eisentraut pushed:

- Add bit_count SQL function. This function for bit and bytea counts the set
bits in the bit or byte string. Internally, we use the existing popcount
functionality. For the name, after some discussion, we settled on bit_count,
which also exists with this meaning in MySQL, Java, and Python. Author: David
Fetter <david(at)fetter(dot)org> Discussion:
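
A quick illustration of the semantics described above, for both argument types:

```sql
-- count the set bits in a bit string and in a bytea value
SELECT bit_count(B'1101');        -- 3
SELECT bit_count('\x0f'::bytea);  -- 4
```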

- pgcrypto: Check for error return of px_cipher_decrypt(). This has previously
not been a problem (that anyone ever reported), but in future OpenSSL versions
(3.0.0), where legacy ciphers are/can be disabled, this is the place where
this is reported. So we need to catch the error here, otherwise the
higher-level functions would return garbage. The nearby encryption code
already handled errors similarly. Reviewed-by: Daniel Gustafsson
<daniel(at)yesql(dot)se> Discussion:

- Improve an error message. Make it the same as another nearby message.

- Add date_bin function. Similar to date_trunc, but allows binning by an
arbitrary interval rather than just full units. Author: John Naylor
<john(dot)naylor(at)enterprisedb(dot)com> Reviewed-by: David Fetter <david(at)fetter(dot)org>
Reviewed-by: Isaac Morland <isaac(dot)morland(at)gmail(dot)com> Reviewed-by: Tom Lane
<tgl(at)sss(dot)pgh(dot)pa(dot)us> Reviewed-by: Artur Zakirov <zaartur(at)gmail(dot)com> Discussion:
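
For instance, binning a timestamp into 15-minute buckets measured from an
arbitrary origin:

```sql
SELECT date_bin('15 minutes',
                timestamp '2021-03-28 15:44:17',
                timestamp '2001-01-01');
-- returns 2021-03-28 15:30:00
```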

- Fix stray double semicolons. Reported-by: John Naylor

- doc: Fix typo. Reported-by: Erik Rijkers <er(at)xs4all(dot)nl>

- Rename a parse node to be more general. A WHERE clause will be used for row
filtering in logical replication. We already have a similar node: 'WHERE
(condition here)'. Let's rename the node to a generic name and use it for row
filtering too. Author: Euler Taveira <euler(dot)taveira(at)enterprisedb(dot)com>

- Trim some extra whitespace in parser file.

- Improve consistency of SQL code capitalization.

Stephen Frost pushed:

- Change checkpoint_completion_target default to 0.9. Common recommendations are
that the checkpoint should be spread out as much as possible, provided we
avoid having it take too long. This change updates the default to 0.9 (from
0.5) to match that recommendation. There was some debate about possibly
removing the option entirely but it seems there may be some corner-cases where
having it set much lower to try to force the checkpoint to be as fast as
possible could result in fewer periods of time of reduced performance due to
kernel flushing. General agreement is that the "spread more" is the preferred
approach though and those who need to tune away from that value are much less
common. Reviewed-By: Michael Paquier, Peter Eisentraut, Tom Lane, David
Steele, Nathan Bossart Discussion:
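
Sites that have tuned for the old behavior can pin the previous default
explicitly; a sketch (requires superuser, and the setting takes effect on
configuration reload):

```sql
ALTER SYSTEM SET checkpoint_completion_target = 0.5;
SELECT pg_reload_conf();
```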

- doc: Define TLS as an acronym. Commit c6763156589 added an acronym reference
for "TLS" but the definition was never added. Author: Daniel Gustafsson
Reviewed-by: Michael Paquier Backpatch-through: 9.6 Discussion:

Michael Meskes pushed:

- Add DECLARE STATEMENT command to ECPG. This command declares a SQL identifier
for a SQL statement to be used in other embedded SQL statements. The
identifier is linked to a connection. Author: Hayato Kuroda
<kuroda(dot)hayato(at)fujitsu(dot)com> Reviewed-by: Shawn Wang <shawn(dot)wang(dot)pg(at)gmail(dot)com>

- Need to step forward in the loop to get to an end.

Álvaro Herrera pushed:

- Remove StoreSingleInheritance reimplementation. I introduced this duplicate
code in commit 8b08f7d4820f for no good reason. Remove it, and backpatch to
11 where it was introduced. Author: Álvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>

- Rework HeapTupleHeader macros to reuse itemptr.h. The original definitions
pointlessly disregarded existing ItemPointer macros that do the same thing.
Reported-by: Michael Paquier <michael(at)paquier(dot)xyz> Discussion:

- Let ALTER TABLE Phase 2 routines manage the relation pointer. Struct
AlteredRelationInfo gains a new Relation member, to be used only by Phase 2
(ATRewriteCatalogs); this allows ATExecCmd() subroutines to open and close the
relation internally. A future commit will use this facility to implement an
ALTER TABLE subcommand that closes and reopens the relation across transaction
boundaries. (It is possible to keep the relation open past phase 2 to be used
by phase 3 instead of having to reopen it at that point, but there are some minor
complications with that; it's not clear that there is much to be won from
doing that, though.) Author: Álvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>

- Add comments for AlteredTableInfo->rel. The prior commit which introduced it
was pretty squalid in terms of code documentation, so add some comments.

- Document lock obtained during partition detach. On partition detach, we
acquire a SHARE lock on all tables that reference the partitioned table that
we're detaching a partition from, but failed to document this fact. My
oversight in commit f56f8f8da6af. Repair. Backpatch to 12. Author: Álvaro
Herrera <alvherre(at)alvh(dot)no-ip(dot)org> Discussion:

- ALTER TABLE ... DETACH PARTITION ... CONCURRENTLY. Allow a partition to be
detached from its partitioned table without blocking concurrent queries, by
running in two transactions and only requiring ShareUpdateExclusive in the
partitioned table. Because it runs in two transactions, it cannot be used in
a transaction block. This is the main reason to use dedicated syntax: so that
users can choose to use the original mode if they need it. But also, it
doesn't work when a default partition exists (because an exclusive lock would
still need to be obtained on it, in order to change its partition constraint.)
In case the second transaction is cancelled or a crash occurs, there's ALTER
TABLE .. DETACH PARTITION .. FINALIZE, which executes the final steps. The
main trick to make this work is the addition of column
pg_inherits.inhdetachpending, initially false; can only be set true in the
first part of this command. Once that is committed, concurrent transactions
that use a PartitionDirectory will include or ignore partitions so marked: in
optimizer they are ignored if the row is marked committed for the snapshot; in
executor they are always included. As a result, and because of the way
PartitionDirectory caches partition descriptors, queries that were planned
before the detach will see the rows in the detached partition and queries that
are planned after the detach, won't. A CHECK constraint is created that
duplicates the partition constraint. This is probably not strictly necessary,
and some users will prefer to remove it afterwards, but if the partition is
re-attached to a partitioned table, the constraint needn't be rechecked.
Author: Álvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org> Reviewed-by: Amit Langote
<amitlangote09(at)gmail(dot)com> Reviewed-by: Justin Pryzby <pryzby(at)telsasoft(dot)com>
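
The two-step flow described above can be sketched as follows (table and
partition names are illustrative):

```sql
-- runs in two transactions; cannot be used inside a transaction block
ALTER TABLE parted DETACH PARTITION part1 CONCURRENTLY;

-- if the second transaction was cancelled or the server crashed,
-- complete the detach with:
ALTER TABLE parted DETACH PARTITION part1 FINALIZE;
```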

# Pending Patches

Kyotaro HORIGUCHI sent in another revision of a patch to make async replica wait
for the lsn to be replayed.

Bharath Rupireddy sent in a patch to remove extra memset calls in BloomInitPage,
GinInitPage, and SpGistInitPage.

Hou Zhijie sent in another revision of a patch to avoid CommandCounterIncrement
in RI trigger when INSERT INTO referencing table.

Amul Sul sent in another revision of a patch to build the infrastructure that

Amit Langote sent in another revision of a patch to fix an infelicity between
UPDATE ... RETURNING and moving tuples between partitions.

Greg Nancarrow sent in another revision of a patch to enable parallel INSERT and

Tang sent in another revision of a patch to support tab completion with a query
result for upper case character inputs in psql.

Tom Lane sent in another revision of a patch to allow an alias to be attached
directly to a JOIN ... USING per the SQL standard.

David Oksman sent in a patch to implement ALTER TABLE ... RENAME COLUMN IF

Andrei Zubkov sent in two revisions of a patch to add statement entry timestamps
to pg_stat_statements.

Thomas Munro sent in another revision of a patch to add PSQL_WATCH_PAGER for
psql's \watch command.

Thomas Munro sent in four more revisions of a patch to detect dropped
connections while running queries.

Fujii Masao sent in a patch intended to fix a bug that manifested as a failed
assertion on a standby during shutdown, by making the startup process
call ShutdownRecoveryTransactionEnvironment() when it exits.

Peter Eisentraut sent in another revision of a patch to add a
result_format_auto_binary_types setting.

Jan Wieck sent in three more revisions of a patch to fix pg_upgrade to preserve

Bertrand Drouvot sent in four more revisions of a patch to implement minimal
logical decoding on standbys.

Pavel Stěhule sent in two more revisions of a patch to implement schema

Marcus Wanner sent in two more revisions of a patch to add an xid argument to
the filter_prepare callback for output plugins.

Amul Sul sent in two more revisions of a patch to add an RelationGetSmgr()
inline function.

Peter Smith and Amit Kapila traded patches to add logical decoding of two-phase

Euler Taveira de Oliveira and Peter Eisentraut traded patches to add row
filtering for logical replication.

Kyotaro HORIGUCHI sent in another revision of a patch to change the stats
collector's temporary storage from files to shared memory.

Masahiro Ikeda and Fujii Masao traded patches to make the WAL receiver report
WAL statistics.

Bruce Momjian and Julien Rouhaud traded patches to expose queryid in
pg_stat_activity, log_line_prefix, and verbose explain.

Atsushi Torikoshi sent in four more revisions of a patch to add a function,
pg_get_backend_memory_contexts(), which does what it says on the label.

Daniel Gustafsson sent in two more revisions of a patch to support NSS as a
libpq TLS backend.

Michaël Paquier and Jeevan Chalke traded patches to log authenticated identity
from all auth backends.

Stephen Frost sent in another revision of a patch to use a WaitLatch for
vacuum/autovacuum sleeping.

Stephen Frost sent in three more revisions of a patch to add a documentation
stub for the now obsolete recovery.conf.

Justin Pryzby sent in another revision of a patch to add an optional ACCESS

Takayuki Tsunakawa sent in two more revisions of a patch to speed up COPY FROM
for the case of remote partitions.

Amit Langote sent in another revision of a patch to create foreign key triggers
in partitioned tables, and use this to enforce foreign key correctly during
cross-partition updates.

David Rowley sent in two more revisions of a patch to add a Result Cache
executor node.

Li Japin sent in another revision of a patch to implement ALTER SUBSCRIPTION ...

Tomáš Vondra sent in a patch to fix up an opclass storage type.

Fujii Masao sent in another revision of a patch to rename WalrcvExit wait_event
to WalReceiverExit.

Andrey V. Lepikhov sent in another revision of a patch to implement global

Atsushi Torikoshi sent in two more revisions of a patch to add plan type to

Denis Hirn sent in a patch to allow multiple recursive self-references in WITH

Masahiro Ikeda and Fujii Masao traded patches to get pgstat to avoid writing on

Kyotaro HORIGUCHI sent in another revision of a patch to protect syscache from
bloating with negative cache entries.

Hou Zhijie sent in another revision of a patch to add a nullif case for

Mark Dilger and Robert Haas traded patches to add a pg_amcheck contrib

Daniel Gustafsson sent in another revision of a patch to refactor the SSL test
harness to allow for multiple libraries.

Pavel Stěhule sent in two more revisions of a patch to add routine labels.

Thomas Munro sent in four more revisions of a patch to make all SLRU buffer
sizes configurable.

Peter Geoghegan and Masahiko Sawada traded patches to centralize state for each
VACUUM, break lazy_scan_heap() up into functions, remove the tupgone special
case from vacuumlazy.c, and skip index vacuuming in some cases.

Kyotaro HORIGUCHI sent in another revision of a patch to implement in-place
table persistence change and add a new command, ALTER TABLE ALL IN TABLESPACE
SET LOGGED/UNLOGGED, to go with it.

Ashutosh Bapat sent in another revision of a patch to fix a memory leak in
decoding speculative inserts with TOAST.

Ekaterina Sokolova sent in another revision of a patch to add extra statistics
to explain for Nested Loop.

Pavel Borisov sent in two more revisions of a patch to implement covering
SP-GiST indexes, i.e. support for INCLUDE columns.

Marcus Wanner sent in a patch to add a concurrent_abort callback for the output
plugin.

Joel Jacobson sent in another revision of a patch to add views pg_permissions
and pg_ownerships.

Bharath Rupireddy sent in another revision of a patch to make the error messages
while adding tables to publications a bit more informative and consistent.

Kyotaro HORIGUCHI sent in another revision of a patch intended to fix a bug
where walsender may fail to send WAL to the end.

Jim Finnerty sent in another revision of a patch to add a capability to have
64-bit GUCs, use XID_FMT to format xids, and use ClogPageNumber in place of int
for type safety.

Sven Klemm sent in a patch to allow CustomScan nodes to signal whether they
support projection.

Andrew Dunstan and Nikita Glukhov traded patches to implement the JSON_TABLE
part of SQL/JSON.

Andrew Dunstan and Nikita Glukhov traded patches to implement the functions part

Amit Langote and Tom Lane traded patches to make updates in inheritance trees
scale better by overhauling how updates compute new tuples, and revise how
inherited update/delete are handled.

David Steele sent in two revisions of a patch to document the fact that backup
labels may need to be opened in binary mode on Windows.

Cai Mengjuan sent in a patch to update walrcv->flushedUpto each time when
requesting xlog streaming.

Andrew Dunstan sent in another revision of a patch to allow matching the whole
DN from a client certificate.

Masahiro Ikeda sent in a patch to improve the performance of reporting wal

Tomáš Vondra sent in a patch to show applied extended statistics in explain.

Noah Misch sent in another revision of a patch to add a public schema default

Lætitia Avrot sent in two revisions of a patch to make it possible to dump only
functions using pg_dump.

Noah Misch sent in another revision of a patch to accept slightly-filled pages
for tuples larger than fillfactor.

Álvaro Herrera sent in two more revisions of a patch to add tracing capability
to libpq.

Kazutaka Onishi sent in another revision of a patch to make TRUNCATE on foreign
tables work.

Andrew Dunstan sent in another revision of a patch to implement global temporary

Yoan SULTAN sent in a patch to make it possible for pg_stat_statements to track
the most recent statement.

David Rowley sent in another revision of a patch to get better results from
valgrind leak tracking.
