PostgreSQL Weekly News - March 14, 2021

From: PWN via PostgreSQL Announce <announce-noreply(at)postgresql(dot)org>
To: PostgreSQL Announce <pgsql-announce(at)lists(dot)postgresql(dot)org>
Subject: PostgreSQL Weekly News - March 14, 2021
Date: 2021-03-15 08:20:01
Lists: pgsql-announce



# PostgreSQL Jobs for March


# PostgreSQL in the News


PostgreSQL Weekly News is brought to you this week by David Fetter

Submit news and announcements by Sunday at 3:00pm PST8PDT to david(at)fetter(dot)org.

# Applied Patches

Amit Kapila pushed:

- Track replication origin progress for rollbacks. Commit 1eb6d6527a allowed
tracking of replication origin replay progress for 2PC, but it was not
complete: it failed to properly track the progress for ROLLBACK PREPARED, and
in particular missed updating the code for recovery. Additionally, we need to
allow tracking it on subscriber nodes where wal_level might not be logical.
This is required for decoding of 2PC, which was committed in PG14 (a271a1b50e).
Nobody has complained about this till now, so it is not being backpatched.
Author: Amit Kapila Reviewed-by: Michael Paquier and Ajin Cherian Discussion:

- Enable parallel SELECT for "INSERT INTO ... SELECT ...". Parallel SELECT can't
be utilized for INSERT in the following cases: - INSERT statement uses the ON
CONFLICT DO UPDATE clause - Target table has a parallel-unsafe trigger, index
expression or predicate, column default expression or check constraint -
Target table has a parallel-unsafe domain constraint on any column - Target
table is a partitioned table with a parallel-unsafe partition key expression
or support function The planner is updated to perform additional
parallel-safety checks for the cases listed above, for determining whether it
is safe to run INSERT in parallel-mode with an underlying parallel SELECT. The
planner will consider using parallel SELECT for "INSERT INTO ... SELECT ...",
provided nothing unsafe is found from the additional parallel-safety checks,
or from the existing parallel-safety checks for SELECT. While checking
parallel-safety, we need to check it for all the partitions on the table which
can be costly especially when we decide not to use a parallel plan. So, in a
separate patch, we will introduce a GUC and/or a reloption to enable/disable
parallelism for Insert statements. Prior to entering parallel-mode for the
execution of INSERT with parallel SELECT, a TransactionId is acquired and
assigned to the current transaction state. This is necessary to prevent the
INSERT from attempting to assign the TransactionId whilst in parallel-mode,
which is not allowed. This approach has a disadvantage in that if the
underlying SELECT does not return any rows, then the TransactionId is not
used; however, that shouldn't happen often in practice. Author: Greg
Nancarrow, Amit Langote, Amit Kapila Reviewed-by: Amit Langote, Hou Zhijie,
Takayuki Tsunakawa, Antonin Houska, Bharath Rupireddy, Dilip Kumar, Vignesh C,
Zhihong Yu, Amit Kapila Tested-by: Tang, Haiying Discussion:
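
A minimal sketch of the new behavior (table and column names here are
hypothetical; this assumes a PG14 build with the patch applied):

```sql
-- Hypothetical tables for illustration.
CREATE TABLE sales (id int, amount numeric, sale_date date);
CREATE TABLE sales_archive (id int, amount numeric);

-- With no parallel-unsafe triggers, constraints, or defaults on the
-- target table, the planner may now parallelize the SELECT part:
EXPLAIN (COSTS OFF)
INSERT INTO sales_archive
SELECT id, amount FROM sales WHERE sale_date < '2020-01-01';
-- A "Gather" node over a parallel scan in the resulting plan indicates
-- that the underlying SELECT ran in parallel-mode.
```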

- Fix valgrind issue in commit 05c8482f7f. Initialize other newly added
variables in max_parallel_hazard_context via is_parallel_safe() because we
don't check the parallel-safety of target relations in that function.
Reported-by: Tom Lane as per buildfarm Author: Amit Kapila Discussion:

- Fix use of relcache TriggerDesc field introduced by commit 05c8482f7f. The
commit added code which used a relcache TriggerDesc field across another cache
access, which it shouldn't because the relcache doesn't guarantee it won't get
moved. Diagnosed-by: Tom Lane Author: Greg Nancarrow Reviewed-by: Hou Zhijie,
Amit Kapila Discussion:

- Fix size overflow in calculation introduced by commits d6ad34f3 and bea449c6.
Reported-by: Thomas Munro Author: Takayuki Tsunakawa Reviewed-by: Kyotaro
Horiguchi Discussion:

- Improve FK trigger parallel-safety check added by 05c8482f7f. Commit
05c8482f7f added special logic related to parallel-safety of FK triggers. This
is a bit of a hack and should have instead been done by simply setting
appropriate proparallel values on those trigger functions themselves.
Suggested-by: Tom Lane Author: Greg Nancarrow Reviewed-by: Amit Kapila

Tom Lane pushed:

- Further tweak memory management for regex DFAs. Coverity is still unhappy
after commit 190c79884, and after looking closer I think it might be onto
something. The callers of newdfa() typically drop out if v->err has been set
nonzero, which newdfa() is faithfully doing if it fails. However, what if
v->err was already nonzero before we entered newdfa()? Then newdfa() could
succeed and the caller would promptly leak its result. I don't think this
scenario can actually happen, but the predicate "v->err is always zero when
newdfa() is called" seems difficult to be entirely sure of; there's a good
deal of code that potentially could get that wrong. It seems better to adjust
the callers to directly check for a null result instead of relying on ISERR()
tests. This is slightly cheaper than the previous coding anyway. Lacking
evidence that there's any real bug, no back-patch.

- Validate the OID argument of pg_import_system_collations(). "SELECT
pg_import_system_collations(0)" caused an assertion failure. With a random
nonzero argument --- or indeed with zero, in non-assert builds --- it would
happily make pg_collation entries with garbage values of collnamespace. These
are harmless as far as I can tell (unless maybe the OID happens to become used
for a schema, later on?). In any case this isn't a security issue, since the
function is superuser-only. But it seems like a gotcha for unwary DBAs, so
let's add a check that the given OID belongs to some schema. Back-patch to
v10 where this function was introduced.
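
Sketched usage (the function is superuser-only; the regnamespace cast is
one way to supply a valid schema OID):

```sql
-- Valid: pass the OID of an existing schema to own the new entries.
SELECT pg_import_system_collations('pg_catalog'::regnamespace);

-- After this fix, an OID that does not belong to any schema is rejected
-- with an error instead of creating pg_collation rows with garbage
-- collnamespace values:
SELECT pg_import_system_collations(0);  -- now an error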

- Properly mark pg_stat_get_subscription() as returning a set. The initial
catalog data for this function failed to set proretset or provide a prorows
estimate. It accidentally worked anyway when invoked in the FROM clause,
because the executor isn't too picky about this; but the planner didn't expect
the function to return multiple rows, which could lead to bad plans. Also the
function would fail if invoked in the SELECT list. We can't easily back-patch
this fix, but fortunately the bug's consequences aren't awful in most cases.
Getting this right is mainly an exercise in future-proofing. Discussion:
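
For illustration, the two invocation styles in question (a NULL argument
means all subscriptions):

```sql
-- FROM-clause invocation: worked before, but without proretset set the
-- planner did not expect multiple rows; now it plans this correctly.
SELECT * FROM pg_stat_get_subscription(NULL);

-- SELECT-list invocation: failed outright before this fix.
SELECT pg_stat_get_subscription(NULL);
```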

- Complain if a function-in-FROM returns a set when it shouldn't. Throw a
"function protocol violation" error if a function in FROM tries to return a
set though it wasn't marked proretset. Although such cases work at the
moment, it doesn't seem like something we want to guarantee will keep working.
Besides, there are other negative consequences of not setting the proretset
flag, such as potentially bad plans. No back-patch, since if there is any
third-party code violating this expectation, people wouldn't appreciate us
breaking it in a minor release. Discussion:

- Doc: fix missing mention of procedure OUT parameters. Small oversight in
commit 2453ea142.

- Doc: improve introductory information about procedures. Clarify the discussion
in "User-Defined Procedures", by laying out the key differences between
functions and procedures in a bulleted list. Notably, this avoids burying the
lede about procedures being able to do transaction control. Make the
back-link in the CREATE FUNCTION reference page more prominent, and add one in
CREATE PROCEDURE. Per gripe from Guyren Howe. Thanks to David Johnston for
discussion. Discussion:

- Doc: get rid of <foreignphrase> tags. We italicized some, but not all,
instances of "per se", "pro forma", and "ad hoc". These phrases are widespread
in formal registers of English, so it's debatable whether they even qualify as
foreign. We could instead try to be more consistent in the use of
<foreignphrase>, but that's difficult to enforce, so let's just remove the
tags for those words. The one case that seems to deserve the tag is "voilà".
Instead of keeping just one instance of the tag, change that to a more
standard phrase. John Naylor Discussion:

- Avoid creating duplicate cached plans for inherited FK constraints. When a
foreign key constraint is applied to a partitioned table, each leaf partition
inherits a similar FK constraint. We were processing all of those constraints
independently, meaning that in large partitioning trees we'd build up large
collections of cached FK-checking query plans. However, in all cases but one,
the generated queries are actually identical for all members of the
inheritance tree (because, in most cases, the query only mentions the topmost
table of the other side of the FK relationship). So we can share a single
cached plan among all the partitions, saving memory, not to mention time to
build and maintain the cached plans. Keisuke Kuroda and Amit Langote
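
A rough illustration (hypothetical tables): every partition inherits the
same FK, and the RI-checking query against the referenced table is
identical for all of them, so one cached plan can now be shared:

```sql
CREATE TABLE customers (id int PRIMARY KEY);
CREATE TABLE orders (
    id int,
    cust_id int REFERENCES customers (id)
) PARTITION BY RANGE (id);
CREATE TABLE orders_p1 PARTITION OF orders FOR VALUES FROM (0) TO (1000);
CREATE TABLE orders_p2 PARTITION OF orders FOR VALUES FROM (1000) TO (2000);
-- Inserts into either partition run, in effect, the same RI check
-- (roughly SELECT 1 FROM customers WHERE id = $1 FOR KEY SHARE), which
-- previously produced one cached plan per partition.
```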

- Re-simplify management of inStart in pqParseInput3's subroutines. Commit
92785dac2 copied some logic related to advancement of inStart from
pqParseInput3 into getRowDescriptions and getAnotherTuple, because it wanted
to allow user-defined row processor callbacks to potentially longjmp out of
the library, and inStart would have to be updated before that happened to
avoid an infinite loop. We later decided that that API was impossibly fragile
and reverted it, but we didn't undo all of the related code changes, and this
bit of messiness survived. Undo it now so that there's just one place in
pqParseInput3's processing where inStart is advanced; this will simplify
addition of better tracing support. getParamDescriptions had grown similar
processing somewhere along the way (not in 92785dac2; I didn't track down just
when), but it's actually buggy because its handling of corrupt-message cases
seems to have been copied from the v2 logic where we lacked a known message
length. The cases where we "goto not_enough_data" should not simply return
EOF, because then we won't consume the message, potentially creating an
infinite loop. That situation now represents a definitively corrupt message,
and we should report it as such. Although no field reports of
getParamDescriptions getting stuck in a loop have been seen, it seems
appropriate to back-patch that fix. I chose to back-patch all of this to keep
the logic looking more alike in supported branches. Discussion:

- Forbid marking an identity column as nullable. GENERATED ALWAYS AS IDENTITY
implies NOT NULL, but the code failed to complain if you overrode that with
"GENERATED ALWAYS AS IDENTITY NULL". One might think the old behavior was a
feature, but it was inconsistent because the outcome varied depending on the
order of the clauses, so it seems to have been just an oversight. Per bug
#16913 from Pavel Boev. Back-patch to v10 where identity columns were
introduced. Vik Fearing (minor tweaks by me) Discussion:
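
For example (hypothetical table):

```sql
-- Previously accepted, with the outcome depending on clause order;
-- now rejected outright, since identity columns imply NOT NULL:
CREATE TABLE t (
    id int GENERATED ALWAYS AS IDENTITY NULL
);
-- ERROR
```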

- Fix race condition in psql \e's detection of file modification. psql's editing
commands decide whether the user has edited the file by checking for change of
modification timestamp. This is probably fine for a pre-existing file, but
with a temporary file that is created within the command, it's possible for a
fast typist to save-and-exit in less than the one-second granularity of
stat(2) timestamps. On Windows FAT filesystems the granularity is even worse,
2 seconds, making the race a bit easier to hit. To fix, try to set the temp
file's mod time to be two seconds ago. It's unlikely this would fail, but then
again the race condition itself is unlikely, so just ignore any error. Also,
we might as well check the file size as well as its mod time. While this is a
difficult bug to hit, it still seems worth back-patching, to ensure that
users' edits aren't lost. Laurenz Albe, per gripe from Jacob Champion; based
on fix suggestions from Jacob and myself Discussion:

- pg_amcheck: Keep trying to fix the tests. Fix another example of non-portable
option ordering in the tests. Oversight in 24189277f. Mark Dilger

- Doc: add note about how to run the pg_amcheck regression tests. It's not
immediately obvious what you have to do to get "make installcheck" to work
here, so document that along the same lines as we've used elsewhere.

Michaël Paquier pushed:

- Remove support for SSL compression. PostgreSQL disabled compression as of
e3bdb2d and the documentation recommends against using it since.
Additionally, SSL compression has been disabled in OpenSSL since version
1.1.0, and was disabled in many distributions long before that. The most
recent TLS version, TLSv1.3, disallows compression at the protocol level.
This commit removes the feature itself, removing support for the libpq
parameter sslcompression (parameter still listed for compatibility reasons
with existing connection strings, just ignored), and removes the equivalent
field in pg_stat_ssl and de facto PgBackendSSLStatus. Note that, on top of
removing the ability to activate compression by configuration, compression is
actively disabled in both frontend and backend to avoid overrides from local
configurations. A TAP test is added for deprecated SSL parameters to check
backwards compatibility. Bump catalog version. Author: Daniel
Gustafsson Reviewed-by: Peter Eisentraut, Magnus Hagander, Michael Paquier

- Add support for more progress reporting in COPY. The command (TO or FROM), its
type (file, pipe, program or callback), and the number of tuples excluded by a
WHERE clause in COPY FROM are added to the progress reporting already
available. The column "lines_processed" is renamed to "tuples_processed" to
disambiguate the meaning of this column in the cases of CSV and BINARY COPY
and to be more consistent with the other catalog progress views. Bump catalog
version, again. Author: Matthias van de Meent Reviewed-by: Michael Paquier,
Justin Pryzby, Bharath Rupireddy, Josef Šimánek, Tomas Vondra Discussion:
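
While a long-running COPY is in progress, another session can now
observe, for example:

```sql
SELECT relid::regclass, command, type,
       tuples_processed, tuples_excluded
FROM pg_stat_progress_copy;
-- "command" distinguishes COPY FROM and COPY TO, "type" the data source
-- or destination (file, pipe, program, callback), and tuples_excluded
-- counts rows filtered out by a WHERE clause in COPY FROM.
```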

- Switch back sslcompression to be a normal input field in libpq. Per buildfarm
member crake, any servers including a postgres_fdw server with this option set
would fail to do a pg_upgrade properly as the option got hidden in f9264d1 by
becoming a debug option, making the restore of the FDW server fail. This
changes back the option in libpq to be visible, but still inactive to fix this
upgrade issue. Discussion:

- Revert changes for SSL compression in libpq. This partially reverts 096bbf7
and 9d2d457, undoing the libpq changes as it could cause breakages in
distributions that share one single libpq version across multiple major
versions of Postgres for extensions and applications linking to that. Note
that the backend is unchanged here, and it still disables SSL compression
while simplifying the underlying catalogs that tracked if compression was
enabled or not for a SSL connection. Per discussion with Tom Lane and Daniel
Gustafsson. Discussion:

- Move tablespace path re-creation from the makefiles to pg_regress. Moving this
logic into pg_regress fixes a potential failure with parallel tests when
pg_upgrade and the main regression test suite both trigger the makefile rule
that cleaned up testtablespace/ under src/test/regress. Even if pg_upgrade
was triggering this rule, it has no need to do so as it uses a different
tablespace path. So if pg_upgrade triggered the makefile rule for the
tablespace setup while the main regression test suite ran the tablespace
cases, it would fail. 61be85a was a similar attempt at achieving that, but
it broke cases where the regression tests need to run under an
Administrator account, as with Appveyor. Reported-by: Andres Freund,
Kyotaro Horiguchi Reviewed-by: Peter Eisentraut Discussion:

- Set libcrypto callbacks for all connection threads in libpq. Based on an
analysis of the OpenSSL code with Jacob, moving to EVP for the cryptohash
computations makes necessary the setup of the libcrypto callbacks that were
getting set only for SSL connections, but not for connections without SSL.
Not setting the callbacks makes the use of threads potentially unsafe for
connections calling cryptohashes during authentication, like MD5 or SCRAM, if
a failure happens during a cryptohash computation. The logic setting the
libssl and libcrypto states is then split into two parts, both using the same
locking, with libcrypto being set up for SSL and non-SSL connections, while
SSL connections set any libssl state afterwards as needed. Prior to this
commit, only SSL connections would have set libcrypto callbacks that are
necessary to ensure a proper thread locking when using multiple concurrent
threads in libpq (ENABLE_THREAD_SAFETY). Note that this is only required for
OpenSSL 1.0.2 and 1.0.1 (oldest version supported on HEAD), as 1.1.0 has its
own internal locking and it has dropped support for
CRYPTO_set_locking_callback(). Tests with up to 300 threads with OpenSSL
1.0.1 and 1.0.2, mixing SSL and non-SSL connection threads did not show any
performance impact after some micro-benchmarking. pgbench can be used here
with -C and a mostly-empty script (with one \set meta-command for example) to
stress authentication requests, and we have mixed that with some custom
programs for testing. Reported-by: Jacob Champion Author: Michael Paquier
Reviewed-by: Jacob Champion Discussion:

Fujii Masao pushed:

- Track total amounts of times spent writing and syncing WAL data to disk. This
commit adds new GUC track_wal_io_timing. When this is enabled, the total
amounts of time XLogWrite writes and issue_xlog_fsync syncs WAL data to disk
are counted in pg_stat_wal. This information would be useful to check how much
WAL write and sync affect the performance. Enabling track_wal_io_timing will
make the server query the operating system for the current time every time WAL
is written or synced, which may cause significant overhead on some platforms.
To avoid such additional overhead in the server with track_io_timing enabled,
this commit introduces track_wal_io_timing as a separate parameter from
track_io_timing. Note that WAL write and sync activity by walreceiver has not
been tracked yet. This commit makes the server also track the numbers of
times XLogWrite writes and issue_xlog_fsync syncs WAL data to disk, in
pg_stat_wal, regardless of the setting of track_wal_io_timing. These counters
can be used to calculate the WAL write and sync time per request, for example.
Bump PGSTAT_FILE_FORMAT_ID. Bump catalog version. Author: Masahiro Ikeda
Reviewed-By: Japin Li, Hayato Kuroda, Masahiko Sawada, David Johnston, Fujii
Masao Discussion:
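
A sketch of how the new columns might be consulted; the per-request
arithmetic is the calculation suggested above, and the timing columns
only advance while track_wal_io_timing is enabled:

```sql
SET track_wal_io_timing = on;  -- superuser-settable parameter

-- Cumulative counts and times (times in milliseconds):
SELECT wal_write, wal_sync, wal_write_time, wal_sync_time
FROM pg_stat_wal;

-- Average sync time per request, guarding against division by zero:
SELECT wal_sync_time / NULLIF(wal_sync, 0) AS avg_sync_ms
FROM pg_stat_wal;
```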

- Force to send remaining WAL stats to the stats collector at walwriter exit. In
walwriter's main loop, a WAL stats message is only sent if enough time has
passed since the last one to reach PGSTAT_STAT_INTERVAL msecs. This is
necessary to avoid overloading the stats collector, but it can cause
recent WAL stats to go unsent when walwriter exits. To ensure that all the
WAL stats are sent, this commit makes walwriter forcibly send its remaining
WAL stats to the collector when it exits because of a shutdown request. Note
that those remaining WAL stats can still be unsent when walwriter exits with a
non-zero exit code (e.g., FATAL error). This is OK because such an exit
leads to a server crash, and the subsequent recovery discards all the stats,
so there is no need to send remaining stats in that case. Author: Masahiro Ikeda
Reviewed-by: Fujii Masao Discussion:

- Send statistics collected during shutdown checkpoint to the stats collector.
When shutdown is requested, checkpointer performs checkpoint or restartpoint,
and updates the statistics, before it exits. But previously checkpointer
didn't send those statistics to the stats collector. Shutdown checkpoints and
restartpoints are treated as requested rather than scheduled, so they are
counted in the pg_stat_bgwriter.checkpoints_req column.
Author: Masahiro Ikeda Reviewed-by: Fujii Masao Discussion:

Peter Eisentraut pushed:

- libpq: Remove deprecated connection parameters authtype and tty. The authtype
parameter was deprecated and made inactive in commit d5bbe2aca55bc8, but the
environment variable was left defined and thus tested with a getenv call even
though the value is of no use. Also, if it would exist it would be copied but
never freed as the cleanup code had been removed. tty was deprecated in
commit cb7fb3ca958ec8bd5a14e7 but most of the infrastructure around it
remained in place. Author: Daniel Gustafsson <daniel(at)yesql(dot)se> Discussion:

- Small debug message tweak. This makes the wording of the delete case match the
update case.

- Add bound check before bsearch() for performance. In the current lazy vacuum
implementation, some index AMs such as btree indexes call lazy_tid_reaped()
for each index tuple during ambulkdelete to check if the index tuple points to
the (collected) garbage tuple. In that function, we simply call bsearch(),
but we should be able to know the result without bsearch() if the index tuple
points to the heap tuple that is out of range of the collected garbage tuples.
Therefore, add a simple bound check before resorting to bsearch(). Testing
has shown that this can give significant performance benefits. Author:
Masahiko Sawada <masahiko(dot)sawada(at)2ndquadrant(dot)com> Discussion:

Alexander Korotkov pushed:

- Fix vague comment in jsonb documentation. The sample query fails because of an
attempt to update the key of a numeric. But the comment says it's just because
of the missing object key. That's not correct, because jsonb subscripting
automatically adds missing keys. Reported-by: Nikita Konev
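
The distinction, sketched:

```sql
CREATE TABLE docs (j jsonb);
INSERT INTO docs VALUES ('{"n": 1}');

-- A missing key is no problem: subscripting assignment creates it.
UPDATE docs SET j['new_key'] = '"ok"';

-- But subscripting through a non-object (here the number 1) fails,
-- which is the actual reason the documented sample query errors out.
UPDATE docs SET j['n']['x'] = '2';  -- ERROR
```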

Thomas Munro pushed:

- pgbench: Refactor thread portability support. Instead of maintaining an
incomplete emulation of POSIX threads for Windows, let's use an extremely
minimalist macro-based abstraction for now. A later patch will extend this,
without the need to supply more complicated pthread emulation code. (There
may be a need for a more serious portable thread abstraction in later
projects, but this is not it.) Minor incidental problems fixed: it wasn't OK
to use (pthread_t) 0 as a special value, it wasn't OK to compare pthread_t
values with ==, and we incorrectly assumed that pthread functions set errno.

- pgbench: Improve time logic. Instead of instr_time (struct timespec) and the
INSTR_XXX macros, introduce pg_time_usec_t and use integer arithmetic. Don't
include the connection time in TPS unless using -C mode, but report it
separately. Author: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr> Reviewed-by: Kyotaro
Horiguchi <horikyota(dot)ntt(at)gmail(dot)com> Reviewed-by: Hayato Kuroda
<kuroda(dot)hayato(at)fujitsu(dot)com> Discussion:

- Add missing pthread_barrier_t. Supply a simple implementation of the missing
pthread_barrier_t type and functions, for macOS. Discussion:

- pgbench: Synchronize client threads. Wait until all pgbench threads are
connected before benchmarking begins. This fixes a problem where some
connections could take a very long time to be established because of lock
contention from earlier connections, making results unstable and bogus with
high connection counts. Author: Andres Freund <andres(at)anarazel(dot)de> Author:
Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr> Reviewed-by: Marina Polyakova
<m(dot)polyakova(at)postgrespro(dot)ru> Reviewed-by: Kyotaro Horiguchi
<horikyota(dot)ntt(at)gmail(dot)com> Reviewed-by: Hayato Kuroda
<kuroda(dot)hayato(at)fujitsu(dot)com> Reviewed-by: David Rowley <dgrowleyml(at)gmail(dot)com>

- Try to fix portability bugs in recent pgbench commits. 1. pg_time_usec_t
needs to be printed with INT64_FORMAT, not %ld, or 32 bit systems complain,
per lapwing. 2. Some Windows compilers didn't like a thread function not
marked with `__stdcall`, per whelk; let's see if this fixes the problem.

- Fix another portability bug in recent pgbench commit. Commit 547f04e7 produced
errors on AIX/xlc while building plpython. The new code appears to be
incompatible with the hack installed by commit a11cf433. Without access to an
AIX system to check, my guess is that `_POSIX_C_SOURCE` may be required for
<time.h> to declare the things the header needs to see, but plpython.h
undefines it. For now, to unbreak build farm animal hoverfly, just move the
new pg_time_usec_t support into pgbench.c. Perhaps later we could figure out
what to rearrange to put it back into a header for wider use. Discussion:

- Replace buffer I/O locks with condition variables. 1. Backends waiting for
buffer I/O are now interruptible. 2. If something goes wrong in a backend
that is currently performing I/O, waiting backends no longer wake up until
that backend reaches AbortBufferIO() and broadcasts on the CV. Previously,
any waiters would wake up (because the I/O lock was automatically released)
and then busy-loop until AbortBufferIO() cleared BM_IO_IN_PROGRESS. 3.
LWLockMinimallyPadded is removed, as it would now be unused. Author: Robert
Haas <robertmhaas(at)gmail(dot)com> Reviewed-by: Thomas Munro
<thomas(dot)munro(at)gmail(dot)com> Reviewed-by: Julien Rouhaud <rjuju123(at)gmail(dot)com>
Reviewed-by: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> (earlier version, 2016) Discussion:

- Improve comment for struct BufferDesc. Add a note that per-buffer I/O
condition variables currently live outside the BufferDesc struct. Follow-up
for commit d8725104. Reported-by: Julien Rouhaud <rjuju123(at)gmail(dot)com>

- Minor modernization for README.barrier. Itanium is very uncommon and being
discontinued. ARM is everywhere. Prefer ARM as an example of an architecture
with weak memory ordering.

- Add condition variable for recovery resume. Replace a sleep loop with a CV, to
get a fast reaction time when recovery is resumed or the postmaster exits via
standard infrastructure. Unfortunately we still need to wake up every second
to perform extra polling during the recovery pause loop. Discussion:

- Add condition variable for walreceiver shutdown. Use this new CV to wait for
walreceiver shutdown without a sleep/poll loop, while also benefiting from
standard postmaster death handling. Discussion:

- Poll postmaster less frequently in recovery. Since commits 9f095299 and
f98b8476 we don't poll the postmaster pipe at all during crash recovery on
Linux and FreeBSD, but on other operating systems we were still doing it for
every WAL record. Do it less frequently on operating systems where system
calls are required, at the cost of delaying exit a bit after postmaster death.
This avoids expensive system calls reported to slow down CPU-bound recovery by
as much as 10-30%. Reviewed-by: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>
Reviewed-by: Fujii Masao <masao(dot)fujii(at)oss(dot)nttdata(dot)com> Reviewed-by: Michael
Paquier <michael(at)paquier(dot)xyz> Discussion:

- Specialize checkpointer sort functions. When sorting a potentially large
number of dirty buffers, the checkpointer can benefit from a faster sort
routine. One reported improvement on a large buffer pool system was 1.4s ->
0.6s. Reviewed-by: Andres Freund <andres(at)anarazel(dot)de> Discussion:

- Fix new pthread code to respect --disable-thread-safety. Don't try to compile
src/port/pthread_barrier_wait.c if we opted out of threads at configure time.
Revealed by build farm member gaur, which can't compile this code because of
problems with its pthread implementation. It shouldn't be trying to, because
it's using --disable-thread-safety. Defect in commit 44bf3d50. Reported-by:
Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> Discussion:

Bruce Momjian pushed:

- C comments: improve description of GiST NSN and GistBuildLSN. GiST indexes
are complex, so adding more details in the code might help someone.

- tutorial: land height is "elevation", not "altitude". This is a follow-on
patch to 92c12e46d5. In that patch, we renamed "altitude" to "elevation" in
the docs, based on these details:
This renames the tutorial SQL files to match the documentation. Reported-by:
max1(at)inbox(dot)ru Discussion:
Backpatch-through: 9.6

Peter Geoghegan pushed:

- Don't consider newly inserted tuples in nbtree VACUUM. Remove the entire idea
of "stale stats" within nbtree VACUUM (stop caring about stats involving the
number of inserted tuples). Also remove the vacuum_cleanup_index_scale_factor
GUC/param on the master branch (though just disable them on postgres 13). The
vacuum_cleanup_index_scale_factor/stats interface made the nbtree AM partially
responsible for deciding when pg_class.reltuples stats needed to be updated.
This seems contrary to the spirit of the index AM API, though -- it is not
actually necessary for an index AM's bulk delete and cleanup callbacks to
provide accurate stats when it happens to be inconvenient. The core code owns
that. (Index AMs have the authority to perform or not perform certain kinds
of deferred cleanup based on their own considerations, such as page deletion
and recycling, but that has little to do with
pg_class.reltuples/num_index_tuples.) This issue was fairly harmless until
the introduction of the autovacuum_vacuum_insert_threshold feature by commit
b07642db, which had an undesirable interaction with the
vacuum_cleanup_index_scale_factor mechanism: it made insert-driven autovacuums
perform full index scans, even though there is no real benefit to doing so.
This has been tied to a regression with an append-only insert benchmark [1].
Also have remaining cases that perform a full scan of an index during a
cleanup-only nbtree VACUUM indicate that the final tuple count is only an
estimate. This prevents vacuumlazy.c from setting the index's
pg_class.reltuples in those cases (it will now only update pg_class when
vacuumlazy.c had TIDs for nbtree to bulk delete). This arguably fixes an
oversight in deduplication-related bugfix commit 48e12913. [1]
Author: Peter Geoghegan <pg(at)bowt(dot)ie> Reviewed-By: Masahiko Sawada
<sawada(dot)mshk(at)gmail(dot)com> Discussion:
Backpatch: 13-, where autovacuum_vacuum_insert_threshold was added.

- VACUUM ANALYZE: Always update pg_class.reltuples. vacuumlazy.c sometimes fails
to update pg_class entries for each index (to ensure that pg_class.reltuples
is current), even though analyze.c assumed that that must have happened during
VACUUM ANALYZE. There are at least a couple of reasons for this. For
example, vacuumlazy.c could fail to update pg_class when the index AM
indicated that its statistics are merely an estimate, per the contract for
amvacuumcleanup() routines established by commit e57345975cf back in 2006.
Stop assuming that pg_class must have been updated with accurate statistics
within VACUUM ANALYZE -- update pg_class for indexes at the same time as the
table relation in all cases. That way VACUUM ANALYZE will never fail to keep
pg_class.reltuples reasonably accurate. The only downside of this approach
(compared to the old approach) is that it might inaccurately set
pg_class.reltuples for indexes whose heap relation ends up with the same
inaccurate value anyway. This doesn't seem too bad. We already consistently
called vac_update_relstats() (to update pg_class) for the heap/table relation
twice during any VACUUM ANALYZE -- once in vacuumlazy.c, and once in
analyze.c. We now make sure that we call vac_update_relstats() at least once
(though often twice) for each index. This is follow up work to commit
9f3665fb, which dealt with issues in btvacuumcleanup(). Technically this
fixes an unrelated issue, though. btvacuumcleanup() no longer provides an
accurate num_index_tuples value following commit 9f3665fb (when there was no
btbulkdelete() call during the VACUUM operation in question), but
hashvacuumcleanup() has worked in the same way for many years now. Author:
Peter Geoghegan <pg(at)bowt(dot)ie> Reviewed-By: Masahiko Sawada
<sawada(dot)mshk(at)gmail(dot)com> Discussion:
Backpatch: 13-, just like commit 9f3665fb.
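
After this change, a check like the following (hypothetical table name)
should show the table's and its indexes' reltuples kept reasonably in
step:

```sql
VACUUM ANALYZE my_table;

SELECT c.relname, c.relkind, c.reltuples
FROM pg_class c
WHERE c.oid = 'my_table'::regclass
   OR c.oid IN (SELECT indexrelid
                FROM pg_index
                WHERE indrelid = 'my_table'::regclass);
```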

- Doc: B-Tree only has one additional parameter. Oversight in commit 9f3665fb.
Backpatch: 13-, just like commit 9f3665fb.

- Add back vacuum_cleanup_index_scale_factor parameter. Commit 9f3665fb removed
the vacuum_cleanup_index_scale_factor storage parameter. However, that
creates dump/reload hazards when moving across major versions. Add back the
vacuum_cleanup_index_scale_factor parameter (though not the GUC of the same
name) purely to avoid problems when using tools like pg_upgrade. The
parameter remains disabled and undocumented. No backpatch to Postgres 13,
since vacuum_cleanup_index_scale_factor was only disabled by REL_13_STABLE's
version of master branch commit 9f3665fb in the first place -- the parameter
already looks like this on REL_13_STABLE. Discussion:

- Save a few cycles during nbtree VACUUM. Avoid calling
RelationGetNumberOfBlocks() unnecessarily in the common case where there are
no deleted but not yet recycled pages to recycle during a cleanup-only nbtree
VACUUM operation. Follow-up to commit e5d8a999, which (among other things)
taught the "skip full scan" nbtree VACUUM mechanism to only trigger a full
index scan when the absolute number of deleted pages in the index is
considered excessive.

- Consolidate nbtree VACUUM metapage routines. Simplify
`_bt_vacuum_needs_cleanup()` function's signature (it only needs a single 'rel'
argument now), and move it next to its sibling function in nbtpage.c. I
believe that `_bt_vacuum_needs_cleanup()` was originally located in nbtree.c due
to an include dependency issue. That's no longer an issue. Follow-up to
commit 9f3665fb.

Robert Haas pushed:

- Refactor and generalize the ParallelSlot machinery. Create a wrapper object,
ParallelSlotArray, to encapsulate the number of slots and the slot array
itself, plus some other relevant bits of information. This reduces the number
of parameters we have to pass around all over the place. Allow for a
ParallelSlotArray to contain slots connected to different databases within a
single cluster. The current clients of this mechanism don't need this, but it
is expected to be used by future patches. Defer connecting to databases until
we actually need the connection for something. This is a slight behavior
change for vacuumdb and reindexdb. If you specify a number of jobs that is
larger than the number of objects, the extra connections will now not be used.
But, on the other hand, if you specify a number of jobs that is so large that
it's going to fail, the failure would previously have happened before any
operations were actually started, and now it won't. Mark Dilger, reviewed by
me. Discussion:

- Be clear about whether a recovery pause has taken effect. Previously, the code
  and documentation seem to have essentially assumed that a call to
  pg_wal_replay_pause() would take effect immediately, but that's not the case,
  because we only check for a pause in certain places. This means that a tool
  that uses this function and then wants to do something else afterward that is
  dependent on the pause having taken effect doesn't know how long it needs to
  wait to be sure that no more WAL is going to be replayed. To avoid that, add
  a new function pg_get_wal_replay_pause_state() which returns either 'not
  paused', 'pause requested', or 'paused'. After calling pg_wal_replay_pause()
  the status will immediately change from 'not paused' to 'pause requested';
  when the startup process has noticed this, the status will change to
  'paused'. For backward compatibility, pg_is_wal_replay_paused() still exists
  and returns the same thing as before: true if a pause has been requested,
  whether or not it has taken effect yet; and false if not. The documentation
  is updated to clarify. To improve the chances that a pause request is quickly
  confirmed effective, adjust things so that WaitForWALToBecomeAvailable will
  swiftly reach a call to recoveryPausesHere() when a pause request is made.
  Dilip Kumar, reviewed by Simon Riggs, Kyotaro Horiguchi, Yugo Nagata,
  Masahiko Sawada, and Bharath Rupireddy. Discussion:
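A minimal sketch of how a tool might use the new function on a standby (function names as described above; this must be polled, since the state change is asynchronous):

```sql
-- request a pause, then poll until it has actually taken effect
SELECT pg_wal_replay_pause();
SELECT pg_get_wal_replay_pause_state();
-- returns 'pause requested' until the startup process notices the request,
-- then 'paused'; only then is it safe to assume no more WAL will be replayed
```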

- Add pg_amcheck, a CLI for contrib/amcheck. This makes it a lot easier to run
the corruption checks that are implemented by contrib/amcheck against lots of
relations and get the result in an easily understandable format. It has a wide
variety of options for choosing which relations to check and which checks to
perform, and it can run checks in parallel if you want. Mark Dilger, reviewed
by Peter Geoghegan and by me. Discussion:
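A usage sketch, assuming a running cluster (consult pg_amcheck --help for the authoritative option list):

```
# check every database in the cluster, four relations at a time
pg_amcheck --all --jobs=4

# check one table and its indexes more thoroughly in database mydb
pg_amcheck --table=public.orders --heapallindexed mydb
```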

- Try to fix compiler warnings. Per report from Peter Geoghegan. Discussion:

- Adjust perl style. Per buildfarm member crake.

- Move PG_USED_FOR_ASSERTS_ONLY before initializer. Erik Rijkers reported a
compile failure, and I think this is probably the reason.

- Fix portability issues in pg_amcheck's tests. Test #12 overwrote a 1-byte
  varlena header to make it look like the initial byte of a 4-byte varlena
  header, but the results were endian-dependent. Also, the bytes "abc" that
  followed the overwritten byte would be interpreted differently
depending on endian-ness. Overwrite 4 bytes instead, in an endian-aware
manner. Test #13 accidentally managed to depend on TOAST_MAX_CHUNK_SIZE,
which varies slightly depending on MAXIMUM_ALIGNOF. That's not the point
anyway, so make the regexp insensitive to the expected number of chunks. Mark
Dilger Discussion:

- Try to avoid apparent platform-dependency in IPC::Run. It's hard to believe,
but buildfarm results from the new pg_amcheck suggest that
command_checks_all() performs shell expansion on some machines but not others,
apparently due to an underlying behavior difference in IPC::Run. Let's see if
we can work around that - and confirm that it is the real problem - by passing
'-S*' as a single argument rather than '-S' and '*' as two separate ones.
Failures were observed on jacana and hoverfly. Mark Dilger Discussion:

- pg_amcheck: Try to fix still more test failures. Avoid use of non-portable
option ordering in command_checks_all(). The use of bare command line
arguments before switches doesn't work everywhere. Per buildfarm members
drongo and hoverfly. Avoid testing for the message "role \"%s\" does not
exist", because some buildfarm machines report a different error. fairywren
complains about "SSPI authentication failed for user \"%s\"", for example.
Mark Dilger Discussion:

- pg_amcheck: Keep trying to fix the tests. Commit
24189277f6ff3169b15c7bc82926a372ca7f2dbf managed to remove one of the two
places where we were checking for a "no such user" error while leaving the
other one right next to it. So remove that too. In fact, remove the entire
test, because the whole point of this test was to see which message we got on
a failure.

- In pg_amcheck tests, don't depend on perl's Q/q pack code. It does not work on
all versions of perl across all platforms. To avoid endian-ness issues, pick
a new value for column a that has the same upper 4 bytes as lower 4 bytes. Try
to make it something that isn't likely to occur anywhere nearby in the page.

# Pending Patches

Etsuro Fujita sent in another revision of a patch to implement asynchronous
append on postgres_fdw nodes.

Kyotaro HORIGUCHI sent in a patch to run with
wal_level=minimal, and make sure published XIDs are persistent.

Ibrar Ahmed sent in another revision of a patch to pg_rewind which fixes the way
the TLI is determined when the server has just been promoted by looking at the
minRecoveryPointTLI in the control file in addition to the ThisTimeLineID on the

Ibrar Ahmed sent in another revision of a patch to surface popcount to SQL.

Kota Miyake sent in another revision of a patch to fix the way pgbench's \sleep
command works by requiring that it take an integer argument.

Michael Banck sent in another revision of a patch to add a new PGC_ADMINSET GUC
context and pg_change_role_settings default role.

Andrey Borodin sent in another revision of a patch to add sort support for
GiST index builds for gist_btree types.

Pavel Stěhule sent in a patch to add operators ? and ->> for type record, and
functions record_keys and record_each_text.

Ajin Cherian sent in a patch to make the stream_prepare_cb optional, which lets
plugins disallow enabling streaming and two_phase at the same time in logical
replication.

Ibrar Ahmed sent in another revision of a patch to evaluate expressions at
planning time for two more cases.

Joel Jacobson sent in four more revisions of a patch to add views pg_permissions
and pg_ownerships, which greatly simplify the queries establishing those things.

Ibrar Ahmed sent in another revision of a patch to implement system-versioned
temporal tables.

Tomáš Vondra sent in two more revisions of a patch to implement BRIN multi-range

Tomáš Vondra and Stephen Frost traded patches to improve the logging of
auto-vacuum and auto-analyze by including the I/O timing if track_io_timing is
enabled. Also, for auto-analyze, add the read rate and the dirty rate, similar
to how that information has historically been logged for auto-vacuum.

Masahiro Ikeda and Fujii Masao traded patches to send stats for both walwriter
and checkpointer when shutting down.

Dilip Kumar and Justin Pryzby traded patches to implement custom compression
methods for tables.

Bharath Rupireddy sent in another revision of a patch to implement EXPLAIN

Greg Nancarrow and Hou Zhijie traded patches to implement parallel INSERT (INTO
... SELECT ...).

Ibrar Ahmed and Kazutaka Onishi traded patches to implement TRUNCATE on foreign

Peter Smith and Ajin Cherian traded patches to implement logical decoding of
two-phase transactions.

Tatsuo Ishii sent in two revisions of a patch to make it possible to use COPY
FREEZE in pgbench.

Peter Smith sent in a patch to give the tablesync worker an opportunity to exit
immediately when it has already caught up, without first needing to process a
message to discover that.

Yugo Nagata sent in another revision of a patch to implement incremental
materialized views.

Arseny Sher sent in a patch to increase the vm cache used for index-only scans.

Peter Eisentraut sent in another revision of a patch to add support for writing
CREATE FUNCTION and CREATE PROCEDURE statements for language SQL with a function
body that conforms to the SQL standard and is portable to other implementations.

Heikki Linnakangas sent in another revision of a patch to move a few
ResourceOwnerEnlarge() calls for safety and clarity, make resowners more easily
extensible, and use a 64-bit murmur hash if available to make this go faster.

Tomáš Vondra sent in another revision of a patch to control the removal of
temporary files after a crash with a new GUC, remove_temp_files_after_crash.

Jacob Champion sent in two more revisions of a patch to store the client's DN in
port->peer_dn, and use this to log the authenticated identity from all auth

Joel Jacobson sent in another revision of a patch to implement regexp_positions.

Justin Pryzby sent in a patch to make pg_dump use a cursor in getBlobs to
mitigate huge memory use in the case of millions of large objects.

Tomáš Vondra sent in another revision of a patch to make GROUP BY more

Erica Zhang sent in two revisions of a patch to add some tests for
pg_stat_statements compatibility verification under contrib.

Matthias van de Meent sent in three revisions of a patch to truncate a page's
line pointer array when it has trailing unused ItemIds. This will allow reuse
of what is effectively free space for data as well as new line pointers, instead
of keeping it reserved for line pointers only. An additional benefit is that
the HasFreeLinePointers hint-bit optimization now doesn't hint for free line
pointers at the end of the array, slightly increasing the specificity of where
the free lines are; and saving us from needing to search to the end of the array
if all other entries are already filled.

Magnus Hagander sent in another revision of a patch to add PROXY protocol

Floris Van Nee sent in another revision of a patch to allow inserting tuples
into almost-empty pages.

Marcus Wanner sent in a patch to add an xid argument to the filter_prepare
callback for the output plugin.

Kyotaro HORIGUCHI and Fujii Masao traded patches to change the temporary storage
used by the stats collector from files to shared memory.

Amul Sul and Ibrar Ahmed traded patches to put in some machinery to make it
possible to implement ALTER SYSTEM READ {ONLY|WRITE}.

Masahiko Sawada sent in another revision of a patch to make autovacuum logs a
little more informative.

Bharath Rupireddy sent in another revision of a patch to add new table AMs for
multi- and single inserts, and use same for CTAS, REFRESH MATERIALIZED VIEW, and

Yuzuko Hosoya sent in another revision of a patch to release SPI plans for
referential integrity with DISCARD ALL.

John Naylor sent in a patch to get rid of the `<foreignphrase>` tags in the

John Naylor sent in a patch to make popcount xor try indirection at the buffer

Masahiko Sawada sent in a patch to remove an unneeded bsearch from vacuumlazy.c

Craig Ringer sent in another revision of a patch to pass the target `LWLock*` and
tranche ID to LWLock tracepoints, add to the tracepoints in LWLock routines, and
add SDT tracepoints for backend type and postmaster pid on startup.

Bharath Rupireddy sent in two revisions of a patch to make the error messages in
check_publication_add_relation a bit more informative and consistent.

Iwata Aya, Álvaro Herrera, and Tom Lane traded patches to add tracing capability
to libpq.

David Rowley sent in two more revisions of a patch to cache PathTarget and
RestrictInfo's volatility, allow estimate_num_groups() to pass back further
details about the estimation, allow users of simplehash.h to perform direct
deletions, add a Result Cache executor node, and remove code duplication in

Nathan Bossart and Laurenz Albe traded patches to document ALTER ROLE ... SET

Mark Dilger and Robert Haas traded patches to add a frontend utility program,

Peter Geoghegan sent in another revision of a patch to recycle pages deleted
during the same VACUUM.

Thomas Munro sent in another revision of a patch to track relation sizes in
shared memory, provide a lock-free fast path for smgrnblocks(), and update FIFO
to LRU to sweep a valid cache.

Thomas Munro sent in a patch to make ProcSendSignal() more efficient by using
pg_procno instead of scanning the ProcArray and keeping track of the startup

Hou Zhijie sent in another revision of a patch to avoid CommandCounterIncrement
in RI triggers when performing an INSERT into the referencing table.

Álvaro Herrera sent in another revision of a patch to implement ALTER TABLE ...

Álvaro Herrera sent in two more revisions of a patch to implement
batch/pipelining support for libpq.

Fabien COELHO and Dean Rasheed traded patches to add a pseudo-random permutation
function to pgbench.
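The underlying idea is a seeded bijection over [0, size). As an illustration only (a cycle-walking Feistel network; the pgbench patch itself uses a different construction), such a permutation can be sketched as:

```python
import hashlib

def permute(i, size, seed):
    """Map i to a unique value in [0, size): a seeded pseudo-random bijection.

    Illustrative sketch only (cycle-walking Feistel network); not the
    algorithm the pgbench patch actually implements.
    """
    assert 0 <= i < size
    bits = max(4, (size - 1).bit_length())
    half = (bits + 1) // 2            # width of each Feistel half
    mask = (1 << half) - 1

    def round_fn(rnd, x):
        # deterministic round function derived from the seed
        h = hashlib.sha256(f"{seed}:{rnd}:{x}".encode()).digest()
        return int.from_bytes(h[:8], "big") & mask

    # cycle-walk: re-apply the permutation until the result lands in [0, size)
    while True:
        l, r = i >> half, i & mask
        for rnd in range(4):
            l, r = r, l ^ round_fn(rnd, r)
        i = (l << half) | r
        if i < size:
            return i
```

Because the Feistel rounds are bijective over the whole power-of-two domain and cycle-walking restricts that bijection to [0, size), every input maps to a distinct output, which is the property pgbench needs for shuffling key distributions.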

Bruce Momjian sent in another revision of a patch to add key management.

Takayuki Tsunakawa sent in another revision of a patch to speed up COPY FROM
into tables with foreign partitions.

Hao Wu sent in a patch intended to fix a bug where HotStandbyActive() returns
true after the node is promoted.

Takayuki Tsunakawa sent in two revisions of a patch to fix an overflow when
counting the number of buffers to invalidate.

Andrey Borodin sent in another revision of a patch to make all SLRU buffer sizes

Tom Lane sent in two revisions of a patch to clean up the documentation of
variable usage in PL/pgSQL.

Thomas Munro sent in another revision of a patch to run the checkpointer and
bgworker in crash recovery, log buffer stats after crash recovery, and make it
possible not to wait for the end-of-recovery checkpoint.

Justin Pryzby sent in two more revisions of a patch to allow alternate
compression methods for wal_compression.

Mark Rofail and Justin Pryzby traded patches to implement foreign key arrays.

Pavel Stěhule sent in another revision of a patch to implement schema variables.

Michaël Paquier and Soumyadeep Chakraborty traded patches to fix an infelicity
between PITR and 2PC.

Pavel Stěhule sent in two more revisions of a patch to add a #routine_label
pragma to PL/pgSQL.

Thomas Munro sent in a patch to add bsearch and unique templates to
sort_template.h, supply sort/search specializations for some common scalar
types, use qsort_oid() and friends in obvious places, supply specialized
sort/search routines for ItemPtrData, use qsort_itemptr() and friends in various
places, specialize the HeapTuple sort routine for ANALYZE by using an "encoded"
format based on 48-bit integers that can be compared by subtraction, avoiding a
branch, specialize the pagetable sort routines in tidbitmap.c, specialize some
sort/search routines in nbtree code, and specialize the sort routine used by

Julien Rouhaud sent in another revision of a patch to move pg_stat_statements
query jumbling to core, expose queryid in pg_stat_activity and log_line_prefix,
and expose query identifier in verbose explain.

Dmitry Dolgov sent in another revision of a patch to introduce skip scans.

Julien Rouhaud sent in another revision of a patch to add a new OUTDATED
filtering facility for REINDEX command, and add a corresponding --outdated
option to reindexdb.

Alexander Lakhin sent in a patch to use microsoft_native_stat in pgwin32_open.

Thomas Munro sent in two more revisions of a patch to optionally use syncfs()
for SyncDataDirectory() on Linux.
