PostgreSQL Weekly News - March 21, 2021

Posted on 2021-03-22 by PWN

Person of the week:

PostgreSQL Product News

pspg 4.4.0, a pager designed for PostgreSQL, released.

pg_activity 2.1.2, a top-like application for PostgreSQL server activity monitoring, released.

PostgreSQL Jobs for March

PostgreSQL in the News

Planet PostgreSQL:

PostgreSQL Weekly News is brought to you this week by David Fetter

Submit news and announcements by Sunday at 3:00pm PST8PDT to

Applied Patches

Peter Geoghegan pushed:

Fujii Masao pushed:

  • Make archiver process an auxiliary process. This commit changes the WAL archiver process so that it's treated as an auxiliary process and can use shared memory. This is an infrastructure patch required for the upcoming shared-memory based stats collector patch series, which basically needs every process that reports statistics, including the archiver, to have access to shared memory. Since this patch is useful on its own for simplifying the code and for monitoring the status of the archiver, it's committed separately in advance. This commit simplifies the code for WAL archiving. For example, previously backends needed to signal the archiver via the postmaster when notifying it that there were WAL files to archive. This commit removes that signal to the postmaster and enables backends to notify the archiver directly using a shared latch. Also, as a side effect of this change, information about the archiver process becomes viewable in the pg_stat_activity view. Author: Kyotaro Horiguchi Reviewed-by: Andres Freund, Álvaro Herrera, Julien Rouhaud, Tomas Vondra, Arthur Zakirov, Fujii Masao Discussion:

  • Fix comments in postmaster.c. Commit 86c23a6eb2 changed the option that specifies that postgres will stop all other server processes by sending the signal SIGSTOP from -s to -T. But some comments still incorrectly explained that the SIGSTOP behavior was set by the -s option. This commit fixes them. Author: Kyotaro Horiguchi Reviewed-by: Fujii Masao Discussion:

Thomas Munro pushed:

Tom Lane pushed:

  • Work around issues in MinGW-64's setjmp/longjmp support. It's hard to avoid the conclusion that there is something wrong with setjmp/longjmp on MinGW-64, as we have seen failures come and go after entirely-unrelated-looking changes in our own code. Other projects such as Ruby have given up and started using gcc's setjmp/longjmp builtins on that platform; this patch just follows that lead. Note that this is a pretty fundamental ABI break for functions containing either setjmp or longjmp, so we can't really consider a back-patch. Per reports from Regina Obe and Heath Lord, as well as recent failures on buildfarm member walleye, and less-recent failures on fairywren. Juan José Santamaría Flecha Discussion:

  • Improve logging of bad parameter values in BIND messages. Since commit ba79cb5dc, values of bind parameters have been logged during errors in extended query mode. However, we only did that after we'd collected and converted all the parameter values, thus failing to offer any useful localization of invalid-parameter problems. Add a separate callback that's used during parameter collection, and have it print the parameter number, along with the input string if text input format is used. Justin Pryzby and Tom Lane Discussion: Discussion:

  • Avoid corner-case memory leak in SSL parameter processing. After reading the root cert list from the ssl_ca_file, immediately install it as client CA list of the new SSL context. That gives the SSL context ownership of the list, so that SSL_CTX_free will free it. This avoids a permanent memory leak if we fail further down in be_tls_init(), which could happen if bogus CRL data is offered. The leak could only amount to something if the CRL parameters get broken after server start (else we'd just quit) and then the server is SIGHUP'd many times without fixing the CRL data. That's rather unlikely perhaps, but it seems worth fixing, if only because the code is clearer this way. While we're here, add some comments about the memory management aspects of this logic. Noted by Jelte Fennema and independently by Andres Freund. Back-patch to v10; before commit de41869b6 it doesn't matter, since we'd not re-execute this code during SIGHUP. Discussion:

  • Doc: improve discussion of variable substitution in PL/pgSQL. This was a bit disjointed, partly because of a not-well-considered decision to document SQL commands that don't return result rows as though they had nothing in common with commands that do. Rearrange so that we have one discussion of variable substitution that clearly applies to all types of SQL commands, and then handle the question of processing command output separately. Clarify that EXPLAIN, CREATE TABLE AS SELECT, and similar commands that incorporate an optimizable statement will act like optimizable statements for the purposes of variable substitution. Do a bunch of minor wordsmithing in the same area. David Johnston and Tom Lane, reviewed by Pavel Stehule and David Steele Discussion:
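
    A minimal PL/pgSQL sketch of the substitution rule being documented (the table and column names here are invented for illustration): the variable is replaced by a query parameter in any embedded SQL command, whether or not the command returns rows.

    ```sql
    CREATE FUNCTION prune_log(threshold timestamptz) RETURNS bigint
    LANGUAGE plpgsql AS $$
    DECLARE
        n bigint;
    BEGIN
        -- "threshold" is substituted as a parameter here, exactly as it
        -- would be in a SELECT:
        DELETE FROM event_log WHERE logged_at < threshold;
        GET DIAGNOSTICS n = ROW_COUNT;
        RETURN n;
    END
    $$;
    ```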

  • Add end-to-end testing of pg_basebackup's tar-format output. The existing test script does run pg_basebackup with the -Ft option, but it makes no real attempt to verify the sanity of the results. We wouldn't know if the output is incompatible with standard "tar" programs, nor if the server fails to start from the restored output. Notably, this means that xlog.c's read_tablespace_map() is not being meaningfully tested, since that code is used only in the tar-format case. (We do have reasonable coverage of restoring from plain-format output, though it's over in src/test/recovery not here.) Hence, attempt to untar the output and start a server from it, rather than just hoping it's OK. This test assumes that the local "tar" has the "-C directory" switch. Although that's not promised by POSIX, my research suggests that all non-extinct tar implementations have it. Should the buildfarm's opinion differ, we can complicate the test a bit to avoid requiring that. Possibly this should be back-patched, but I'm unsure about whether it could work on Windows before d66b23b03.

  • Prevent buffer overrun in read_tablespace_map(). Robert Foggia of Trustwave reported that read_tablespace_map() fails to prevent an overrun of its on-stack input buffer. Since the tablespace map file is presumed trustworthy, this does not seem like an interesting security vulnerability, but still we should fix it just in the name of robustness. While here, document that pg_basebackup's --tablespace-mapping option doesn't work with tar-format output, because it doesn't. To make it work, we'd have to modify the tablespace_map file within the tarball sent by the server, which might be possible but I'm not volunteering. (Less-painful solutions would require changing the basebackup protocol so that the source server could adjust the map. That's not very appetizing either.)

  • Code review for server's handling of "tablespace map" files. While looking at Robert Foggia's report, I noticed a passel of other issues in the same area:

    * The scheme for backslash-quoting newlines in pathnames is just wrong; it will misbehave if the last ordinary character in a pathname is a backslash. I'm not sure why we're bothering to allow newlines in tablespace paths, but if we're going to do it we should do it without introducing other problems. Hence, backslashes themselves have to be backslashed too.
    * The author hadn't read the sscanf man page very carefully, because this code would drop any leading whitespace from the path. (I doubt that a tablespace path with leading whitespace could happen in practice; but if we're bothering to allow newlines in the path, it sure seems like leading whitespace is a little less implausible.) Using sscanf for the task of finding the first space is overkill anyway.
    * While I'm not 100% sure what the rationale for escaping both \r and \n is, if the idea is to allow Windows newlines in the file then this code failed, because it'd throw an error if it saw \r followed by \n.
    * There's no cross-check for an incomplete final line in the map file, which would be a likely apparent symptom of the improper-escaping bug.

    On the generation end, aside from the escaping issue we have:

    * If needtblspcmapfile is true then do_pg_start_backup will pass back escaped strings in tablespaceinfo->path values, which no caller wants or is prepared to deal with. I'm not sure if there's a live bug from that, but it looks like there might be (given the dubious assumption that anyone actually has newlines in their tablespace paths).
    * It's not being very paranoid about the possibility of random stuff in the pg_tblspc directory. IMO we should ignore anything without an OID-like name.

    The escaping rule change doesn't seem back-patchable: it'll require doubling of backslashes in the tablespace_map file, which is basically a basebackup format change. The odds of that causing trouble are considerably more than the odds of the existing bug causing trouble. The rest of this seems somewhat unlikely to cause problems too, so no back-patch.

  • Doc: remove duplicated step in RLS example. Seems to have been a copy-and-paste mistake in 093129c9d. Per report from Discussion:

  • Fix misuse of foreach_delete_current(). Our coding convention requires this macro's result to be assigned back to the original List variable. In this usage, since the List could not become empty, there was no actual bug --- but some compilers warned about it. Oversight in be45be9c3. Discussion:

  • Don't leak rd_statlist when a relcache entry is dropped. Although these lists are usually NIL, and even when not empty are unlikely to be large, constant relcache update traffic could eventually result in visible bloat of CacheMemoryContext. Found via valgrind testing. Back-patch to v10 where this field was added. Discussion:

  • Don't leak malloc'd error string in libpqrcv_check_conninfo(). We leaked the error report from PQconninfoParse, when there was one. It seems unlikely that real usage patterns would repeat the failure often enough to create serious bloat, but let's back-patch anyway to keep the code similar in all branches. Found via valgrind testing. Back-patch to v10 where this code was added. Discussion:

  • Don't leak compiled regex(es) when an ispell cache entry is dropped. The text search cache mechanisms assume that we can clean up an invalidated dictionary cache entry simply by resetting the associated long-lived memory context. However, that does not work for ispell affixes that make use of regular expressions, because the regex library deals in plain old malloc. Hence, we leaked compiled regex(es) any time we dropped such a cache entry. That could quickly add up, since even a fairly trivial regex can use up tens of kB, and a large one can eat megabytes. Add a memory context callback to ensure that a regex gets freed when its owning cache entry is cleared. Found via valgrind testing. This problem is ancient, so back-patch to all supported branches. Discussion:

  • Don't run RelationInitTableAccessMethod in a long-lived context. Some code paths in this function perform syscache lookups, which can lead to table accesses and possibly leakage of cruft into the caller's context. If said context is CacheMemoryContext, we eventually will have visible bloat. But fixing this is no harder than moving one memory context switch step. (The other callers don't have a problem.) Andres Freund and I independently found this via valgrind testing. Back-patch to v12 where this code was added. Discussion: Discussion:

  • Don't leak malloc'd strings when a GUC setting is rejected. Because guc.c prefers to keep all its string values in malloc'd not palloc'd storage, it has to be more careful than usual to avoid leaks. Error exits out of string GUC hook checks failed to clear the proposed value string, and error exits out of ProcessGUCArray() failed to clear the malloc'd results of ParseLongOption(). Found via valgrind testing. This problem is ancient, so back-patch to all supported branches. Discussion:

  • Blindly try to fix test script's tar invocation for MSYS. Buildfarm member fairywren doesn't like the test case I added in commit 081876d75. I'm guessing the reason is that I shouldn't be using a perl2host-ified path in the tar command line.

  • Avoid leaking memory in RestoreGUCState(), and improve comments. RestoreGUCState applied InitializeOneGUCOption to already-live GUC entries, causing any malloc'd subsidiary data to be forgotten. We do want the effect of resetting the GUC to its compiled-in default, and InitializeOneGUCOption seems like the best way to do that, so add code to free any existing subsidiary data beforehand. The interaction between can_skip_gucvar, SerializeGUCState, and RestoreGUCState is way more subtle than their opaque comments would suggest to an unwary reader. Rewrite and enlarge the comments to try to make it clearer what's happening. Remove a long-obsolete assertion in read_nondefault_variables: the behavior of set_config_option hasn't depended on IsInitProcessingMode since f5d9698a8 installed a better way of controlling it. Although this is fixing a clear memory leak, the leak is quite unlikely to involve any large amount of data, and it can only happen once in the lifetime of a worker process. So it seems unnecessary to take any risk of back-patching. Discussion:

  • Fix memory leak when rejecting bogus DH parameters. While back-patching e0e569e1d, I noted that there were some other places where we ought to be applying DH_free(); namely, where we load some DH parameters from a file and then reject them as not being sufficiently secure. While it seems really unlikely that anybody would hit these code paths in production, let alone do so repeatedly, let's fix it for consistency. Back-patch to v10 where this code was introduced. Discussion:

  • Fix up pg_dump's handling of per-attribute compression options. The approach used in commit bbe0a81db would've been disastrous for portability of dumps. Instead handle non-default compression options in separate ALTER TABLE commands. This reduces chatter for the common case where most columns are compressed the same way, and it makes it possible to restore the dump to a server that lacks any knowledge of per-attribute compression options (so long as you're willing to ignore syntax errors from the ALTER TABLE commands). There's a whole lot left to do to mop up after bbe0a81db, but I'm fast-tracking this part because we need to see if it's enough to make the buildfarm's cross-version-upgrade tests happy. Justin Pryzby and Tom Lane Discussion:

  • Suppress various new compiler warnings. Compilers that don't understand that elog(ERROR) doesn't return issued warnings here. In the cases in libpq_pipeline.c, we were not exactly helping things by failing to mark pg_fatal() as noreturn. Per buildfarm.

Álvaro Herrera pushed:

Michaël Paquier pushed:

Stephen Frost pushed:

Robert Haas pushed:

  • Fix a confusing amcheck corruption message. Don't complain about the last TOAST chunk number being different from what we expected if there are no TOAST chunks at all. In such a case, saying that the final chunk number is 0 is not really accurate, and the fact that the value is missing from the TOAST table is reported separately anyway. Mark Dilger Discussion:

  • Allow configurable LZ4 TOAST compression. There is now a per-column COMPRESSION option which can be set to pglz (the default, and the only option up until now) or lz4. Or, if you like, you can set the new default_toast_compression GUC to lz4, and then that will be the default for new table columns for which no value is specified. We don't have lz4 support in the PostgreSQL code, so to use lz4 compression, PostgreSQL must be built --with-lz4. In general, TOAST compression means compression of individual column values, not the whole tuple, and those values can either be compressed inline within the tuple or compressed and then stored externally in the TOAST table, so those properties also apply to this feature. Prior to this commit, a TOAST pointer has two unused bits as part of the va_extsize field, and a compressed datum has two unused bits as part of the va_rawsize field. These bits are unused because the length of a varlena is limited to 1GB; we now use them to indicate the compression type that was used. This means we only have bit space for 2 more built-in compression types, but we could work around that problem, if necessary, by introducing a new vartag_external value for any further types we end up wanting to add. Hopefully, it won't be too important to offer a wide selection of algorithms here, since each one we add not only takes more coding but also adds a build dependency for every packager. Nevertheless, it seems worth doing at least this much, because LZ4 gets better compression than PGLZ with less CPU usage. It's possible for LZ4-compressed datums to leak into composite type values stored on disk, just as it is for PGLZ. It's also possible for LZ4-compressed attributes to be copied into a different table via SQL commands such as CREATE TABLE AS or INSERT .. SELECT. It would be expensive to force such values to be decompressed, so PostgreSQL has never done so.
For the same reasons, we also don't force recompression of already-compressed values even if the target table prefers a different compression method than was used for the source data. These architectural decisions are perhaps arguable but revisiting them is well beyond the scope of what seemed possible to do as part of this project. However, it's relatively cheap to recompress as part of VACUUM FULL or CLUSTER, so this commit adjusts those commands to do so, if the configured compression method of the table happens not to match what was used for some column value stored therein. Dilip Kumar. The original patches on which this work was based were written by Ildus Kurbangaliev, and those patches were based on even earlier work by Nikita Glukhov, but the design has since changed very substantially, since allowing a potentially large number of compression methods that could be added and dropped on a running system proved too problematic given some of the architectural issues mentioned above; the choice of which specific compression method to add first is now different; and a lot of the code has been heavily refactored. More recently, Justin Pryzby helped quite a bit with testing and reviewing and this version also includes some code contributions from him. Other design input and review from Tomas Vondra, Álvaro Herrera, Andres Freund, Oleg Bartunov, Alexander Korotkov, and me. Discussion: Discussion:
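
    An illustrative use of the new options might look like this (a sketch only; it assumes a server built --with-lz4, and the table and column names are invented for the example):

    ```sql
    -- Per-column setting:
    CREATE TABLE docs (
        id   int,
        body text COMPRESSION lz4
    );

    -- Or make lz4 the default for new columns that don't specify one:
    SET default_toast_compression = 'lz4';

    -- VACUUM FULL recompresses values whose compression method doesn't
    -- match the column's configured one:
    VACUUM FULL docs;
    ```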

  • Fix use-after-ReleaseSysCache problem in ATExecAlterColumnType. Introduced by commit bbe0a81db69bd10bd166907c3701492a29aca294. Per buildfarm member prion.

Amit Kapila pushed:

Peter Eisentraut pushed:

Andres Freund pushed:

Magnus Hagander pushed:

Tomáš Vondra pushed:

  • Remove temporary files after backend crash. After a crash of a backend using temporary files, the files used to be left behind, on the basis that they might be useful for debugging. But we don't have any reports of anyone actually doing that, and it means the disk usage may grow over time due to repeated backend failures (possibly even hitting ENOSPC). So this behavior is a bit unfortunate, and fixing it required either manual cleanup (deleting files, which is error-prone) or restart of the instance (i.e. service disruption). This implements automatic cleanup of temporary files, controlled by a new GUC remove_temp_files_after_crash. By default the files are removed, but it can be disabled to restore the old behavior if needed. Author: Euler Taveira Reviewed-by: Tomas Vondra, Michael Paquier, Anastasia Lubennikova, Thomas Munro Discussion:
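
    If the old keep-files-for-debugging behavior is wanted, the new GUC can presumably be turned off along these lines (a sketch; it assumes the setting is reloadable without a restart):

    ```sql
    ALTER SYSTEM SET remove_temp_files_after_crash = off;
    SELECT pg_reload_conf();
    ```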

  • Implement GROUP BY DISTINCT. With grouping sets, it's possible that some of the grouping sets are duplicates. This is especially common with CUBE and ROLLUP clauses. For example, GROUP BY CUBE (a,b), CUBE (b,c) is equivalent to GROUP BY GROUPING SETS ( (a, b, c), (a, b, c), (a, b, c), (a, b), (a, b), (a, b), (a), (a), (a), (c, a), (c, a), (c, a), (c), (b, c), (b), () ). Some of the grouping sets are calculated multiple times, which is mostly unnecessary. This commit implements the new GROUP BY DISTINCT feature, as defined in the SQL standard, which eliminates the duplicate sets. Author: Vik Fearing Reviewed-by: Erik Rijkers, Georgios Kokolatos, Tomas Vondra Discussion:
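
    Using the example from the commit message, the new syntax would be something like the following (a sketch; "t" is a hypothetical table):

    ```sql
    -- Without DISTINCT, the duplicate grouping sets are all computed;
    -- with it, each distinct set is computed only once:
    SELECT a, b, c, count(*)
    FROM t
    GROUP BY DISTINCT CUBE (a, b), CUBE (b, c);
    ```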

  • Fix TAP test for remove_temp_files_after_crash. The test included in cd91de0d17 had two simple flaws. Firstly, the number of rows was low, and on some platforms (e.g. 32-bit) the sort did not spill to disk, so on those machines the test was not exercising the automatic removal. The test was nevertheless failing there, because without any temporary files the base/pgsql_tmp directory was not even created. Fixed by increasing the rowcount to 5000, which should be high enough on any platform. Secondly, the test used a simple sleep to wait for the temporary file to be created. This is obviously problematic, because on slow machines (or with valgrind, CLOBBER_CACHE_ALWAYS etc.) it may take a while to create the temporary file, but we also want the tests to run reasonably fast. Fixed by instead relying on a UNIQUE constraint, blocking the query that created the temporary file. Author: Euler Taveira Reviewed-by: Tomas Vondra Discussion:

  • Fix race condition in remove_temp_files_after_crash TAP test. The TAP test was written so that it was not waiting for the correct SQL command, but for output from the preceding one. This resulted in race conditions, allowing the commands to run in a different order, not block as expected and so on. This fixes it by inverting the order of commands where possible, so that observing the output guarantees the data was inserted properly, and waiting for a lock to appear in pg_locks. Discussion:

  • Use lfirst_int in cmp_list_len_contents_asc. The function added in be45be9c33 is comparing integer lists (IntList) by length and contents, but there were two bugs. Firstly, it used intVal() to extract the value, but that's for Value nodes, not for extracting int values from IntList. Secondly, it called it directly on the ListCell, without doing lfirst(). So just do lfirst_int() instead. Interestingly enough, this did not cause any crashes on the buildfarm, but valgrind rightfully complained about it. Discussion:

  • Use valid compression method in brin_form_tuple. When compressing the BRIN summary, we can't simply use the compression method from the indexed attribute. The summary may use a different data type, e.g. a fixed-length attribute may have a varlena summary, leading to compression failures. For the built-in BRIN opclasses this happens to work, because the summary uses the same data type as the attribute. When the data types match, we can use the compression method specified for the attribute (it's copied into the index descriptor). Otherwise we don't have much choice and have to use the default one. Author: Tomas Vondra Reviewed-by: Justin Pryzby Discussion:

Andrew Dunstan pushed:

Pending Patches

Erica Zhang sent in another revision of a patch to add some tests for pg_stat_statements compatibility verification under contrib.

Jürgen Purtz sent in another revision of a patch to change the JOIN tutorial to focus more on explicit joins.

Noah Misch sent in a patch to merge similar algorithms into roles_is_member_of().

David Rowley sent in another revision of a patch to cache PathTarget and RestrictInfo's volatility, allow estimate_num_groups() to pass back further details about the estimation, allow the users of simplehash.h to perform direct deletions, add a Result Cache executor node, and remove code duplication in nodeResultCache.c.

Vigneshwaran C sent in a patch to fail prepared transaction if transaction has locked system tables/user catalog tables.

Laurenz Albe sent in another revision of a patch to improve \e, \ef and \ev if the editor is quit without saving by erasing the query buffer rather than executing the previous statement.

Nathan Bossart sent in another revision of a patch to avoid creating archive status ".ready" files too early, and keep track of notified-ready-for-archive position through crashes.

Mike Insoo sent in a patch which, when decoding a catalog-modifying commit record, checks whether that record is among the RUNNING_XACT xids processed at the restart_lsn, and if so, adds its xid and subxacts to the committed-transactions list in the logical decoding snapshot.

Justin Pryzby sent in three more revisions of a patch to make it possible to use different compression methods for FPI.

Jim Mlodgenski sent in another revision of a patch to add a parser hook and a test that uses it.

Julien Rouhaud sent in another revision of a patch to add a new OUTDATED filtering facility to the REINDEX command, with a corresponding option for the reindexdb utility. When used, REINDEX will only process indexes that have an outdated dependency. For now, only dependencies on collations are supported, but other kinds of dependency will likely be supported in the future.

Georgios Kokolatos and Michaël Paquier traded patches to make dbsize a bit more consistent.

Amit Kapila, Justin Pryzby, and Greg Nancarrow traded patches to make it possible to execute INSERT ... SELECT in parallel.

Bruce Momjian sent in two more revisions of a patch to implement key management.

Bharath Rupireddy sent in four more revisions of a patch to add a version of pg_terminate_backend() with wait and timeout.

Masahiro Ikeda sent in another revision of a patch to make the WAL receiver report WAL statistics.

Iwata Aya sent in four more revisions of a patch to implement a libpq debug log.

Amul Sul sent in two more revisions of a patch to add the infrastructure ALTER SYSTEM READ {ONLY | WRITE} will use.

Jacob Champion and Michaël Paquier traded patches to log authenticated identity from all auth backends.

Masahiro Ikeda sent in four revisions of a patch to make the stats collector shutdown without writing the statsfiles if the immediate shutdown is requested.

Mark Rofail and Andreas Karlsson traded patches to implement foreign key arrays.

Ajin Cherian, Peter Smith, and Takamichi Osumi traded patches to implement logical decoding of two-phase transactions.

Álvaro Herrera sent in three more revisions of a patch to implement ALTER TABLE .. DETACH PARTITION CONCURRENTLY.

Amit Langote sent in another revision of a patch to allow batching of inserts during cross-partition updates.

Ivan Panchenko sent in another revision of a patch to add a login trigger.

Peter Eisentraut sent in another revision of a patch to make it possible to get dynamic result sets from procedures. Declaring a cursor WITH RETURN in a procedure makes the cursor's data be returned as a result of the CALL invocation. The procedure needs to be declared with the DYNAMIC RESULT SETS attribute.
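
One plausible reading of that description, purely as a sketch of the proposed (uncommitted) syntax rather than anything released: both the DYNAMIC RESULT SETS attribute and CURSOR WITH RETURN come from the patch description, while the procedure body is invented for illustration.

```sql
CREATE PROCEDURE report()
LANGUAGE plpgsql
DYNAMIC RESULT SETS 1
AS $$
DECLARE
    c CURSOR WITH RETURN FOR SELECT * FROM pg_am;
BEGIN
    OPEN c;
END
$$;

CALL report();  -- the cursor's data would come back as a result set
```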

Amit Kapila sent in a doc patch to add a description of substream in pg_subscription.

Amit Kapila sent in a patch to prevent cancel/die interrupts while creating tablesync slot.

Bharath Rupireddy sent in another revision of a patch to refactor the code that implements REFRESH MATERIALIZED VIEW for comprehensibility.

Ranier Vilela sent in a patch to cover a corner case in renaming files on Windows.

Hou Zhijie sent in another revision of a patch to extend the safety check to support parallel insert into fk relations, and skip the CommandCounterIncrement.

Fujii Masao sent in two more revisions of a patch to improve pgbench's \sleep command.

Andres Freund and Kyotaro HORIGUCHI traded patches to make the stats collector use shared memory instead of files for temporary storage.

Masahiko Sawada sent in another revision of a patch to fix transactions involving multiple postgres foreign servers.

Bharath Rupireddy sent in another revision of a patch to improve the "PID XXXX is not a PostgreSQL server process" message. Instead of "is not a server process," say "is not a backend process." This makes it clear that the given PID is not a backend process, but could instead be some other process's PID, another postmaster child process's PID, or an invalid PID.

Zeng Wenjing sent in another revision of a patch to implement global temporary tables.

Tomáš Vondra sent in another revision of a patch to implement index skip scans.

Atsushi Torikoshi sent in another revision of a patch to add memorycontext elog print.

John Naylor sent in two more revisions of a patch to fix a bug in heap space management that was overly cautious about fillfactor. Previously, if there were leftover unused line pointers in a page, a table with a low fillfactor could not use that page for new tuples large enough to exceed fillfactor, even though the page contained no tuples at all. This led to unnecessary relation extension. Fix by allowing some slack space equal to 1/8 the maximum space that could be taken up by line pointers. This is somewhat arbitrary, but should allow many cases to succeed where they didn't before. Per report from Floris van Nee.

Pavel Stěhule sent in another revision of a patch to add routine labels to PL/pgsql.

Michaël Paquier sent in another revision of a patch to fix an infelicity between PITR and 2PC.

Bertrand Drouvot sent in another revision of a patch to implement minimal logical decoding on standbys.

Amit Langote sent in a patch to tweak handling of serialization failures during cascaded RI update/delete.

Peter Eisentraut sent in another revision of a patch to have libpq set the TLS extension "Server Name Indication" (SNI) by default for SSL connections. This allows an SNI-aware SSL proxy to route connections. (This requires a proxy that is aware of the PostgreSQL protocol, not just any SSL proxy.) In the future, this could also allow the server to use different SSL certificates for different host specifications. (That would require new server functionality; this would be the client-side functionality for it.) Since SNI makes the host name appear in cleartext in the network traffic, this might be undesirable in some cases, so the patch also adds a libpq connection option "sslsni" to turn it off.

Amit Langote sent in another revision of a patch to mark fully grouped relations partitioned if input relation is, and allow setting parallel_workers on partitioned tables.

Ibrar Ahmed sent in another revision of a patch to make it possible for INSERT SELECT to use BulkInsertState and multi_insert.

John Naylor sent in another revision of a patch to add a 'noError' argument to encoding conversion functions, and use same to do COPY FROM encoding conversion/verification in larger chunks.

Thomas Munro sent in another revision of a patch to use prefetching for WALs.

Peter Geoghegan sent in a patch to add a way to ignore pgindent runs and similar when using git blame.

Peter Eisentraut sent in another revision of a patch to add a result_format_auto_binary_types setting.

Fujii Masao sent in another revision of a patch to change the type of WalReceiverWaitStart wait event from Client to IPC.

Marina Polyakova sent in a patch to reduce the time required to recover database from archive by allowing concurrent loading of WAL files.

Peter Eisentraut sent in another revision of a patch to implement SQL-standard function bodies.

Andrei Zubkov sent in a patch to correct deallocation events accounting in pg_stat_statements.

James Coleman sent in another revision of a patch to improve the standby connection-denied error message. Currently, when a standby has finished starting up but hot_standby is configured to off, the error message a connecting client gets is "the database system is starting up", which is needlessly confusing (and not really all that accurate either). Instead, send a helpful error message so that the user immediately knows that their server is configured to deny these connections.

John Naylor sent in another revision of a patch to add a 'noError' argument to encoding conversion functions, then use same to replace pg_utf8_verifystr() with two faster implementations: one using SSE-4.1 instructions on Intel, the other using a bespoke fallback function, rather than one that relies on pg_utf8_verifychar() and pg_utf8_isvalid(). This one is loosely based on the fallback that is part of the simdjson library.

Peter Eisentraut sent in another revision of a patch to change the return type of EXTRACT to numeric.
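
The practical effect, sketched: extract() currently returns double precision, which can silently round; with the patch, the numeric result would keep full precision for values such as sub-second epochs.

```sql
-- With the patch, this would return a numeric rather than a float8,
-- preserving all of the microseconds:
SELECT extract(epoch FROM timestamptz '2021-03-21 12:00:00.123456+00');
```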

James Coleman sent in another revision of a patch to hash lookup const arrays in OR'd ScalarArrayOps.

Amit Khandekar sent in another revision of a patch to speed up xor'ing of two gist index signatures for tsvectors by using 64-bit chunks instead of char values, and avoid function pointer dereferencing for the pg_popcount32/64() call.

Etsuro Fujita sent in another revision of a patch to implement asynchronous append on postgres_fdw nodes.

Konstantin Knizhnik sent in another revision of a patch to add two statistics GUCs to auto_explain: add_statistics_threshold, and add_statistics_suggest_only.

Gilles Darold sent in another revision of a patch to add hooks at the XactCommand level.

Tatsuo Ishii sent in three more revisions of a patch to make it possible to use COPY FREEZE in pgbench.

Julien Rouhaud sent in another revision of a patch to add queryId to the pg_catalog.pg_stat_activity view.

Amit Langote sent in another revision of a patch to export get_partition_for_tuple() so that code other than execPartition.c can use it, and use it elsewhere to avoid using SPI for some RI checks.

Peter Smith sent in a patch to fix a bug where the code de-referenced the hash entry returned by HASH_REMOVE in stream_cleanup_files.

Thomas Munro sent in another revision of a patch to support PSQL_WATCH_PAGER for psql's \watch command, and try sigwait() to make it work.

Michaël Paquier sent in a patch to clean up the Kerberos auth test.

Thomas Munro sent in another revision of a patch to provide recovery_init_sync_method=wal.

Gilles Darold and Erik Rijkers traded patches to add some new functions for counting and otherwise characterizing regexp matches.

David Fetter sent in another revision of a patch to implement popcount (now called bit_count).
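
As proposed, the function counts the set bits in a bit string; a brief sketch of the intended usage:

```sql
SELECT bit_count(B'10110010');  -- four 1 bits in this bit string
```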

Justin Pryzby sent in another revision of a patch to add a way to use different compression methods for FPI.

Matthias van de Meent sent in a patch to ensure that when [+] is an option for a psql metacommand, the \? output mentions it, and that it isn't mentioned otherwise.

Jan Wieck and Tom Lane traded patches to ensure that pg_upgrade preserves datdba.

Konstantin Knizhnik sent in another revision of a patch to implement a built-in connection pooler.