
From: PWN via PostgreSQL Announce <announce-noreply(at)postgresql(dot)org>
To: PostgreSQL Announce <pgsql-announce(at)lists(dot)postgresql(dot)org>
Subject: PostgreSQL Weekly News - February 28, 2021
Date: 2021-03-01 00:31:13

# PostgreSQL Weekly News - February 28, 2021

# PostgreSQL Product News

Database Lab 2.2.1, a tool for fast cloning of large PostgreSQL databases to
build non-production environments, released.

dbMigration .NET v13.4, a database migration and sync tool, released.

Joe 0.9.0, a Slack chatbot that helps backend developers and DBAs troubleshoot
and optimize PostgreSQL queries, released.

pgAdmin 4 v5.0, a web- and native-GUI control center for PostgreSQL, released.

pgagroal 1.2.0, a high-performance protocol-native connection pool for
PostgreSQL, released.

# PostgreSQL Jobs for February

# PostgreSQL in the News

PostgreSQL Weekly News is brought to you this week by David Fetter.

Submit news and announcements by Sunday at 3:00pm PST8PDT to david(at)fetter(dot)org.

# Applied Patches

Tom Lane pushed:

- Fix invalid array access in trgm_regexp.c. Brown-paper-bag bug in 08c0d6ad6: I
missed one place that needed to guard against RAINBOW arc colors. Remarkably,
nothing noticed the invalid array access except buildfarm member thorntail.
Thanks to Noah Misch for assistance with tracking this down.

- Simplify memory management for regex DFAs a little. Coverity complained that
functions in regexec.c might leak DFA storage. It's wrong, but this logic is
confusing enough that it's not so surprising Coverity couldn't make sense of
it. Rewrite in hopes of making it more legible to humans as well as machines.

- Suppress compiler warning in new regex match-all detection code. gcc 10 is
smart enough to notice that control could reach this "hasmatch[depth]"
assignment with depth < 0, but not smart enough to know that that would
require a badly broken NFA graph. Change the assert() to a plain runtime test
to shut it up. Per report from Andres Freund. Discussion:

- Allow complemented character class escapes within regex brackets. The
complement-class escapes \D, \S, \W are now allowed within bracket
expressions. There is no semantic difficulty with doing that, but the rather
hokey macro-expansion-based implementation previously used here couldn't cope.
Also, invent "word" as an allowed character class name, thus "\w" is now
equivalent to "[[:word:]]" outside brackets, or "[:word:]" within brackets.
POSIX allows such implementation-specific extensions, and the same name is
used in e.g. bash. One surprising compatibility issue this raises is that
constructs such as `"[\w-_]"` are now disallowed, as our documentation has
always said they should be: character classes can't be endpoints of a range.
Previously, because \w was just a macro for `"[:alnum:]_"`, such a construct was
read as `"[[:alnum:]_-_]"`, so it was accepted so long as the character after
"-" was numerically greater than or equal to `"_"`. Some implementation cleanup
along the way: * Remove the lexnest() hack, and in consequence clean up
wordchrs() to not interact with the lexer. * Fix colorcomplement() to not be
O(N^2) in the number of colors involved. * Get rid of
useless-as-far-as-I-can-see calls of element() on single-character character
element names in brackpart(). element() always maps these to the character
itself, and things would be quite broken if it didn't --- should "[a]" match
something different than "a" does? Besides, the shortcut path in brackpart()
wasn't doing this anyway, making it even more inconsistent. Discussion:
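
  A quick illustration of the new behavior (a sketch against a build that
  includes this commit, then in development; not output-verified):

  ```sql
  SELECT 'foo_1' ~ '^\w+$';        -- \w is now equivalent to [[:word:]]
  SELECT '%'     ~ '[\D\W]';       -- complemented class escapes now allowed in brackets
  SELECT 'x'     ~ '[\w-_]';       -- now rejected: a class can't be a range endpoint
  ```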

- Change regex \D and \W shorthands to always match newlines. Newline is
certainly not a digit, nor a word character, so it is sensible that it should
match these complemented character classes. Previously, \D and \W acted that
way by default, but in newline-sensitive mode ('n' or 'p' flag) they did not
match newlines. This behavior was previously forced because explicit
complemented character classes don't match newlines in newline-sensitive mode;
but as of the previous commit that implementation constraint no longer exists.
It seems useful to change this because the primary real-world use for
newline-sensitive mode seems to be to match the default behavior of other
regex engines such as Perl and Javascript ... and their default behavior is
that these match newlines. The old behavior can be kept by writing an
explicit complemented character class, i.e. [^[:digit:]] or [^[:word:]].
(This means that \D and \W are not exactly equivalent to those strings, but
they weren't anyway.) Discussion:
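
  The distinction can be sketched as follows (assuming a build with this
  commit; 'n' selects newline-sensitive mode):

  ```sql
  SELECT regexp_match(E'1\n2', '1\D2', 'n');             -- \D now matches the newline
  SELECT regexp_match(E'1\n2', '1[^[:digit:]]2', 'n');   -- explicit class still does not
  ```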

- Doc: remove src/backend/regex/re_syntax.n. We aren't publishing this file as
documentation, and it's been much more haphazardly maintained than the real
docs in func.sgml, so let's just drop it. I think the only reason I included
it in commit 7bcc6d98f was that the Berkeley-era sources had had a man page in
this directory. Discussion:

- Fix list-manipulation bug in WITH RECURSIVE processing.
makeDependencyGraphWalker and checkWellFormedRecursionWalker thought they
could hold onto a pointer to a list's first cons cell while the list was
modified by recursive calls. That was okay when the cons cell was actually
separately palloc'd ... but since commit 1cff1b95a, it's quite unsafe, leading
to core dumps or incorrect complaints of faulty WITH nesting. In the field
this'd require at least a seven-deep WITH nest to cause an issue, but enabling
DEBUG_LIST_MEMORY_USAGE allows the bug to be seen with lesser nesting depths.
Per bug #16801 from Alexander Lakhin. Back-patch to v13. Michael Paquier and
Tom Lane Discussion:
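
  The shape of a query that could hit the bug (a sketch of the nesting depth
  involved, not the exact reproducer from bug #16801):

  ```sql
  WITH w1 AS (
    WITH w2 AS (
      WITH w3 AS (
        WITH w4 AS (
          WITH w5 AS (
            WITH w6 AS (
              WITH w7 AS (SELECT 1 AS x) SELECT x FROM w7
            ) SELECT x FROM w6
          ) SELECT x FROM w5
        ) SELECT x FROM w4
      ) SELECT x FROM w3
    ) SELECT x FROM w2
  ) SELECT x FROM w1;
  ```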

- Improve memory management in regex compiler. The previous logic here created a
separate pool of arcs for each state, so that the out-arcs of each state were
physically stored within it. Perhaps this choice was driven by trying to not
include a "from" pointer within each arc; but Spencer gave up on that idea
long ago, and it's hard to see what the value is now. The approach turns out
to be fairly disastrous in terms of memory consumption, though. In the first
place, NFAs built by this engine seem to have about 4 arcs per state on
average, with a majority having only one or two out-arcs. So pre-allocating
10 out-arcs for each state is already cause for a factor of two or more bloat.
Worse, the NFA optimization phase moves arcs around with abandon. In a large
NFA, some of the states will have hundreds of out-arcs, so towards the end of
the optimization phase we have a significant number of states whose arc pools
have room for hundreds of arcs each, even though only a few of those arcs are
in use. We have seen real-world regexes in which this effect bloats the
memory requirement by 25X or even more. Hence, get rid of the per-state arc
pools in favor of a single arc pool for the whole NFA, with variable-sized
allocation batches instead of always asking for 10 at a time. While we're at
it, let's batch the allocations of state structs too, to further reduce the
malloc traffic. This incidentally allows moveouts() to be optimized in a
similar way to moveins(): when moving an arc to another state, it's now valid
to just re-link the same arc struct into a different outchain, where before
the code invariants required us to make a physically new arc and then free the
old one. These changes reduce the regex compiler's typical space consumption
for average-size regexes by about a factor of two, and much more for large or
complicated regexes. In a large test set of real-world regexes, we formerly
had half a dozen cases that failed with "regular expression too complex" due
to exceeding the REG_MAX_COMPILE_SPACE limit (about 150MB); we would have had
to raise that limit to something close to 400MB to make them work with the old
code. Now, none of those cases need more than 13MB to compile. Furthermore,
the test set is about 10% faster overall due to less malloc traffic.

- Doc: further clarify libpq's description of connection string URIs. Break the
synopsis into named parts to make it less confusing. Make more than zero
effort at applying SGML markup. Do a bit of copy-editing of nearby text. The
synopsis revision is by Alvaro Herrera and Paul Förster, the rest is my fault.
Back-patch to v10 where multi-host connection strings appeared. Discussion:

Thomas Munro pushed:

- Remove outdated reference to RAID spindles. Commit b09ff536 left behind some
outdated advice in the long_desc field of the GUC "effective_io_concurrency".
Remove it. Back-patch to 13. Reported-by: Andrew Gierth
<andrew(at)tao11(dot)riddles(dot)org(dot)uk> Reviewed-by: Julien Rouhaud <rjuju123(at)gmail(dot)com>

- Hide internal error for pg_collation_actual_version(<bad OID>). Instead of an
unsightly internal "cache lookup failed" message, just return NULL for bad
OIDs, as is the convention for other similar things. Reported-by: Justin
Pryzby <pryzby(at)telsasoft(dot)com> Reviewed-by: Michael Paquier
<michael(at)paquier(dot)xyz> Discussion:
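
  A sketch of the changed behavior (on a build with this commit; the OID here
  is simply one no collation uses):

  ```sql
  SELECT pg_collation_actual_version(0);  -- now NULL, not "cache lookup failed"
  ```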

- pg_collation_actual_version() -> pg_collation_current_version(). The new name
seems a bit more natural. Discussion:

- Refactor get_collation_current_version(). The code paths for three different
OSes finished up with three different ways of excluding C[.xxx] and POSIX from
consideration. Merge them. Reviewed-by: Michael Paquier
<michael(at)paquier(dot)xyz> Discussion:

- Tab-complete CREATE COLLATION. Reviewed-by: Michael Paquier
<michael(at)paquier(dot)xyz> Discussion:

- Revert "pg_collation_actual_version() -> pg_collation_current_version().".
This reverts commit 9cf184cc0599b6e65e7e5ecd9d91cd42e278bcd8. Name change
less well received than anticipated. Discussion:

Michaël Paquier pushed:

- Use pgstat_progress_update_multi_param() where possible. This commit changes
one code path in REINDEX INDEX and one code path in CREATE INDEX CONCURRENTLY
to report the progress of each operation using
pgstat_progress_update_multi_param() rather than multiple calls to
pgstat_progress_update_param(). This has the advantage to make the progress
report more consistent to the end-user without impacting the amount of
information provided. Author: Bharath Rupireddy Discussion:

- doc: Improve description of wal_receiver_status_interval. This parameter
description was previously confusing, stating that a value of 0 completely
disabled status updates. This is not true, as there are cases where an
update is sent while ignoring this parameter value. The documentation is
improved to outline the difference of treatment for scheduled status messages
and when these are forced. Reported-by: Dmitriy Kuzmin Author: Michael
Paquier Reviewed-by: Euler Taveira Discussion:

- Fix some typos, grammar and style in docs and comments. The portions fixing
the documentation are backpatched where needed. Author: Justin Pryzby
backpatch-through: 9.6

- doc: Mention PGDATABASE as supported by pgbench. PGHOST, PGPORT and PGUSER
were already mentioned, but not PGDATABASE. Like 5aaa584, backpatch down to
12. Reported-by: Christophe Courtois Discussion:
Backpatch-through: 12

- doc: Improve `{archive,restore}_command` for compressed logs. The commands
mentioned in the docs with gzip and gunzip did not prefix the archives with
".gz" and used inconsistent paths for the archives, which can be confusing.
Reported-by: Philipp Gramzow Reviewed-by: Fujii Masao Discussion:

- pgbench: Remove now-dead `CState->ecnt`. The last use of ecnt was in 12788ae.
It was getting incremented after a backend error without any purpose since
then, so let's get rid of it. Author: Kota Miyake Reviewed-by: Álvaro Herrera

Peter Eisentraut pushed:

- Improve new hash partition bound check error messages. For the error message
"every hash partition modulus must be a factor of the next larger modulus",
add a detail message that shows the particular numbers and existing partition
involved. Also comment the code more. Reviewed-by: Amit Langote
`<amitlangote09(at)gmail(dot)com>` Reviewed-by: Heikki Linnakangas `<hlinnaka(at)iki(dot)fi>`
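
  For example (hypothetical table names), the rule the new detail message
  explains: each existing modulus must be a factor of any larger modulus added
  later.

  ```sql
  CREATE TABLE t (i int) PARTITION BY HASH (i);
  CREATE TABLE t_p0 PARTITION OF t FOR VALUES WITH (MODULUS 2, REMAINDER 0);
  -- fails: 2 is not a factor of 3, and the detail message now says which
  -- moduli and which existing partition are involved
  CREATE TABLE t_p1 PARTITION OF t FOR VALUES WITH (MODULUS 3, REMAINDER 1);
  ```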

- Simplify printing of LSNs. Add a macro `LSN_FORMAT_ARGS` for use in printf-style
printing of LSNs. Convert all applicable code to use it. Reviewed-by:
Ashutosh Bapat `<ashutosh(dot)bapat(at)enterprisedb(dot)com>` Reviewed-by: Kyotaro
Horiguchi `<horikyota(dot)ntt(at)gmail(dot)com>` Reviewed-by: Michael Paquier
<michael(at)paquier(dot)xyz> Discussion:

- Message style fix. Don't quote type name placeholders.

- Extend a test case a little. This will possibly help a subsequent patch by
making sure the notice messages are distinct so that it's clear that they come
out in the right order. Author: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>

- Enhanced cycle mark values. Per SQL:202x draft, in the CYCLE clause of a
recursive query, the cycle mark values can be of type boolean and can be
omitted, in which case they default to TRUE and FALSE. Reviewed-by: Vik
Fearing <vik(at)postgresfriends(dot)org> Discussion:
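
  A sketch per the commit message (table "graph" and its columns are
  hypothetical): with the mark values omitted, is_cycle defaults to boolean
  TRUE/FALSE.

  ```sql
  WITH RECURSIVE search(id) AS (
      SELECT g.id FROM graph g WHERE g.id = 1
    UNION ALL
      SELECT g.id FROM graph g JOIN search s ON g.prev = s.id
  ) CYCLE id SET is_cycle USING path
  SELECT * FROM search WHERE NOT is_cycle;
  ```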

Fujii Masao pushed:

- Initialize atomic variable waitStart in PGPROC, at postmaster startup. Commit
46d6e5f567 added the atomic variable "waitStart" into PGPROC struct, to store
the time at which wait for lock acquisition started. Previously this variable
was initialized every time each backend started. Instead, this commit makes
postmaster initialize it at the startup, to ensure that the variable should be
initialized before any use of it. This commit also moves the code to
initialize "waitStart" variable for prepare transaction, from
TwoPhaseGetDummyProc() to MarkAsPreparingGuts(), which is a more appropriate
place to do that since it initializes other PGPROC variables.
Author: Fujii Masao Reviewed-by: Atsushi Torikoshi Discussion:

- Improve tab-completion for TRUNCATE. Author: Kota Miyake Reviewed-by: Muhammad
Usama Discussion:

Magnus Hagander pushed:

- Fix docs build for website styles. Building the docs with STYLE=website
referenced a stylesheet that no longer exists on the website, since we
changed it to use versioned references. To make it less likely for this to
happen again, point to a single stylesheet on the website which will in turn
import the required one. That puts the process entirely within the scope of
the website repository, so next time a version is switched that's the only
place changes have to be made, making them less likely to be missed. Per
(off-list) discussion with Peter Geoghegan and Jonathan Katz.

Álvaro Herrera pushed:

- Fix typo.

- Remove pointless HeapTupleHeaderIndicatesMovedPartitions calls. Pavan Deolasee
recently noted that a few of the HeapTupleHeaderIndicatesMovedPartitions calls
added by commit 5db6df0c0117 are useless, since they are done after comparing
t_self with t_ctid. But because t_self can never be set to the magical values
that indicate that the tuple moved partition, this can never succeed: if the
first test fails (so we know t_self equals t_ctid), necessarily the second
test will also fail. So these checks can be removed and no harm is done.
There's no bug here, just a code legibility issue. Reported-by: Pavan
Deolasee <pavan(dot)deolasee(at)gmail(dot)com> Discussion:

- VACUUM: ignore indexing operations with CONCURRENTLY. As envisioned in commit
c98763bf51bf, it is possible for VACUUM to ignore certain transactions running
CREATE INDEX CONCURRENTLY and REINDEX CONCURRENTLY for purposes of computing
Xmin; that's because we know those transactions are not
going to examine any other tables, and are not going to execute anything else
in the same transaction. (Only operations on "safe" indexes can be ignored:
those on indexes that are neither partial nor expressional). This is
extremely useful in cases where CIC/RC can run for a very long time, because
that used to be a significant headache for concurrent vacuuming of other
tables. Reviewed-by: Matthias van de Meent <boekewurm+postgres(at)gmail(dot)com>
Reviewed-by: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> Discussion:

- Reinstate HEAP_XMAX_LOCK_ONLY|HEAP_KEYS_UPDATED as allowed. Commit
866e24d47db1 added an assert that HEAP_XMAX_LOCK_ONLY and HEAP_KEYS_UPDATED
cannot appear together, on the faulty assumption that the latter necessarily
referred to an update and not a tuple lock; but that's wrong, because SELECT
FOR UPDATE can use precisely that combination, as evidenced by the amcheck
test case added here. Remove the Assert(), and also patch amcheck's
verify_heapam.c to not complain if the combination is found. Also, out of
overabundance of caution, update (across all branches) README.tuplock to be
more explicit about this. Author: Julien Rouhaud <rjuju123(at)gmail(dot)com>
Reviewed-by: Mahendra Singh Thalor <mahi6run(at)gmail(dot)com> Reviewed-by: Dilip
Kumar <dilipbalaut(at)gmail(dot)com> Discussion:

- Fix confusion in comments about generate_gather_paths. d2d8a229bc58 introduced
a new function generate_useful_gather_paths to be used as a replacement for
generate_gather_paths, but forgot to update a couple of places that referenced
the older function. This is possibly not 100% complete (ref.
create_ordered_paths), but it's better than not changing anything. Author:
"Hou, Zhijie" <houzj(dot)fnst(at)cn(dot)fujitsu(dot)com> Reviewed-by: Tomas Vondra
<tomas(dot)vondra(at)enterprisedb(dot)com> Discussion:

- Fix use-after-free bug with AfterTriggersTableData.storeslot.
AfterTriggerSaveEvent() wrongly allocates the slot in execution-span memory
context, whereas the correct thing is to allocate it in a transaction-span
context, because that's where the enclosing AfterTriggersTableData instance
belongs. Backpatch to 12 (the test back to 11, where it works well with
no code changes, and it's good to have to confirm that the case was previously
well supported); this bug seems introduced by commit ff11e7f4b9ae.
Reported-by: Bertrand Drouvot <bdrouvot(at)amazon(dot)com> Author: Amit Langote
<amitlangote09(at)gmail(dot)com> Discussion:

Amit Kapila pushed:

- Change the error message for logical replication authentication failure. The
authentication failure error message wasn't distinguishing whether it is a
physical replication or logical replication connection failure and was giving
incomplete information on what led to failure in case of logical replication
connection. Author: Paul Martinez and Amit Kapila Reviewed-by: Euler Taveira
and Amit Kapila Discussion:

- Fix an oversight in ReorderBufferFinishPrepared. We don't have anything to
decode in a transaction if ReorderBufferTXN doesn't exist by the time we
decode the commit prepared. So don't create a new ReorderBufferTXN here. This
is an oversight in commit a271a1b5. Reported-by: Markus Wanner Discussion:

- Fix relcache reference leak introduced by ce0fdbfe97. Author: Sawada Masahiko
Reviewed-by: Amit Kapila Discussion:

Peter Geoghegan pushed:

- Use full 64-bit XIDs in deleted nbtree pages. Otherwise we risk "leaking"
deleted pages by making them non-recyclable indefinitely. Commit 6655a729 did
the same thing for deleted pages in GiST indexes. That work was used as a
starting point here. Stop storing an XID indicating the oldest btpo.xact
across all deleted though unrecycled pages in nbtree metapages. There is no
longer any reason to care about that condition/the oldest XID. It only ever
made sense when wraparound was something `_bt_vacuum_needs_cleanup()` had to
consider. The btm_oldest_btpo_xact metapage field has been repurposed and
renamed. It is now btm_last_cleanup_num_delpages, which is used to remember
how many non-recycled deleted pages remain from the last VACUUM (in practice
its value is usually the precise number of pages that were *newly deleted*
during the specific VACUUM operation that last set the field). The general
idea behind storing btm_last_cleanup_num_delpages is to use it to give *some*
consideration to non-recycled deleted pages inside `_bt_vacuum_needs_cleanup()`
-- though never too much. We only really need to avoid leaving a truly
excessive number of deleted pages in an unrecycled state forever. We only do
this to cover certain narrow cases where no other factor makes VACUUM do a
full scan, and yet the index continues to grow (and so actually misses out on
recycling existing deleted pages). These metapage changes result in a clear
user-visible benefit: We no longer trigger full index scans during VACUUM
operations solely due to the presence of only 1 or 2 known deleted (though
unrecycled) blocks from a very large index. All that matters now is keeping
the costs and benefits in balance over time. Fix an issue that has been
around since commit 857f9c36, which added the "skip full scan of index"
mechanism (i.e. the `_bt_vacuum_needs_cleanup()` logic). The accuracy of
btm_last_cleanup_num_heap_tuples accidentally hinged upon _when_ the source
value gets stored. We now always store btm_last_cleanup_num_heap_tuples in
btvacuumcleanup(). This fixes the issue because
IndexVacuumInfo.num_heap_tuples (the source field) is expected to accurately
indicate the state of the table *after* the VACUUM completes inside
btvacuumcleanup(). A backpatchable fix cannot easily be extracted from this
commit. A targeted fix for the issue will follow in a later commit, though
that won't happen today. I (pgeoghegan) have chosen to remove any mention of
deleted pages in the documentation of the vacuum_cleanup_index_scale_factor
GUC/param, since the presence of deleted (though unrecycled) pages is no
longer of much concern to users. The vacuum_cleanup_index_scale_factor
description in the docs now seems rather unclear in any case, and it should
probably be rewritten in the near future. Perhaps some passing mention of
page deletion will be added back at the same time. Bump XLOG_PAGE_MAGIC due
to nbtree WAL records using full XIDs now. Author: Peter Geoghegan
<pg(at)bowt(dot)ie> Reviewed-By: Masahiko Sawada `<sawada(dot)mshk(at)gmail(dot)com>` Discussion:

- VACUUM VERBOSE: Count "newly deleted" index pages. Teach VACUUM VERBOSE to
report on pages deleted by the _current_ VACUUM operation -- these are newly
deleted pages. VACUUM VERBOSE continues to report on the total number of
deleted pages in the entire index (no change there). The former is a subset
of the latter. The distinction between each category of deleted index page
only arises with index AMs where page deletion is supported and is decoupled
from page recycling for performance reasons. This is follow-up work to commit
e5d8a999, which made nbtree store 64-bit XIDs (not 32-bit XIDs) in pages at
the point at which they're deleted. Note that the
btm_last_cleanup_num_delpages metapage field added by that commit usually gets
set to pages_newly_deleted. The exceptions (the scenarios in which they're
not equal) all seem to be tricky cases for the implementation (of page
deletion and recycling) in general. Author: Peter Geoghegan <pg(at)bowt(dot)ie>

David Rowley pushed:

- Add TID Range Scans to support efficient scanning ranges of TIDs. This adds a
new executor node named TID Range Scan. The query planner will generate paths
for TID Range scans when quals are discovered on base relations which search
for ranges on the table's ctid column. These ranges may be open at either
end. For example, WHERE ctid >= '(10,0)'; will return all tuples on page 10
and over. To support this, two new optional callback functions have been
added to table AM. scan_set_tidrange is used to set the scan range to just
the given range of TIDs. scan_getnextslot_tidrange fetches the next tuple in
the given range. For AMs where scanning ranges of TIDs would not make sense,
these functions can be set to NULL in the TableAmRoutine. The query planner
won't generate TID Range Scan Paths in that case. Author: Edmund Horner,
David Rowley Reviewed-by: David Rowley, Tomas Vondra, Tom Lane, Andres Freund,
Zhihong Yu Discussion:
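
  A sketch of the kind of query the new node serves (hypothetical table t, on
  a build with this commit):

  ```sql
  EXPLAIN SELECT * FROM t WHERE ctid >= '(10,0)' AND ctid < '(20,0)';
  -- ranges may be open at either end:
  SELECT * FROM t WHERE ctid >= '(10,0)';
  ```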

- Add missing TidRangeScan readfunc. Mistakenly forgotten in bb437f995

Noah Misch pushed:

- Raise a timeout to 180s, in contrib/test_decoding. Per buildfarm member
hornet. The test is new in v14, so no back-patch.

# Pending Patches

Justin Pryzby sent in another revision of a patch to make INSERT SELECT use
BulkInsertState and multi_insert, check for volatile defaults to ensure that
dependencies on them are not lost, make COPY flush the multi-insert buffer based
on accumulated size of tuples, rather than line length, and check tuple size for
a more accurate measure of chunk size when computing when to flush the buffer.

Hou Zhijie sent in another revision of a patch to add one GUC and one
per-table option, both named enable_parallel_dml, to control whether DMLs
include an option to execute in parallel.

Bharath Rupireddy sent in another revision of a patch to add GUCs both at the
FDW level and at the foreign server level called keep_connections.

Masahiko Sawada sent in a patch to add a check whether or not to do index vacuum
(and heap vacuum) based on whether or not 1% of all heap pages have an LP_DEAD
line pointer.

Shenhao Wang sent in a patch to make --enable-coverage succeed without finding
lcov, as the actual coverage tests can run without it.

Jim Mlodgenski sent in a patch to add a parser hook.

Mats Kindahl sent in a patch to add a callback to TableAccessMethod that is
called when the table should be scheduled for unlinking, and implement the
method for the heap access method.

Justin Pryzby sent in three more revisions of a patch to report text parameters
during errors in typinput, and exercise parameter output on error with binary

Daniel Gustafsson sent in two more revisions of a patch to make it possible to
use NSS for libpq's TLS backend.

Jan Wieck sent in another revision of a patch to make the wire protocol
pluggable and use same to answer via telnet.

Justin Pryzby sent in another revision of a patch to touch up the documentation
for the upcoming release.

Iwata Aya and Álvaro Herrera traded patches to improve libpq tracing.

Amit Kapila sent in a patch to update the docs and comments for decoding of
prepared xacts to match the current behavior.

Daniel Gustafsson sent in another revision of a patch to check the version of
target cluster binaries in pg_upgrade.

Mark Rofail sent in another revision of a patch to implement foreign key arrays.

Matthias van de Meent sent in another revision of a patch to add
progress-reported components for COPY progress reporting including a new view,
pg_stat_progress_copy, add backlinks to progress reporting documentation, and
add regression tests for same.

Dilip Kumar sent in three more revisions of a patch to provide a new interface
to get the recovery pause status, pg_get_wal_replay_pause_state, that returns
the actual status of the recovery pause, i.e. 'not paused' if pause is not
requested, 'pause requested' if pause is requested but recovery is not yet
paused, and 'paused' if recovery is actually paused.
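
A sketch of how the proposed function would be called (names per the patch
description; this is a pending patch, not a released interface):

```sql
SELECT pg_get_wal_replay_pause_state();
-- one of: 'not paused', 'pause requested', 'paused'
```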

KaiGai Kohei sent in a patch to add binary input/output handlers to

Georgios Kokolatos sent in another revision of a patch to make dbsize more

Mark Dilger sent in another revision of a patch to add pg_amcheck, a command
line interface for running amcheck's verifications against tables and indexes.

John Naylor sent in two more revisions of a patch to make it possible to verify
utf-8 using SIMD instructions.

Hayato Kuroda sent in three revisions of a patch to refactor ECPGconnect and
allow IPv6 connections there.

Amit Langote, Greg Nancarrow, and Amit Kapila traded patches to make it possible
to execute INSERT (INTO ... SELECT ...) with multiple workers.

Julien Rouhaud sent in another revision of a patch to add a new COLLATION option

John Naylor sent in two revisions of a patch to allow inserting tuples into
almost-empty pages.

Paul Martinez sent in two more revisions of a patch to document the effect of
max_replication_slots on the subscriber side.

Ajin Cherian and Amit Kapila traded patches to avoid repeated decoding of
prepared transactions after the restart, and add an option to enable two-phase
commits in pg_create_logical_replication_slot.

Peter Eisentraut sent in another revision of a patch to fix the use of cursor
sensitivity terminology to match the SQL standard, remove the claim that
sensitive cursors are supported, and add a new cursor option, ASENSITIVE,
which is the default behavior.

Benoit Lobréau sent in a patch to document in more detail how archive_command
fails based on the signal it was sent, and whether it's reported in

Peter Eisentraut sent in another revision of a patch to set SNI for SSL
connections from the client, which allows an SNI-aware proxy to route

Peter Smith sent in three more revisions of a patch to implement logical
decoding of two-phase transactions.

Amit Kapila sent in another revision of a patch to update documentation of
logical replication to include the recently added logical replication
configuration settings, and mention the fact that table synchronization workers
are now using replication origins to track progress.

Thomas Munro sent in another revision of a patch to replace buffer I/O locks
with condition variables.

Amit Langote sent in another revision of a patch to fix a misbehavior of
partition row movement by ensuring that foreign key triggers are created on
partitioned tables, and use same to enforce foreign keys correctly during
cross-partition updates.

Thomas Munro sent in another revision of a patch to prevent latches from sending
signals to processes that aren't currently sleeping, use SIGURG rather than
SIGUSR1 for latches, use signalfd for epoll latches, which cuts down on system
calls and other overheads by waiting on a signalfd instead of a signal handler
and self-pipe, and use EVFILT_SIGNAL for kqueue latches.

Michaël Paquier sent in a patch to add a --tablespace option to reindexdb,
matching the recently added capability for REINDEX.

Kota Miyake sent in a patch to fix pgbench's reporting of database name in
errors when both PGUSER and PGPORT are set.

Amul Sul sent in another revision of a patch to implement wal prohibit state
using a global barrier, error or Assert before START_CRIT_SECTION for WAL write,
and document same.

Justin Pryzby sent in another revision of a patch to make it possible to use
CREATE INDEX CONCURRENTLY on a partitioned table.

Jacob Champion sent in another revision of a patch to save the user's original
authenticated identity for logging.

Daniel Gustafsson sent in another revision of a patch to disallow SSL
compression by ignoring the option that would have turned it on. A later
patch will remove the option entirely, now that it's deprecated.

Daniel Gustafsson sent in a patch to remove the defaults from libpq's authtype
parameter, as it has been deprecated.

Álvaro Herrera sent in another revision of a patch to implement ALTER TABLE ..

Dilip Kumar sent in two more revisions of a patch to make it possible to set the
compression type for a table.

Euler Taveira de Oliveira sent in another revision of a patch to implement row
filtering for logical replication using an optional WHERE clause in the DDL for

Thomas Munro sent in another revision of a patch to introduce symbolic names for
FeBeWaitSet positions, and use FeBeWaitSet for walsender.c.

Thomas Munro sent in another revision of a patch to use condition variables for
ProcSignalBarriers, allow condition variables to be used in interrupt code, and
use a global barrier to fix DROP TABLESPACE on Windows by forcing all backends
to close all fds on that platform.

Andrey Borodin sent in a patch to use different compression methods for FPI.

Julien Rouhaud sent in a patch to change the explicit alignment use in
pg_prewarm and pg_stat_statements to CACHELINEALIGN, and update the alignment
in hash_estimate_size() to an estimate of what ShmemInitHash will actually
consume based on CACHELINEALIGN.

Thomas Munro sent in a patch to remove latch.c workaround for Linux < 2.6.27.

Peter Eisentraut sent in another revision of a patch to psql which makes it show
all query results by default.

Jeff Janes sent in a patch to make SCRAM's behavior match MD5's by reporting in
a DETAIL message when the password does not match for a user.

Joel Jacobson sent in a patch to implement a regexp_positions() function.

Paul Förster sent in a patch to mention database URIs in psql's --help output.

Justin Pryzby sent in a patch to refactor ATExec{En,Dis}ableRowSecurity in the
style of ATExecForceNoForceRowSecurity, and do some further refactoring.

Justin Pryzby sent in a patch to implement ALTER TABLE SET TABLE ACCESS METHOD.
