== PostgreSQL Weekly News - March 24, 2019 ==

From: David Fetter <david(at)fetter(dot)org>
To: PostgreSQL Announce <pgsql-announce(at)postgresql(dot)org>
Subject: == PostgreSQL Weekly News - March 24, 2019 ==
Date: 2019-03-24 18:57:30
Message-ID: 20190324185730.GA15968@fetter.org

== PostgreSQL Product News ==

barman 2.7, a backup and recovery manager for PostgreSQL, released.
https://www.pgbarman.org/barman-2-7-released/

check_pgbackrest 1.5, a Nagios-compatible monitor for pgBackRest, released.
https://github.com/dalibo/check_pgbackrest/releases

temboard 3.0, a management tool for PostgreSQL, released.
http://temboard.io/

== PostgreSQL Jobs for March ==

http://archives.postgresql.org/pgsql-jobs/2019-03/

== PostgreSQL Local ==

The German-speaking PostgreSQL Conference 2019 will take place on May 10, 2019
in Leipzig.
http://2019.pgconf.de/

PGDay.IT 2019 will take place May 16th and May 17th in Bologna, Italy.
https://2019.pgday.it/en/

PGCon 2019 will take place in Ottawa on May 28-31, 2019.
https://www.pgcon.org/2019

Swiss PGDay 2019 will take place in Rapperswil (near Zurich) on June 28, 2019.
The CfP is open through April 18, 2019, and registration is open.
http://www.pgday.ch/2019/

PostgresLondon 2019 will be July 2-3, 2019 with an optional training day on
July 1.
http://postgreslondon.org

PGConf.Brazil 2019 is on August 1-3 2019 in São Paulo.
http://pgconf.com.br

The first Austrian pgDay will take place September 6, 2019 at the Hilton Garden
Inn in Wiener Neustadt. The CfP is open until April 1, 2019.
https://pgday.at/en/

== PostgreSQL in the News ==

Planet PostgreSQL: http://planet.postgresql.org/

PostgreSQL Weekly News is brought to you this week by David Fetter.

Submit news and announcements by Sunday at 3:00pm PST8PDT to david(at)fetter(dot)org.

== Applied Patches ==

Michaël Paquier pushed:

- Error out in pg_checksums on incompatible block size. pg_checksums is compiled
with a given block size and has a hard dependency on it per the way checksums
are calculated via checksum_impl.h, and trying to use the tool on a data
folder which does not have the same block size would result in incorrect checksum
calculations and/or block read errors, meaning that the data folder is
corrupted. This is harmless as checksums are only checked for now, but it is
very confusing for the user, so issue a proper error if the block size used at
compilation and the block size used in the data folder do not match.
Reported-by: Sergei Kornilov Author: Michael Banck, Michael Paquier
Reviewed-by: Fabien Coelho, Magnus Hagander Discussion:
https://postgr.es/m/20190317054657.GA3357@paquier.xyz Backpatch-through: 11
https://git.postgresql.org/pg/commitdiff/fa3395659561b564051a2bbd3997de8e2923c8e3

- Fix pg_rewind when rewinding new database with tables included. This fixes an
issue introduced by 266b6ac, which has added filters to exclude file patterns
on the target and source data directories to reduce the number of files
transferred. Filters get applied to both the target and source data files,
and include pg_internal.init which is present for each database once relations
are created on it. However, if the target differed from the source with at
least one new database with relations, the rewind would fail due to the
exclusion filters applied on the target files, causing pg_internal.init to
still be present in the target database folder, while its contents should have
been completely removed so that nothing remains inside at the time of the
folder deletion. Applying exclusion filters on the source files is fine,
because this way the amount of data copied from the source to the target is
reduced. And actually, not applying the filters on the target is what
pg_rewind should do, because this causes such files to be automatically
removed during the rewind on the target. Exclusion filters apply to paths
which are removed or recreated automatically at startup, so removing all those
files on the target during the rewind is a win. The existing set of TAP tests
already stresses the rewind of databases, but it did not include any tables on
those newly-created databases. Creating extra tables in this case is enough to
reproduce the failure, so the existing tests are extended to close the gap.
Reported-by: Mithun Cy Author: Michael Paquier Discussion:
https://postgr.es/m/CADq3xVYt6_pO7ZzmjOqPgY9HWsL=kLd-_tNyMtdfjKqEALDyTA@mail.gmail.com
Backpatch-through: 11
https://git.postgresql.org/pg/commitdiff/a7eadaaaaf089994279488f795bdedd9ded1682a

- Refactor more code logic to update the control file. ce6afc6 has begun the
refactoring work by plugging pg_rewind into a central routine to update the
control file, and left around two extra copies, with one in xlog.c for the
backend and one in pg_resetwal.c. By adding an extra option to the central
routine in controldata_utils.c to control if a flush of the control file needs
to be done, it proves straightforward to make xlog.c and pg_resetwal.c use
the central code path, on the condition of moving the wait event tracking
there. Hence, this makes it possible to have only one central code path to
update the control file, shaving the duplicated code. This
refactoring actually fixes a problem in pg_resetwal. Previously, the control
file was first removed before being recreated. So if a crash happened between
the moment the file was removed and the moment the file was created, then it
would have been possible to not have a control file anymore in the database
folder. Author: Fabien Coelho Reviewed-by: Michael Paquier Discussion:
https://postgr.es/m/alpine.DEB.2.21.1903170935210.2506@lancre
https://git.postgresql.org/pg/commitdiff/8b938d36f7446e76436ca4a8ddcebbebaeaab480

- Fix crash with pg_partition_root. Trying to call the function with the
top-most parent of a partition tree was leading to a crash. In this case the
correct result is to return the top-most parent itself. Reported-by: Álvaro
Herrera Author: Michael Paquier Reviewed-by: Amit Langote Discussion:
https://postgr.es/m/20190322032612.GA323@alvherre.pgsql
https://git.postgresql.org/pg/commitdiff/2ab6d28d233af17987ea323e3235b2bda89b4f2e
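
  The fix can be illustrated with a minimal sketch (table names here are
  hypothetical, not taken from the commit):

  ```sql
  -- A two-level partition tree: parent_tbl is the top-most parent.
  CREATE TABLE parent_tbl (id int) PARTITION BY RANGE (id);
  CREATE TABLE child_tbl PARTITION OF parent_tbl FOR VALUES FROM (1) TO (100);

  SELECT pg_partition_root('child_tbl');   -- parent_tbl
  SELECT pg_partition_root('parent_tbl');  -- previously crashed; now returns
                                           -- the top-most parent itself
  ```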

- Add options to enable and disable checksums in pg_checksums. An offline
cluster can now work with more modes in pg_checksums: - --enable enables
checksums in a cluster, updating all blocks with a correct checksum, and
updating the control file at the end. - --disable disables checksums in a
cluster, updating only the control file. - --check is an extra option able to
verify checksums for a cluster, and the default used if no mode is specified.
When running --enable or --disable, the data folder gets fsync'd for
durability, and then it is followed by a control file update and flush to keep
the operation consistent should the tool be interrupted, killed or the host
unplugged. If no mode is specified in the options, then --check is used for
compatibility with older versions of pg_checksums (named pg_verify_checksums
in v11 where it was introduced). Author: Michael Banck, Michael Paquier
Reviewed-by: Fabien Coelho, Magnus Hagander, Sergei Kornilov Discussion:
https://postgr.es/m/20181221201616.GD4974@nighthawk.caipicrew.dd-dns.de
https://git.postgresql.org/pg/commitdiff/ed308d78379008b2cebca30a986f97f992ee6122

- Add option -N/--no-sync to pg_checksums. This is an option consistent with
what pg_dump, pg_rewind and pg_basebackup provide, which is useful for
reducing the I/O effort when testing things; it is not to be used in a
production environment. Author: Michael Paquier Reviewed-by: Michael Banck, Fabien
Coelho, Sergei Kornilov Discussion:
https://postgr.es/m/20181221201616.GD4974@nighthawk.caipicrew.dd-dns.de
https://git.postgresql.org/pg/commitdiff/e0090c86900877bf0911c53dcf4a30bc81d03047

- Improve format of code and some error messages in pg_checksums. This makes the
code more consistent with the surroundings. Author: Fabrízio de Royes Mello
Discussion:
https://postgr.es/m/CAFcNs+pXb_35r5feMU3-dWsWxXU=Yjq+spUsthFyGFbT0QcaKg@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/4ba96d1b82d694fead0ac709f9429cbb7ea89cb0

- Make current_logfiles use permissions assigned to files in data directory.
Since its introduction in 19dc233c, current_logfiles has been assigned the
same permissions as a log file, which can be enforced with log_file_mode.
This setup can lead to incompatibility problems with group access permissions
as current_logfiles is not located in the log directory, but at the root of
the data folder. Hence, if group permissions are used but log_file_mode is
more restrictive, a backup with a user in the group having read access could
fail even if the log directory is located outside of the data folder. Per
discussion with the folks mentioned below, we have concluded that
current_logfiles should not be treated as a log file as it only stores
metadata related to log files, and that it should use the same permissions as
all other files in the data directory. This solution has the merit of being
simple and fixes all the interaction problems between group access and
log_file_mode. Author: Haribabu Kommi Reviewed-by: Stephen Frost, Robert
Haas, Tom Lane, Michael Paquier Discussion:
https://postgr.es/m/CAJrrPGcEotF1P7AWoeQyD3Pqr-0xkQg_Herv98DjbaMj+naozw@mail.gmail.com
Backpatch-through: 11, where group access has been added.
https://git.postgresql.org/pg/commitdiff/276d2e6c2d8141f194a26da03b5b79375eb7041b

Alexander Korotkov pushed:

- Revert 4178d8b91c. It was agreed that the commit worsened code readability.
Discussion:
https://postgr.es/m/ecfcfb5f-3233-eaa9-0c83-07056fb49a83%402ndquadrant.com
https://git.postgresql.org/pg/commitdiff/a0478b69985056965a5737184279a99bde421f69

- Rename typedef in jsonpath_scan.l from "keyword" to "JsonPathKeyword". Typedef
name should be both unique and non-intersect with variable names across all
the sources. That makes both pg_indent and debuggers happy. Discussion:
https://postgr.es/m/23865.1552936099%40sss.pgh.pa.us
https://git.postgresql.org/pg/commitdiff/75c57058b0f5d511a9d80ddfab68a761229d68ea

- Rename typedef in jsonpath_gram.y from "string" to "JsonPathString". Reason is
the same as in 75c57058b0.
https://git.postgresql.org/pg/commitdiff/5e28b778bf9a5835e702277119c5f92b4dbab45e

- Remove ambiguity for jsonb_path_match() and jsonb_path_exists(). There are
  two-argument and four-argument versions of jsonb_path_match() and
  jsonb_path_exists(), but the four-argument versions have optional 3rd and 4th
  arguments, which leads to ambiguity. At the same time, the two-argument
  versions are needed only for the @@ and @? operators. So, rename the
  two-argument versions to remove the ambiguity. Catversion is bumped.
https://git.postgresql.org/pg/commitdiff/641fde25233ef3ecc3b8101fe287eea9fceba6fd
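
  As a hedged sketch of the resulting call styles (the exact renamed internal
  function names are not shown here):

  ```sql
  -- Operator forms, backed by the renamed two-argument versions:
  SELECT '{"a": 1}'::jsonb @? '$.a';        -- existence check
  SELECT '{"a": 1}'::jsonb @@ '$.a == 1';   -- predicate check

  -- Direct calls now resolve unambiguously to the versions with the
  -- optional vars and silent arguments:
  SELECT jsonb_path_exists('{"a": 1}', '$.a');
  SELECT jsonb_path_match('{"a": 1}', '$.a == 1', '{}'::jsonb, false);
  ```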

- Get rid of jsonpath_gram.h and jsonpath_scanner.h. The jsonpath grammar and
  scanner are both quite small, so it isn't worth the complexity of compiling
  them separately. This commit makes the grammar and scanner be compiled at
  once; therefore, jsonpath_gram.h and jsonpath_scanner.h are no longer needed.
  This commit also does some reorganization of code in jsonpath_gram.y. Discussion:
https://postgr.es/m/d47b2023-3ecb-5f04-d253-d557547cf74f%402ndQuadrant.com
https://git.postgresql.org/pg/commitdiff/550b9d26f80fa3048f2d5883f0779ed29465960a

Peter Eisentraut pushed:

- Remove unused macro. It has never been used.
https://git.postgresql.org/pg/commitdiff/fb5806533f9fe0433290d84c9b019399cd69e9c2

- Fix optimization of foreign-key on update actions. In
RI_FKey_pk_upd_check_required(), we check among other things whether the old
and new key are equal, so that we don't need to run cascade actions when
nothing has actually changed. This was using the equality operator. But the
effect of this is that if a value in the primary key is changed to one that
"looks" different but compares as equal, the update is not propagated.
(Examples are float -0 and 0 and case-insensitive text.) This appears to
violate the SQL standard, and it also behaves inconsistently if in a
multicolumn key another key is also updated that would cause the row to
compare as not equal. To fix, if we are looking at the PK table in
ri_KeysEqual(), then do a bytewise comparison similar to record_image_eq()
instead of using the equality operators. This only makes a difference for ON
UPDATE CASCADE, but for consistency we treat all changes to the PK the same.
For the FK table, we continue to use the equality operators. Discussion:
https://www.postgresql.org/message-id/flat/3326fc2e-bc02-d4c5-e3e5-e54da466e89a(at)2ndquadrant(dot)com
https://git.postgresql.org/pg/commitdiff/1ffa59a85cb40a61f4523fb03c8960db97eea124
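
  The float zero case mentioned above can be sketched as follows (illustrative
  table names; behavior as described in the commit message):

  ```sql
  CREATE TABLE pk (f float8 PRIMARY KEY);
  CREATE TABLE fk (f float8 REFERENCES pk ON UPDATE CASCADE);
  INSERT INTO pk VALUES ('0');
  INSERT INTO fk VALUES ('0');

  -- '0' and '-0' compare as equal but are not byte-wise identical; the old
  -- code skipped the cascade here, while the fixed code propagates '-0' to fk.
  UPDATE pk SET f = '-0';
  ```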

- Fix bug in support for collation attributes on older ICU versions.
Unrecognized attribute names are supposed to be ignored. But the code would
error out on an unrecognized attribute value even if it did not recognize the
attribute name. So unrecognized attributes wouldn't really be ignored unless
the value happened to be one that matched a recognized value. This would
break some important cases where the attribute would be processed by
ucol_open() directly. Fix that and add a test case. The restructured code
should also avoid compiler warnings about initializing a UColAttribute value
to -1, because the type might be an unsigned enum. (reported by Andres
Freund)
https://git.postgresql.org/pg/commitdiff/1f050c08f91d866c560344d4510404ecd2763cbf

- Fix whitespace.
https://git.postgresql.org/pg/commitdiff/e537ac5182f8cfa7244a8c8ae772b787b2288605

- Ignore attempts to add TOAST table to shared or catalog tables. Running ALTER
TABLE on any table will check if a TOAST table needs to be added. On shared
tables, this would previously fail, thus effectively disabling ALTER TABLE for
those tables. On (non-shared) system catalogs, on the other hand, it would
add a TOAST table, even though we don't really want TOAST tables on some
system catalogs. In some cases, it would also fail with an error
"AccessExclusiveLock required to add toast table.", depending on what locks
the ALTER TABLE actions had already taken. So instead, just ignore attempts
to add TOAST tables to such tables, outside of bootstrap mode, pretending they
don't need one. This allows running ALTER TABLE on such tables without
messing up the TOAST situation. Legitimate uses for ALTER TABLE on system
catalogs include setting reloptions (say, fillfactor or autovacuum settings).
(All this still requires allow_system_table_mods, which is independent of
this.) Discussion:
https://www.postgresql.org/message-id/flat/e49f825b-fb25-0bc8-8afc-d5ad895c7975(at)2ndquadrant(dot)com
https://git.postgresql.org/pg/commitdiff/590a87025b0aa9ebca53c7b71ddf036e5acd8f08
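
  A sketch of the legitimate use case named above, assuming
  allow_system_table_mods has been enabled (at the time, a server start
  option):

  ```sql
  -- Setting reloptions on a shared catalog; the implicit TOAST-table check
  -- triggered by ALTER TABLE previously failed here and is now a no-op.
  ALTER TABLE pg_database SET (fillfactor = 90);
  ```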

- Reorder LOCALLOCK structure members to compact the size. Save 8 bytes (on
x86-64) by filling up padding holes. Author: Takayuki Tsunakawa
<tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com> Discussion:
https://www.postgresql.org/message-id/20190219001639.ft7kxir2iz644alf@alap3.anarazel.de
https://git.postgresql.org/pg/commitdiff/28988a84cf19c01dba3c3fb40e95d9cd6e4888da

- Collations with nondeterministic comparison. This adds a flag "deterministic"
to collations. If that is false, such a collation disables various
optimizations that assume that strings are equal only if they are byte-wise
equal. That then allows use cases such as case-insensitive or
accent-insensitive comparisons or handling of strings with different Unicode
normal forms. This functionality is only supported with the ICU provider. At
least glibc doesn't appear to have any locales that work in a nondeterministic
way, so it's not worth supporting this for the libc provider. The term
"deterministic comparison" in this context is from Unicode Technical Standard
#10 (https://unicode.org/reports/tr10/#Deterministic_Comparison). This patch
makes changes in three areas: - CREATE COLLATION DDL changes and system
catalog changes to support this new flag. - Many executor nodes and
auxiliary code are extended to track collations. Previously, this code
would just throw away collation information, because the eventually-called
user-defined functions didn't use it since they only cared about equality,
which didn't need collation information. - String data type functions that
do equality comparisons and hashing are changed to take the
(non-)deterministic flag into account. For comparison, this just means
skipping various shortcuts and tie breakers that use byte-wise comparison.
For hashing, we first need to convert the input string to a canonical "sort
key" using the ICU analogue of strxfrm(). Reviewed-by: Daniel Verite
<daniel(at)manitou-mail(dot)org> Reviewed-by: Peter Geoghegan <pg(at)bowt(dot)ie>
Discussion:
https://www.postgresql.org/message-id/flat/1ccc668f-4cbc-0bef-af67-450b47cdfee7(at)2ndquadrant(dot)com
https://git.postgresql.org/pg/commitdiff/5e1963fb764e9cc092e0f7b58b28985c311431d9
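
  For illustration, a case-insensitive comparison might look like this (the
  ICU locale string uses collation keywords; ks-level2 ignores case
  differences):

  ```sql
  CREATE COLLATION case_insensitive
    (provider = icu, locale = 'und-u-ks-level2', deterministic = false);

  SELECT 'ABC' = 'abc' COLLATE case_insensitive;  -- true
  ```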

- Fix ICU tests for older ICU versions. Change the tests to use old-style ICU
locale specifications so that they can run on older ICU versions.
https://git.postgresql.org/pg/commitdiff/638db07814f389e739b2cfde01b592aa9150b1be

- Add gitignore entries for jsonpath_gram.h.
https://git.postgresql.org/pg/commitdiff/4e274a043fc8310ce1148190ef674beca06e990c

- Make subscription collation test work independent of locale. We need to set
the database to UTF8 encoding so that the test can use Unicode escapes.
https://git.postgresql.org/pg/commitdiff/87914e708aabb7e2cd9045fa95b4fed99ca458ec

- Revert "Add gitignore entries for jsonpath_gram.h". This reverts commit
4e274a043fc8310ce1148190ef674beca06e990c. These files aren't actually built
anymore since 550b9d26f.
https://git.postgresql.org/pg/commitdiff/7b084b38310cfe9c8b58cc615a81df625c771f5d

- Transaction chaining. Add command variants COMMIT AND CHAIN and ROLLBACK AND
CHAIN, which start new transactions with the same transaction characteristics
as the just finished one, per SQL standard. Support for transaction chaining
in PL/pgSQL is also added. This functionality is especially useful when
running COMMIT in a loop in PL/pgSQL. Reviewed-by: Fabien COELHO
<coelho(at)cri(dot)ensmp(dot)fr> Discussion:
https://www.postgresql.org/message-id/flat/28536681-324b-10dc-ade8-ab46f7645a5a(at)2ndquadrant(dot)com
https://git.postgresql.org/pg/commitdiff/280a408b48d5ee42969f981bceb9e9426c3a344c
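
  A minimal sketch of the new command:

  ```sql
  BEGIN ISOLATION LEVEL REPEATABLE READ;
  -- ... first batch of work ...
  COMMIT AND CHAIN;   -- commits, then starts a new transaction with the
                      -- same characteristics (still REPEATABLE READ)
  -- ... next batch of work ...
  COMMIT;
  ```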

Robert Haas pushed:

- Fold vacuum's 'int options' parameter into VacuumParams. Many places need
both, so this allows a few functions to take one fewer parameter. More
importantly, as soon as we add a VACUUM option that takes a non-Boolean
parameter, we need to replace 'int options' with a struct, and it seems better
to think of adding more fields to VacuumParams rather than passing around both
VacuumParams and a separate struct as well. Patch by me, reviewed by Masahiko
Sawada Discussion:
http://postgr.es/m/CA+Tgmob6g6-s50fyv8E8he7APfwCYYJ4z0wbZC2yZeSz=26CYQ@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/f41551f61f9cf4eedd5b7173f985a3bdb4d9858c

- Revise parse tree representation for VACUUM and ANALYZE. Like commit
f41551f61f9cf4eedd5b7173f985a3bdb4d9858c, this aims to make it easier to add
non-Boolean options to VACUUM (or, in this case, to ANALYZE). Instead of
building up a bitmap of options directly in the parser, build up a list of
DefElem objects and let ExecVacuum() sort it out; right now, we make no use of
the fact that a DefElem can carry an associated value, but it will be easy to
make that change in the future. Masahiko Sawada Discussion:
http://postgr.es/m/CAD21AoATE4sn0jFFH3NcfUZXkU2BMbjBWB_kDj-XWYA-LXDcQA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/6776142a07afb4c28961f27059d800196902f5f1

- Don't auto-restart per-database autoprewarm workers. We should try to prewarm
each database only once. Otherwise, if prewarming fails for some reason, it
  will just keep retrying in an infinite loop. This can happen if, for example,
the database has been dropped. The existing code was intended to implement
the try-once behavior, but failed to do so because it neglected to set
worker.bgw_restart_time to BGW_NEVER_RESTART. Mithun Cy, per a report from
Hans Buschmann Discussion:
http://postgr.es/m/CA+hUKGKpQJCWcgyy3QTC9vdn6uKAR_8r__A-MMm2GYfj45caag@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/1459e84cb2e57649627753ad1279428d35590df6

- Fix copyfuncs/equalfuncs support for VacuumStmt. Commit
6776142a07afb4c28961f27059d800196902f5f1 failed to do this, and the buildfarm
broke. Patch by me, per advice from Tom Lane and Michael Paquier.
Discussion: http://postgr.es/m/13988.1552960403@sss.pgh.pa.us
https://git.postgresql.org/pg/commitdiff/53680c116ce8c501e4081332d32ba0e93aa1aaa2

Andres Freund pushed:

- Remove leftover reference to oid column. I (Andres) missed this in
578b229718e8. Author: John Naylor Discussion:
https://postgr.es/m/CACPNZCtd+ckUgibRFs9KewK4Yr5rj3Oipefquupw+XJZebFhrA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/7571ce6f11f24594274fd4956bd4d1114eebd485

- Fix typos in sgml docs about RefetchForeignRow(). I screwed this up in
ad0bda5d24e. Reported-By: Jie Zhang, Michael Paquier, Etsuro Fujita
Discussion:
https://postgr.es/m/1396E95157071C4EBBA51892C5368521017F2DA203@G08CNEXMBPEKD02.g08.fujitsu.local
https://git.postgresql.org/pg/commitdiff/11180a5015e9c6299ee732fa587b3a8bc6dca6b2

- Expand EPQ tests for UPDATEs and DELETEs. Previously there was basically no
coverage for UPDATEs encountering deleted rows, and no coverage for DELETE
having to perform EPQ. That's problematic for an upcoming commit in which EPQ
  is taught to integrate with tableams. Also, there was no test for UPDATE to
encounter a row UPDATEd into another partition. Author: Andres Freund
https://git.postgresql.org/pg/commitdiff/cdcffe2263215eef9078ce97e6c9adece8ed1910

- tableam: Add tuple_{insert, delete, update, lock} and use. This adds new,
required, table AM callbacks for insert/delete/update and lock_tuple. To be
able to reasonably use those, the EvalPlanQual mechanism had to be adapted,
moving more logic into the AM. Previously both delete/update/lock call-sites
and the EPQ mechanism had to have awareness of the specific tuple format to be
able to fetch the latest version of a tuple. Obviously that needs to be
  abstracted away. To do so, move the logic that finds the latest row version
into the AM. lock_tuple has a new flag argument,
TUPLE_LOCK_FLAG_FIND_LAST_VERSION, that forces it to lock the last version,
rather than the current one. It'd have been possible to do so via a separate
callback as well, but finding the last version usually also necessitates
locking the newest version, making it sensible to combine the two. This
replaces the previous use of EvalPlanQualFetch(). Additionally
HeapTupleUpdated, which previously signaled either a concurrent update or
delete, is now split into two, to avoid callers needing AM specific knowledge
to differentiate. The move of finding the latest row version into tuple_lock
means that encountering a row concurrently moved into another partition will
now raise an error about "tuple to be locked" rather than "tuple to be
updated/deleted" - which is accurate, as that always happens when locking
  rows. While possibly slightly less helpful for users, it seems like an
  acceptable trade-off. As part of this commit HTSU_Result has been renamed to
  TM_Result, and its members have been expanded to differentiate between
  updating and deleting. HeapUpdateFailureData has been renamed to TM_FailureData. The
interface to speculative insertion is changed so nodeModifyTable.c does not
have to set the speculative token itself anymore. Instead there's a version of
tuple_insert, tuple_insert_speculative, that performs the speculative
insertion (without requiring a flag to signal that fact), and the speculative
  insertion is either made permanent with table_complete_speculative(succeeded =
  true) or aborted with succeeded = false. Note that multi_insert is not yet
routed through tableam, nor is COPY. Changing multi_insert requires changes to
copy.c that are large enough to better be done separately. Similarly,
although simpler, CREATE TABLE AS and CREATE MATERIALIZED VIEW are also only
going to be adjusted in a later commit. Author: Andres Freund and Haribabu
Kommi Discussion:
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
https://postgr.es/m/20190313003903.nwvrxi7rw3ywhdel@alap3.anarazel.de
https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
https://git.postgresql.org/pg/commitdiff/5db6df0c0117ff2a4e0cd87594d2db408cd5022f

- Remove spurious return. Per buildfarm member anole. Author: Andres Freund
https://git.postgresql.org/pg/commitdiff/b2db277057a375ccbcc98cc3bbce8ce5b4d788ea

Tom Lane pushed:

- Fix memory leak in printtup.c. Commit f2dec34e1 changed things so that
printtup's output stringinfo buffer was allocated outside the per-row
temporary context, not inside it. This creates a need to free that buffer
explicitly when the temp context is freed, but that was overlooked. In most
cases, this is all happening inside a portal or executor context that will go
away shortly anyhow, but that's not always true. Notably, the stringinfo ends
up getting leaked when JDBC uses row-at-a-time fetches. For a query that
  returns wide rows, that adds up after a while. Per bug #15700 from Matthias
Otterbach. Back-patch to v11 where the faulty code was added. Discussion:
https://postgr.es/m/15700-8c408321a87d56bb@postgresql.org
https://git.postgresql.org/pg/commitdiff/f2004f19ed9c9228d3ea2b12379ccb4b9212641f

- Make checkpoint requests more robust. Commit 6f6a6d8b1 introduced a delay of
up to 2 seconds if we're trying to request a checkpoint but the checkpointer
hasn't started yet (or, much less likely, our kill() call fails). However
buildfarm experience shows that that's not quite enough for slow or
heavily-loaded machines. There's no good reason to assume that the
checkpointer won't start eventually, so we may as well make the timeout much
longer, say 60 sec. However, if the caller didn't say CHECKPOINT_WAIT, it
seems like a bad idea to be waiting at all, much less for as long as 60 sec.
We can remove the need for that, and make this whole thing more robust, by
adjusting the code so that the existence of a pending checkpoint request is
clear from the contents of shared memory, and making sure that the
checkpointer process will notice it at startup even if it did not get a
signal. In this way there's no need for a non-CHECKPOINT_WAIT call to wait at
all; if it can't send the signal, it can nonetheless assume that the
checkpointer will eventually service the request. A potential downside of
this change is that "kill -INT" on the checkpointer process is no longer
enough to trigger a checkpoint, should anyone be relying on something so
hacky. But there's no obvious reason to do it like that rather than issuing a
plain old CHECKPOINT command, so we'll assume that nobody is. There doesn't
seem to be a way to preserve this undocumented quasi-feature without
introducing race conditions. Since a principal reason for messing with this
is to prevent intermittent buildfarm failures, back-patch to all supported
branches. Discussion: https://postgr.es/m/27830.1552752475@sss.pgh.pa.us
https://git.postgresql.org/pg/commitdiff/0dfe3d0ef5799e5197adb127a0ec354b61429982

- Restructure libpq's handling of send failures. Originally, if libpq got a
failure (e.g., ECONNRESET) while trying to send data to the server, it would
just report that and wash its hands of the matter. It was soon found that
that wasn't a very pleasant way of coping with server-initiated
disconnections, so we introduced a hack (pqHandleSendFailure) in the code that
sends queries to make it peek ahead for server error reports before reporting
the send failure. It now emerges that related cases can occur during
connection setup; in particular, as of TLS 1.3 it's unsafe to assume that SSL
connection failures will be reported by SSL_connect rather than during our
first send attempt. We could have fixed that in a hacky way by applying
pqHandleSendFailure after a startup packet send failure, but (a)
pqHandleSendFailure explicitly disclaims suitability for use in any state
except query startup, and (b) the problem still potentially exists for other
send attempts in libpq. Instead, let's fix this in a more general fashion by
eliminating pqHandleSendFailure altogether, and instead arranging to postpone
all reports of send failures in libpq until after we've made an attempt to
read and process server messages. The send failure won't be reported at all
if we find a server message or detect input EOF. (Note: this removes one of
the reasons why libpq typically overwrites, rather than appending to,
conn->errorMessage: pqHandleSendFailure needed that behavior so that the send
failure report would be replaced if we got a server message or read failure
report. Eventually I'd like to get rid of that overwrite behavior altogether,
  but today is not that day. For the moment, pqSendSome is assuming that its
  callees will overwrite, not append to, conn->errorMessage.) Possibly this
change should get back-patched someday; but it needs testing first, so let's
not consider that till after v12 beta. Discussion:
https://postgr.es/m/CAEepm=2n6Nv+5tFfe8YnkUm1fXgvxR0Mm1FoD+QKG-vLNGLyKg@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/1f39a1c0641531e0462a4822f2dba904c5d4d699

- Sort the dependent objects before deletion in DROP OWNED BY. This finishes a
task we left undone in commit f1ad067fc, by extending the
delete-in-descending-OID-order rule to deletions triggered by DROP OWNED BY.
We've coped with machine-dependent deletion orders one time too many, and the
new issues caused by Peter G's recent nbtree hacking seem like the last straw.
Discussion: https://postgr.es/m/E1h6eep-0001Mw-Vd@gemulon.postgresql.org
https://git.postgresql.org/pg/commitdiff/8aa9dd74b36757342b6208fbfebb5b35c2d67c53

- Improve error reporting for DROP FUNCTION/PROCEDURE/AGGREGATE/ROUTINE. These
commands allow the argument type list to be omitted if there is just one
object that matches by name. However, if that syntax was used with DROP IF
EXISTS and there was more than one match, you got a "function ... does not
exist, skipping" notice message rather than a truthful complaint about the
ambiguity. This was basically due to poor factorization and a rats-nest of
logic, so refactor the relevant lookup code to make it cleaner. Note that
this amounts to narrowing the scope of which sorts of error conditions IF
EXISTS will bypass. Per discussion, we only intend it to skip no-such-object
cases, not multiple-possible-matches cases. Per bug #15572 from Ash Marath.
Although this definitely seems like a bug, it's not clear that people would
thank us for changing the behavior in minor releases, so no back-patch. David
Rowley, reviewed by Julien Rouhaud and Pavel Stehule Discussion:
https://postgr.es/m/15572-ed1b9ed09503de8a@postgresql.org
https://git.postgresql.org/pg/commitdiff/bfb456c1b9656d5b717b84d833f62cf712b21726
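
  The ambiguity case can be sketched as follows (the function name is
  hypothetical):

  ```sql
  CREATE FUNCTION f(int) RETURNS int LANGUAGE sql AS 'SELECT $1';
  CREATE FUNCTION f(text) RETURNS text LANGUAGE sql AS 'SELECT $1';

  -- Omitting the argument list is only allowed when the name matches a
  -- single object; previously this reported "function f does not exist,
  -- skipping", now it complains about the ambiguity instead.
  DROP FUNCTION IF EXISTS f;
  ```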

- Don't copy PartitionBoundInfo in set_relation_partition_info. I (tgl) remain
dubious that it's a good idea for PartitionDirectory to hold a pin on a
relcache entry throughout planning, rather than copying the data or using some
kind of refcount scheme. However, it's certainly the responsibility of the
PartitionDirectory code to ensure that what it's handing back is a stable data
structure, not that of its caller. So this is a pretty clear oversight in
commit 898e5e329, and one that can cost a lot of performance when there are
many partitions. Amit Langote (extracted from a much larger patch set)
Discussion:
https://postgr.es/m/CA+TgmoY3bRmGB6-DUnoVy5fJoreiBJ43rwMrQRCdPXuKt4Ykaw@mail.gmail.com
Discussion:
https://postgr.es/m/9d7c5112-cb99-6a47-d3be-cf1ee6862a1d@lab.ntt.co.jp
https://git.postgresql.org/pg/commitdiff/c8151e642368599dc77c4448e6bdc34cc8810475

- Rearrange make_partitionedrel_pruneinfo to avoid work when we can't prune.
Postpone most of the effort of constructing PartitionedRelPruneInfos until
after we have found out whether run-time pruning is needed at all. This costs
very little duplicated effort (basically just an extra find_base_rel() call
per partition) and saves quite a bit when we can't do run-time pruning. Also,
merge the first loop (for building relid_subpart_map) into the second loop,
since we don't need the map to be valid during that loop. Amit Langote
Discussion:
https://postgr.es/m/9d7c5112-cb99-6a47-d3be-cf1ee6862a1d@lab.ntt.co.jp
https://git.postgresql.org/pg/commitdiff/734308a220729e4ececa3758bdcae39a335d55ea

- Add unreachable "break" to satisfy -Wimplicit-fallthrough. gcc is a bit
pickier about this than perhaps it should be. Discussion:
https://postgr.es/m/E1h6zzT-0003ft-DD@gemulon.postgresql.org
https://git.postgresql.org/pg/commitdiff/fb50d3f03fe6876b878d636a312c2ccc1f4f99af

- Accept XML documents when xmloption = content, as required by SQL:2006+.
Previously we were using the SQL:2003 definition, which doesn't allow this,
but that creates a serious dump/restore gotcha: there is no setting of
xmloption that will allow all valid XML data. Hence, switch to the 2006
definition. Since libxml doesn't accept <!DOCTYPE> directives in the mode we
use for CONTENT parsing, the implementation is to detect <!DOCTYPE> in the
input and switch to DOCUMENT parsing mode. This should not cost much, because
<!DOCTYPE> should be close to the front of the input if it's there at all.
It's possible that this causes the error messages for malformed input to be
slightly different than they were before, if said input includes <!DOCTYPE>;
but that does not seem like a big problem. In passing, buy back a few cycles
in parsing of large XML documents by not doing strlen() of the whole input in
parse_xml_decl(). Back-patch because dump/restore failures are not nice.
This change shouldn't break any cases that worked before, so it seems safe to
back-patch. Chapman Flack (revised a bit by me) Discussion:
https://postgr.es/m/CAN-V+g-6JqUQEQZ55Q3toXEN6d5Ez5uvzL4VR+8KtvJKj31taw@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/8d1dadb25bb522e09af7f141e9d78db5805d868c
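A sketch of what the 2006 definition permits (example values invented here, not from the commit): under `xmloption = content`, both a full document with a `<!DOCTYPE>` directive and a bare content fragment should now be accepted.

```sql
SET xmloption = content;

-- A complete document, including a <!DOCTYPE>, is now valid CONTENT:
SELECT xmlparse(CONTENT '<!DOCTYPE note [<!ELEMENT note (#PCDATA)>]><note>hi</note>');

-- A bare fragment remains valid, as before:
SELECT xmlparse(CONTENT 'just some text <b>and</b> markup');
```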

- Ensure xmloption = content while restoring pg_dump output. In combination with
the previous commit, this ensures that valid XML data can always be dumped and
reloaded, whether it is "document" or "content". Discussion:
https://postgr.es/m/CAN-V+g-6JqUQEQZ55Q3toXEN6d5Ez5uvzL4VR+8KtvJKj31taw@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/4870dce37fab7ed308cee9856bec4d4c8c7590b3

- Remove inadequate check for duplicate "xml" PI. I failed to think about PIs
starting with "xml". We don't really need this check at all, so just take it
out. Oversight in commit 8d1dadb25 et al.
https://git.postgresql.org/pg/commitdiff/f778e537a0d02d5e05016da3e6f4068914101dee

Andrew Gierth pushed:

- Implement OR REPLACE option for CREATE AGGREGATE. Aggregates have acquired a
dozen or so optional attributes in recent years for things like parallel query
and moving-aggregate mode; the lack of an OR REPLACE option to add or change
these for an existing agg makes extension upgrades gratuitously hard. Rectify.
https://git.postgresql.org/pg/commitdiff/01bde4fa4c24f4eea0a634d8fcad0b376efda6b1
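For instance (a hypothetical upgrade script; the aggregate name is invented), an extension can now mark an existing aggregate parallel safe in place, without dropping and re-creating it:

```sql
CREATE OR REPLACE AGGREGATE my_sum(int) (
    sfunc       = int4pl,
    stype       = int,
    combinefunc = int4pl,   -- newer attribute added without a DROP
    parallel    = safe
);
```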

Peter Geoghegan pushed:

- Tweak nbtsearch.c function prototype order. nbtsearch.c's static function
prototypes were slightly out of order. Make the order consistent with static
function definition order.
https://git.postgresql.org/pg/commitdiff/1009920aaa39e19ecb36409447ece2f8102f4225

- Refactor nbtree insertion scankeys. Use dedicated struct to represent nbtree
insertion scan keys. Having a dedicated struct makes the difference between
search type scankeys and insertion scankeys a lot clearer, and simplifies the
signature of several related functions. This is based on a suggestion by
Andrey Lepikhov. Streamline how unique index insertions cache binary search
progress. Cache the state of in-progress binary searches within
_bt_check_unique() for later instead of having callers avoid repeating the
binary search in an ad-hoc manner. This makes it easy to add a new
optimization: _bt_check_unique() now falls out of its loop immediately in the
common case where it's already clear that there couldn't possibly be a
duplicate. The new _bt_check_unique() scheme makes it a lot easier to manage
cached binary search effort afterwards, from within _bt_findinsertloc(). This
is needed for the upcoming patch to make nbtree tuples unique by treating heap
TID as a final tiebreaker column. Unique key binary searches need to restore
lower and upper bounds. They cannot simply continue to use the >= lower bound
as the offset to insert at, because the heap TID tiebreaker column must be
used in comparisons for the restored binary search (unlike the original
_bt_check_unique() binary search, where scankey's heap TID column must be
omitted). Author: Peter Geoghegan, Heikki Linnakangas Reviewed-By: Heikki
Linnakangas, Andrey Lepikhov Discussion:
https://postgr.es/m/CAH2-WzmE6AhUdk9NdWBf4K3HjWXZBX3+umC7mH7+WDrKcRtsOw@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/e5adcb789d80ba565ccacb1ed4341a7c29085238

- Make heap TID a tiebreaker nbtree index column. Make nbtree treat all index
tuples as having a heap TID attribute. Index searches can distinguish
duplicates by heap TID, since heap TID is always guaranteed to be unique.
This general approach has numerous benefits for performance, and is
prerequisite to teaching VACUUM to perform "retail index tuple deletion".
Naively adding a new attribute to every pivot tuple has unacceptable overhead
(it bloats internal pages), so suffix truncation of pivot tuples is added.
This will usually truncate away the "extra" heap TID attribute from pivot
tuples during a leaf page split, and may also truncate away additional user
attributes. This can increase fan-out, especially in a multi-column index.
Truncation can only occur at the attribute granularity, which isn't
particularly effective, but works well enough for now. A future patch may add
support for truncating "within" text attributes by generating truncated key
values using new opclass infrastructure. Only new indexes (BTREE_VERSION 4
indexes) will have insertions that treat heap TID as a tiebreaker attribute,
or will have pivot tuples undergo suffix truncation during a leaf page split
(on-disk compatibility with versions 2 and 3 is preserved). Upgrades to
version 4 cannot be performed on-the-fly, unlike upgrades from version 2 to
version 3. contrib/amcheck continues to work with version 2 and 3 indexes,
while also enforcing stricter invariants when verifying version 4 indexes.
These stricter invariants are the same invariants described by "3.1.12
Sequencing" from the Lehman and Yao paper. A later patch will enhance the
logic used by nbtree to pick a split point. This patch is likely to
negatively impact performance without smarter choices around the precise point
to split leaf pages at. Making these two mostly-distinct sets of enhancements
into distinct commits seems like it might clarify their design, even though
neither commit is particularly useful on its own. The maximum allowed size of
new tuples is reduced by an amount equal to the space required to store an
extra MAXALIGN()'d TID in a new high key during leaf page splits. The
user-facing definition of the "1/3 of a page" restriction is already
imprecise, and so does not need to be revised. However, there should be a
compatibility note in the v12 release notes. Author: Peter Geoghegan
Reviewed-By: Heikki Linnakangas, Alexander Korotkov Discussion:
https://postgr.es/m/CAH2-WzkVb0Kom=R+88fDFb=JSxZMFvbHVC6Mn9LJ2n=X=kS-Uw@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/dd299df8189bd00fbe54b72c64f43b6af2ffeccd
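The tiebreaker idea can be sketched outside of the nbtree code itself (an illustrative model only, not the PostgreSQL implementation): with heap TID treated as a trailing key column, index tuples sort by (key, tid) rather than by key alone, so even duplicates have one well-defined position.

```python
# Illustrative sketch: index tuples modeled as (key, tid) pairs, where
# tid is a (block, offset) heap TID.  The TID breaks ties between
# duplicates, giving every tuple a unique position in key space.

def compare(a, b):
    """Three-way compare of (key, tid) tuples; tid is the tiebreaker."""
    if a[0] != b[0]:
        return -1 if a[0] < b[0] else 1
    if a[1] != b[1]:
        return -1 if a[1] < b[1] else 1
    return 0

# Duplicates of key 42 get a stable, unique order by heap TID, which is
# what lets a scan (or retail deletion) re-find one specific version.
tuples = [(42, (7, 2)), (41, (3, 1)), (42, (7, 1))]
tuples.sort(key=lambda t: (t[0], t[1]))
```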

- Consider secondary factors during nbtree splits. Teach nbtree to give some
consideration to how "distinguishing" candidate leaf page split points are.
This should not noticeably affect the balance of free space within each half
of the split, while still making suffix truncation truncate away significantly
more attributes on average. The logic for choosing a leaf split point now
uses a fallback mode in the case where the page is full of duplicates and it
isn't possible to find even a minimally distinguishing split point. When the
page is full of duplicates, the split should pack the left half very tightly,
while leaving the right half mostly empty. Our assumption is that logical
duplicates will almost always be inserted in ascending heap TID order with v4
indexes. This strategy leaves most of the free space on the half of the split
that will likely be where future logical duplicates of the same value need to
be placed. The number of cycles added is not very noticeable. This is
important because deciding on a split point takes place while at least one
exclusive buffer lock is held. We avoid using authoritative insertion scankey
comparisons to save cycles, unlike suffix truncation proper. We use a faster
binary comparison instead. Note that even pg_upgrade'd v3 indexes make use of
these optimizations. Benchmarking has shown that even v3 indexes benefit,
despite the fact that suffix truncation will only truncate non-key attributes
in INCLUDE indexes. Grouping relatively similar tuples together is beneficial
in and of itself, since it reduces the number of leaf pages that must be
accessed by subsequent index scans. Author: Peter Geoghegan Reviewed-By:
Heikki Linnakangas Discussion:
https://postgr.es/m/CAH2-WzmmoLNQOj9mAD78iQHfWLJDszHEDrAzGTUMG3mVh5xWPw@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/fab2502433870d98271ba8751f3794e2ed44140a

- Allow amcheck to re-find tuples using new search. Teach contrib/amcheck's
bt_index_parent_check() function to take advantage of the uniqueness property
of heapkeyspace indexes in support of a new verification option: non-pivot
tuples (non-highkey tuples on the leaf level) can optionally be re-found using
a new search for each, that starts from the root page. If a tuple cannot be
re-found, report that the index is corrupt. The new "rootdescend"
verification option is exhaustive, and can therefore make a call to
bt_index_parent_check() take a lot longer. Re-finding tuples during
verification is mostly intended as an option for backend developers, since the
corruption scenarios that it alone is uniquely capable of detecting seem
fairly far-fetched. For example, "rootdescend" verification is much more
likely to detect corruption of the least significant byte of a key from a
pivot tuple in the root page of a B-Tree that already has at least three
levels. Typically, only a few tuples on a cousin leaf page are at risk of
"getting overlooked" by index scans in this scenario. The corrupt key in the
root page is only slightly corrupt: corrupt enough to give wrong answers to
some queries, and yet not corrupt enough to allow the problem to be detected
without verifying agreement between the leaf page and the root page, skipping
at least one internal page level. The existing bt_index_parent_check() checks
never cross more than a single level. Author: Peter Geoghegan Reviewed-By:
Heikki Linnakangas Discussion:
https://postgr.es/m/CAH2-Wz=yTWnVu+HeHGKb2AGiADL9eprn-cKYAto4MkKOuiGtRQ@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/c1afd175b5b2e5c44f6da34988342e00ecdfb518

- Suppress DETAIL output from a foreign_data test. Unstable sort order related
to changes to nbtree from commit dd299df8 can cause two lines of DETAIL output
to be in opposite-of-expected order. Suppress the output using the same
VERBOSITY hack that is used elsewhere in the foreign_data tests. Note that
the same foreign_data.out DETAIL output was mechanically updated by commit
dd299df8. Only a few such changes were required, though. Per buildfarm
member batfish. Discussion:
https://postgr.es/m/CAH2-WzkCQ_MtKeOpzozj7QhhgP1unXsK8o9DMAFvDqQFEPpkYQ@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/7d3bf73ac416fdd74d6c7d473e0c00a19be90c82

- Fix spurious compiler warning in nbtxlog.c. Cleanup from commit dd299df8. Per
complaint from Tom Lane.
https://git.postgresql.org/pg/commitdiff/3d0dcc5c7fb9cfc349d1b2d476a1c0c5d64522bd

- Revert "Suppress DETAIL output from a foreign_data test.". This should be
superseded by commit 8aa9dd74.
https://git.postgresql.org/pg/commitdiff/fff518d051285bc47e2694a349d410e01972730b

- Go back to suppressing foreign_data DETAIL test output. This is almost a
straight revert of commit fff518d, which itself was a revert of 7d3bf73ac. It
turns out that commit 8aa9dd74, which sorted dependent objects before deletion
in DROP OWNED BY, was not sufficient to make all remaining unstable DETAIL
output stable. Unstable DETAIL output from DROP ROLE was not affected,
because that happens to use a different code path. It doesn't seem worthwhile
to fix the other code path at this time. Discussion:
https://postgr.es/m/6226.1553274783@sss.pgh.pa.us
https://git.postgresql.org/pg/commitdiff/09963cedced7ffb98a06298cc16305767fd2b4dd

- Add nbtree high key "continuescan" optimization. Teach nbtree forward index
scans to check the high key before moving to the right sibling page in the
hope of finding that it isn't actually necessary to do so. The new check may
indicate that the scan definitely cannot find matching tuples to the right,
ending the scan immediately. We already opportunistically force a similar
"continuescan orientated" key check of the final non-pivot tuple when it's
clear that it cannot be returned to the scan due to being dead-to-all. The
new high key check is complementary. The new approach for forward scans is
more effective than checking the final non-pivot tuple, especially with
composite indexes and non-unique indexes. The improvements to the logic for
picking a split point added by commit fab25024 make it likely that relatively
dissimilar high keys will appear on a page. A distinguishing key value that
can only appear on non-pivot tuples on the right sibling page will often be
present in leaf page high keys. Since forcing the final item to be key
checked no longer makes any difference in the case of forward scans, the
existing extra key check is now only used for backwards scans. Backward scans
continue to opportunistically check the final non-pivot tuple, which is
actually the first non-pivot tuple on the page (not the last). Note that even
pg_upgrade'd v3 indexes make use of this optimization. Author: Peter
Geoghegan, Heikki Linnakangas Reviewed-By: Heikki Linnakangas Discussion:
https://postgr.es/m/CAH2-WzkOmUduME31QnuTFpimejuQoiZ-HOf0pOWeFZNhTMctvA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/29b64d1de7c77ffb5cb10696693e6ed8a6fc481c

- Suppress DETAIL output from an event_trigger test. Suppress 3 lines of
unstable DETAIL output from a DROP ROLE statement in event_trigger.sql. This
is further cleanup for commit dd299df8. Note that the event_trigger test
instability issue is very similar to the recently suppressed foreign_data test
instability issue. Both issues involve DETAIL output for a DROP ROLE
statement that needed to be changed as part of dd299df8. Per buildfarm member
macaque.
https://git.postgresql.org/pg/commitdiff/05f110cc0b83d9dc174f72cf96798299eb3e7f67

Álvaro Herrera pushed:

- Restore RI trigger sanity check. I unnecessarily removed this check in
3de241dba86f because I misunderstood what the final representation of
constraints across a partitioning hierarchy was to be. Put it back (in both
branches). Discussion:
https://postgr.es/m/201901222145.t6wws6t6vrcu@alvherre.pgsql
https://git.postgresql.org/pg/commitdiff/815b20ae0c6ed61a431fba124c736152f0df5022

- Add index_get_partition convenience function. This new function simplifies
some existing coding, as well as supports future patches. Discussion:
https://postgr.es/m/201901222145.t6wws6t6vrcu@alvherre.pgsql Reviewed-by: Amit
Langote, Jesper Pedersen
https://git.postgresql.org/pg/commitdiff/a6da0047158b8a227f883aeed19eb7fcfbef11fb

- Fix dependency recording bug for partitioned PKs. When DefineIndex recurses to
create constraints on partitions, it needs to use the value returned by
index_constraint_create to set up partition dependencies. However, in the
course of fixing the DEPENDENCY_INTERNAL_AUTO mess, commit 1d92a0c9f7dd
introduced some code to that function that clobbered the return value, causing
the recorded OID to be of the wrong object. Close examination of pg_depend
after creating the tables leads to indescribable objects :-( My sin (in commit
bdc3d7fa2376, while preparing for DDL deparsing in event triggers) was to use
a variable name for the return value that's typically used for throwaway
objects in dependency-setting calls ("referenced"). Fix by changing the
variable names to match extended practice (the return value is "myself" rather
than "referenced".) The pg_upgrade test notices the problem (in an indirect
way: the pg_dump outputs are in different order), but only if you create the
objects in a specific way that wasn't being used in the existing tests. Add a
stanza to leave some objects around that shows the bug. Catversion bump
because preexisting databases might have bogus pg_depend entries. Discussion:
https://postgr.es/m/20190318204235.GA30360@alvherre.pgsql
https://git.postgresql.org/pg/commitdiff/7e7c57bbb2ebed7e8acbd2e62fadca5a5fe5df5f

- Catversion bump announced in previous commit but forgotten.
https://git.postgresql.org/pg/commitdiff/03ae9d59bd5f5ef9a1cb387568e5cbf12b9c7b10

Thomas Munro pushed:

- Add DNS SRV support for LDAP server discovery. LDAP servers can be advertised
on a network with RFC 2782 DNS SRV records. The OpenLDAP command-line tools
automatically try to find servers that way, if no server name is provided by
the user. Teach PostgreSQL to do the same using OpenLDAP's support functions,
when building with OpenLDAP. For now, we assume that HAVE_LDAP_INITIALIZE (an
OpenLDAP extension available since OpenLDAP 2.0 and also present in Apple
LDAP) implies that you also have ldap_domain2hostlist() (which arrived in the
same OpenLDAP version and is also present in Apple LDAP). Author: Thomas
Munro Reviewed-by: Daniel Gustafsson Discussion:
https://postgr.es/m/CAEepm=2hAnSfhdsd6vXsM6VZVN0br-FbAZ-O+Swk18S5HkCP=A@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/0f086f84ad9041888b789af5871c7432f0e19c5b

Heikki Linnakangas pushed:

- Add IntegerSet, to hold large sets of 64-bit ints efficiently. The set is
implemented as a B-tree, with a compact representation at leaf items, using
Simple-8b algorithm, so that clusters of nearby values use less memory. The
IntegerSet isn't used for anything yet, aside from the test code, but we have
two patches in the works that would benefit from this: A patch to allow GiST
vacuum to delete empty pages, and a patch to reduce heap VACUUM's memory
usage, by storing the list of dead TIDs more efficiently and lifting the 1 GB
limit on its size. This includes a unit test module, in
src/test/modules/test_integerset. It can be used to verify correctness, as a
regression test, but if you run it manually, it can also print memory usage and
execution time of some of the tests. Author: Heikki Linnakangas, Andrey
Borodin Reviewed-by: Julien Rouhaud Discussion:
https://www.postgresql.org/message-id/b5e82599-1966-5783-733c-1a947ddb729f@iki.fi
https://git.postgresql.org/pg/commitdiff/df816f6ad532ad685a3897869a2e64d3a53fe312

- Delete empty pages during GiST VACUUM. To do this, we scan the GiST index
  twice. In the first pass, we make note of empty leaf pages and internal pages.
  In the second pass, we scan through the internal pages, looking for downlinks
  to the empty pages. Deleting internal pages is still not supported: as in
  nbtree, the last child of an internal page is never deleted. That means that
  if you have a workload where new keys are always inserted into a different
  area than where old keys are removed, the index will still grow without bound.
  But the rate of growth will be an order of magnitude slower than before.
  Author: Andrey Borodin
Discussion:
https://www.postgresql.org/message-id/B1E4DF12-6CD3-4706-BDBD-BF3283328F60@yandex-team.ru
https://git.postgresql.org/pg/commitdiff/7df159a620b760e289f1795b13542ed1b3e13b87

- Fix bug in the GiST vacuum's 2nd stage. We mustn't assume that the
IndexVacuumInfo pointer passed to bulkdelete() stage is still valid in the
vacuumcleanup() stage. Per very pink buildfarm.
https://git.postgresql.org/pg/commitdiff/d1b9ee4e44062cc540d8e406f49b160326d58a84

- Make the integerset test more verbose. Buildfarm member 'woodlouse' failed one
of the tests, and I'm not sure which test failed. Better to print the names of
the tests, so that it will appear in the regression.diffs on failure.
https://git.postgresql.org/pg/commitdiff/608c5f4347acefdbb2663b9fb6deab079b4b3c8b

- Make printf format strings in test_integerset portable. Use UINT64_FORMAT for
printing uint64s.
https://git.postgresql.org/pg/commitdiff/32f8ddf7e1c8b24382f98c14f6b588cd7e17418c

- More portability fixes for integerset tests. Use UINT64CONST for large
constants.
https://git.postgresql.org/pg/commitdiff/c477c68c8f660550219c69fac2ab41beb86d7f45

- Fix yet more portability bugs in integerset and its tests. There were more
large constants that needed UINT64CONST. And one variable was declared as
"int", when it needed to be uint64. These bugs were only visible on 32-bit
systems; clearly I should've tested on one, given that this code does a lot of
work with 64-bit integers. Also, in the "huge distances" test, the code
created some values with random distances between them, but the test logic
didn't take into account the possibility that the random distance was exactly
1. That never actually happens with the seed we're using, but let's be tidy.
https://git.postgresql.org/pg/commitdiff/b5fd4972a3bc758c0b8e8c9cd4aa32bacdeb6605

== Pending Patches ==

Arseny Sher sent in a patch to allow parallel workers while backends are alive
in 'smart' shutdown.

Hugh Ranalli and Kyotaro HORIGUCHI traded patches to fix an issue with
contrib/unaccent on Windows.

Nikolay Shaplov sent in a patch to implement a dummy_index access method module,
which makes it easy to test reloptions from inside of the access method
extension.

Paul Ramsey sent in another revision of a patch to implement compressed datum
slicing for TOAST.

Peter Eisentraut sent in another revision of a patch to add the stored variant
of GENERATED columns.

Masahiko Sawada sent in three more revisions of a patch to implement block-level
parallel VACUUM.

Álvaro Herrera and Amit Langote traded patches to make it possible for foreign
keys to reference partitioned tables.

David Rowley and Tom Lane traded patches to fix a performance issue in
foreign-key-aware join estimation.

Amit Langote, Yuzuko Hosoya, and Kyotaro HORIGUCHI traded patches to fix a
problem with default partition pruning.

Alexander Korotkov sent in a patch to fix an infelicity in the interlocking
between the VACUUM of the main table and the VACUUM of the TOAST table.

Rafia Sabih, Tatsuro Yamada, and Robert Haas traded patches to implement a
progress monitor for the CLUSTER command.

Michael Banck and Fabien COELHO traded patches to add progress reporting for
pg_verify_checksums.

Matheus de Oliveira sent in another revision of a patch to add support for ON
UPDATE/DELETE actions to ALTER CONSTRAINT.

Peter Eisentraut sent in another revision of a patch to add a macro to cast away
volatile without allowing changes to underlying type.

Dmitry Dolgov sent in another revision of a patch to implement index skip scan
a.k.a. loose index scan.

Dmitry Dolgov sent in two more revisions of a patch to implement generic type
subscripting.

Konstantin Knizhnik sent in another revision of a patch to add autoprepare.

Takayuki Tsunakawa sent in two more revisions of a patch to speed up
transaction completion when many relations were touched during the transaction.

Paul Guo sent in a patch to ensure target clean shutdown at the beginning of
pg_rewind.

Paul Guo sent in a patch to auto-generate a recovery.conf at the end of
pg_rewind.

Julien Rouhaud sent in another revision of a patch to add a query_id option
both to pg_stat_activity and to log_line_prefix.

Shaoqi Bai sent in three more revisions of a patch to add a tablespace TAP test
to pg_rewind.

Amit Langote sent in three more revisions of a patch to speed up planning with
partitions.

Kirk Jamison sent in another revision of a patch to avoid counting parallel
worker transactions stats.

Etsuro Fujita sent in another revision of a patch to make the PostgreSQL FDW
perform the UPPERREL_ORDERED and UPPERREL_FINAL steps remotely, and refactor
create_limit_path() to share the cost adjustment code.

Konstantin Knizhnik sent in another revision of a patch to implement a built-in
connection pooler.

Alexander Kuzmenkov sent in another revision of a patch to optimize the use of
immutable functions as relations.

David Steele sent in another revision of a patch to add exclusive backup
deprecation notes to documentation.

Kyotaro HORIGUCHI sent in another revision of a patch to fix a bug that
manifested as a WAL logging problem in PostgreSQL 9.4.3.

David Rowley sent in another revision of a patch to fix the error messages in
DROP FUNCTION IF EXISTS.

Michaël Paquier sent in two more revisions of a patch to fix a crash in
partition bounds and rework the error messages for incorrect column references.

Amit Langote sent in two more revisions of a patch to fix the planner to load
partition constraints in some cases.

Heikki Linnakangas sent in another revision of a patch to add IntegerSet, a data
structure to hold large sets of 64-bit ints efficiently.

Alexander Kuzmenkov sent in two more revisions of a patch to remove unneeded
self-joins.

Evgeniy Efimkin sent in another revision of a patch to add a new role for
subscriptions.

Tomáš Vondra sent in another revision of a patch to fix a performance issue in
remove_from_unowned_list().

Thomas Munro sent in a patch to support MacPorts for "extra" tests.

Haribabu Kommi sent in a patch to add MSVC Build support with Visual Studio 2019.

Christoph Berg sent in a patch to align timestamps in pg_regress output.

Pavan Deolasee sent in another revision of a patch to add a table-level option
to control compression.

Shawn Debnath sent in another revision of a patch to add a timeout capability
for ConditionVariableSleep.

Joel Jacobson sent in another revision of a patch to reduce the footprint of
ScanKeyword.

Tomáš Vondra sent in a patch to fix an infelicity between jsonbd and custom
compression methods.

Lucas Viecelli sent in a patch to emit a warning when creating a publication
when wal_level is not already set to logical.

Markus Timmer sent in another revision of a patch to make it possible to use ICU
as the default collation provider.

Haribabu Kommi sent in another revision of a patch to add \dA to psql to show a
table type access method.

Andres Freund sent in two more revisions of a patch to add pluggable storage.

Takuma Hoshiai sent in another revision of a patch to suppress errors thrown by
to_reg*().

Haribabu Kommi sent in a WIP patch to restructure the session attributes in
libpq to be more extensible, and to use this to add a "prefer read" option.

Christoph Berg sent in a patch to change the default for page checksums to
enabled.

Sergey Cherkashin sent in a patch to add \dA (access methods) to psql.

Ramanarayana sent in a patch to add Perldoc for the TestLib module.

Álvaro Herrera sent in two revisions of a patch to fix the display of foreign
keys in psql.

Fabrízio de Royes Mello sent in a patch to make MIN() and MAX() work with the
LSN type.

legrand legrand and Julien Rouhaud traded patches to add planning counters to
contrib/pg_stat_statements.

Simon Riggs sent in a patch to rationalize constraint error messages.

Fabien COELHO sent in a patch to reorder the cluster fsyncing and control file
changes in pg_rewind so that the latter is done after all data are committed to
disk. This reflects the actual cluster status, similarly to what is done by
pg_checksums.

Fabien COELHO sent in two more revisions of a patch to remove \cset from
pgbench.

Hadi Moshayedi sent in a patch to fix the foreign key constraint check for
partitioned tables.

Fabien COELHO sent in four revisions of a patch to address high CPU costs caused
by Zipf distributions with s < 1 in pgbench.

Chapman Flack sent in another revision of a patch to document the limitations of
the current XML implementation.

Tomáš Vondra sent in another revision of a patch to implement multivariate
histograms and MCV lists.

Pavel Stěhule sent in another revision of a patch to implement schema variables.

David Rowley sent in another revision of a patch to convert MergeAppend to
Append in some cases.

Julien Rouhaud sent in a patch to avoid full index scans on GIN indexes when
possible.

Dean Rasheed sent in a patch to ensure that the correct user's RLS is used in
VIEWs.
