From: David Fetter <david(at)fetter(dot)org>
To: PostgreSQL Announce <pgsql-announce(at)postgresql(dot)org>
Subject: == PostgreSQL Weekly News - August 2, 2020 ==
== PostgreSQL Weekly News - August 2, 2020 ==
Person of the week: https://postgresql.life/post/gilberto_castillo/
== PostgreSQL Product News ==
pgBackRest 2.28, a backup and restore system for PostgreSQL, released.
PGDay Austria has been cancelled.
check_pgbackrest 1.9, a Nagios-compatible monitor for pgBackRest, released.
== PostgreSQL Local ==
PGDay Ukraine will take place September 5th, 2020 in Lviv at the Bank Hotel.
pgDay Israel 2020 will take place on September 10, 2020 in Tel Aviv.
== PostgreSQL in the News ==
Planet PostgreSQL: http://planet.postgresql.org/
PostgreSQL Weekly News is brought to you this week by David Fetter
Submit news and announcements by Sunday at 3:00pm PST8PDT to david(at)fetter(dot)org(dot)
== Applied Patches ==
Jeff Davis pushed:
- Fix LookupTupleHashEntryHash() pipeline-stall issue. Refactor hash lookups in
nodeAgg.c to improve performance. Author: Andres Freund and Jeff Davis
- HashAgg: use better cardinality estimate for recursive spilling. Use
HyperLogLog to estimate the group cardinality in a spilled partition. This
estimate is used to choose the number of partitions if we recurse. The
previous behavior was to use the number of tuples in a spilled partition as
the estimate for the number of groups, which led to overpartitioning. That
could cause the number of batches to be much higher than expected (with each
batch being very small), which made it harder to interpret EXPLAIN ANALYZE
results. Reviewed-by: Peter Geoghegan Discussion:
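This is not the committed nodeAgg.c logic, but a minimal sketch of the idea: derive the recursive partition count from an estimated group cardinality (which HyperLogLog now supplies) instead of from the raw tuple count. All names and constants here (choose_num_partitions, GROUPS_PER_PARTITION, the clamps) are hypothetical.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: pick a power-of-two partition count so that each
 * recursive partition is expected to hold roughly GROUPS_PER_PARTITION
 * groups, clamped to a sane range. */
#define MIN_PARTITIONS 4
#define MAX_PARTITIONS 1024
#define GROUPS_PER_PARTITION 256

static uint32_t
choose_num_partitions(uint64_t estimated_groups)
{
    uint32_t npartitions = MIN_PARTITIONS;

    /* grow by powers of two until each partition's expected share is small */
    while (npartitions < MAX_PARTITIONS &&
           estimated_groups / npartitions > GROUPS_PER_PARTITION)
        npartitions *= 2;

    return npartitions;
}
```

With a cardinality estimate of ten thousand groups this yields 64 partitions, whereas treating a million spilled tuples as a million "groups" would drive the count straight to the cap, producing the many tiny batches the commit message describes.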
- Use pg_bitutils for HyperLogLog. Using pg_leftmost_one_pos32() yields
substantial performance benefits. Backpatching to version 13 because HLL is
used for HashAgg improvements in 9878b643, which was also backpatched to 13.
Reviewed-by: Peter Geoghegan Discussion:
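The pg_bitutils helper at work here can be sketched with a compiler intrinsic. The hll_rank function below is an illustrative reduction of how HyperLogLog uses the leftmost set bit to compute a register's rank, not the actual hyperloglog.c code; it assumes GCC/Clang for __builtin_clz.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of a pg_leftmost_one_pos32-style helper: 0-based index of the
 * most significant set bit, via an intrinsic that typically compiles to
 * a single instruction (the word must be nonzero). */
static int
leftmost_one_pos32(uint32_t word)
{
    return 31 - __builtin_clz(word);
}

/* HLL-style rank: number of leading zero bits in the hash suffix, plus
 * one.  Illustrative only; the real addHyperLogLog() bookkeeping differs. */
static int
hll_rank(uint32_t hash_suffix, int nbits)
{
    if (hash_suffix == 0)
        return nbits + 1;
    return nbits - leftmost_one_pos32(hash_suffix);
}
```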
Michaël Paquier pushed:
- Fix handling of structure for bytea data type in ECPG. Some code paths
dedicated to bytea used the structure for varchar. This did not lead to any
actual bugs, as bytea and varchar have the same definition, but it could
become a trap if one of these definitions changes for a new feature or a bug
fix. Issue introduced by 050710b. Author: Shenhao Wang Reviewed-by: Vignesh
C, Michael Paquier Discussion:
- Fix corner case with 16kB-long decompression in pgcrypto, take 2. A compressed
stream may end with an empty packet. In this case decompression finishes
before reading the empty packet and the remaining stream packet causes a
failure in reading the following data. This commit makes sure to consume such
extra data, avoiding a failure when decompressing the data. This corner case
was reproducible easily with a data length of 16kB, and existed since e94dd6a.
A cheap regression test is added to cover this case based on a random,
incompressible string. The first attempt at this patch turned up an
older failure within the compression logic of pgcrypto, fixed by b9b6105.
This involved SLES 15 with z390 where a custom flavor of libz gets used. Bonus
thanks to Mark Wong for providing access to the specific environment.
Reported-by: Frank Gagnepain Author: Kyotaro Horiguchi, Michael Paquier
Reviewed-by: Tom Lane Discussion:
- Fix incorrect print format in json.c. Oid is unsigned, so %u needs to be used
and not %d. The code path involved here is not normally reachable, so no
backpatch is done. Author: Justin Pryzby Discussion:
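The reason %u is required is easy to demonstrate: an Oid is unsigned, and a value above INT_MAX prints as a negative number under %d. A minimal illustration (format_oid is a made-up helper, not PostgreSQL code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

typedef unsigned int Oid;   /* as in PostgreSQL's c.h */

/* Format an OID both ways to show why %u is needed: under %d, OIDs
 * above INT_MAX are misprinted as negative numbers. */
static void
format_oid(Oid oid, char *with_u, char *with_d, size_t len)
{
    snprintf(with_u, len, "%u", oid);
    snprintf(with_d, len, "%d", (int) oid);  /* the bug being fixed */
}
```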
- doc: Mention index references in pg_inherits. Partitioned indexes are also
registered in pg_inherits, but the description of this catalog did not reflect
that. Author: Dagfinn Ilmari Mannsåker Discussion:
https://email@example.com Backpatch-through: 11
- Include partitioned tables for tab completion of VACUUM in psql. The relkinds
that support indexing are the same as the ones supporting VACUUM, so the code
gets refactored a bit with the completion query used for CLUSTER, but there is
no change for CLUSTER in this commit. Author: Justin Pryzby Reviewed-by:
Fujii Masao, Michael Paquier, Masahiko Sawada Discussion:
- Use multi-inserts for pg_attribute and pg_shdepend. For pg_attribute, this
allows inserting a full set of attributes for a relation at once (roughly 15%
of WAL reduction in extreme cases). For pg_shdepend, this reduces the work
done when creating new shared dependencies from a database template. The
number of slots used for the insertion is capped at 64kB of data inserted for
both, depending on the number of items to insert and the length of the rows
involved. More can be done for other catalogs, like pg_depend. This part
requires a different approach as the number of slots to use depends also on
the number of entries discarded as pinned dependencies. This is also related
to the rework of dependency handling for ALTER TABLE and CREATE TABLE, mainly.
Author: Daniel Gustafsson Reviewed-by: Andres Freund, Michael Paquier
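The 64kB cap on buffered data can be sketched as a simple clamp: one slot per row, but never more rows than fit in the buffer budget. The function and constant names below are hypothetical, not the catalog code itself.

```c
#include <assert.h>

/* Illustrative cap for a catalog multi-insert: use one slot per item,
 * but never buffer more than ~64kB of row data at a time. */
#define MAX_BUFFERED_BYTES (64 * 1024)

static int
multi_insert_slots(int nitems, int row_width)
{
    int max_by_bytes = MAX_BUFFERED_BYTES / row_width;

    if (max_by_bytes < 1)
        max_by_bytes = 1;       /* always make progress, even for huge rows */
    return nitems < max_by_bytes ? nitems : max_by_bytes;
}
```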
- Fix comment in instrument.h. local_blks_dirtied tracks the number of local
blocks dirtied, not shared ones. Author: Kirk Jamison Discussion:
- Minimize slot creation for multi-inserts of pg_shdepend. When doing multiple
insertions in pg_shdepend for the copy of dependencies from a template
database in CREATE DATABASE, the same number of slots would have been created
and used all the time. As the number of items to insert is not known in
advance, most of those slots were created for nothing. This improves slot
handling so that slots are created only when needed, minimizing the
overhead of the operation. Author: Michael Paquier Reviewed-by: Daniel
Gustafsson Discussion: https://postgr.es/m/20200731024148.GB3317@paquier.xyz
Peter Geoghegan pushed:
- Remove hashagg_avoid_disk_plan GUC. Note: This GUC was originally named
enable_hashagg_disk when it appeared in commit 1f39bce0, which added
disk-based hash aggregation. It was subsequently renamed in commit 92c58fd9.
Author: Peter Geoghegan Reviewed-By: Jeff Davis, Álvaro Herrera Discussion:
Backpatch: 13-, where disk-based hash aggregation was introduced.
- Doc: Remove obsolete CREATE AGGREGATE note. The planner is in fact willing to
use hash aggregation when work_mem is not set high enough for everything to
fit in memory. This has been the case since commit 1f39bce0, which added
disk-based hash aggregation. There are a few remaining cases in which hash
aggregation is avoided as a matter of policy when the planner surmises that
spilling will be necessary. For example, callers of choose_hashed_setop()
still conservatively avoid hash aggregation when spilling is anticipated. That
doesn't seem like a good enough reason to mention hash aggregation in this
context. Backpatch: 13-, where disk-based hash aggregation was introduced.
- Correct obsolete UNION hash aggs comment. Oversight in commit 1f39bce0, which
added disk-based hash aggregation. Backpatch: 13-, where disk-based hash
aggregation was introduced.
- Rename another "hash_mem" local variable. Missed by my commit 564ce621.
Backpatch: 13-, where disk-based hash aggregation was introduced.
- Add hash_mem_multiplier GUC. Add a GUC that acts as a multiplier on work_mem.
It gets applied when sizing executor node hash tables that were previously
size constrained using work_mem alone. The new GUC can be used to
preferentially give hash-based nodes more memory than the generic work_mem
limit. It is intended to enable admin tuning of the executor's memory usage.
Overall system throughput and system responsiveness can be improved by giving
hash-based executor nodes more memory (especially over sort-based
alternatives, which are often much less sensitive to being memory
constrained). The default value for hash_mem_multiplier is 1.0, which is also
the minimum valid value. This means that hash-based nodes continue to apply
work_mem in the traditional way by default. hash_mem_multiplier is generally
useful. However, it is being added now due to concerns about hash aggregate
performance stability for users that upgrade to Postgres 13 (which added
disk-based hash aggregation in commit 1f39bce0). While the old hash aggregate
behavior risked out-of-memory errors, it is nevertheless likely that many
users actually benefited. Hash agg's previous indifference to work_mem during
query execution was not just faster; it also accidentally made aggregation
resilient to grouping estimate problems (at least in cases where this didn't
create destabilizing memory pressure). hash_mem_multiplier can provide a
certain kind of continuity with the behavior of Postgres 12 hash aggregates in
cases where the planner incorrectly estimates that all groups (plus related
allocations) will fit in work_mem/hash_mem. This seems necessary because
hash-based aggregation is usually much slower when only a small fraction of
all groups can fit. Even when it isn't possible to totally avoid hash
aggregates that spill, giving hash aggregation more memory will reliably
improve performance (the same cannot be said for external sort operations,
which appear to be almost unaffected by memory availability provided it's at
least possible to get a single merge pass). The PostgreSQL 13 release notes
should advise users that increasing hash_mem_multiplier can help with
performance regressions associated with hash aggregation. That can be taken
care of by a later commit. Author: Peter Geoghegan Reviewed-By: Álvaro
Herrera, Jeff Davis Discussion:
Backpatch: 13-, where disk-based hash aggregation was introduced.
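The core of the new GUC is just a multiplier applied when sizing hash tables: work_mem stays the sort/generic budget, while hash-based nodes get work_mem times hash_mem_multiplier. The sketch below is a deliberately simplified stand-in for that calculation (values in kB, as PostgreSQL stores work_mem), not the server's actual implementation.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified: memory budget for a hash-based executor node, derived
 * from work_mem and hash_mem_multiplier.  With the default multiplier
 * of 1.0 this degenerates to plain work_mem. */
static int64_t
hash_mem_limit_kb(int work_mem_kb, double hash_mem_multiplier)
{
    return (int64_t) (work_mem_kb * hash_mem_multiplier);
}
```

An admin who sees hash aggregates spilling after an upgrade could, for example, run `SET hash_mem_multiplier = 2.0;` to double the hash budget without touching work_mem for sorts.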
- Restore lost amcheck TOAST test coverage. Commit eba77534 fixed an amcheck
false positive bug involving inconsistencies in TOAST input state between
table and index. A test case was added that verified that such an
inconsistency didn't result in a spurious corruption related error. Test
coverage from the test was accidentally lost by commit 501e41dd, which
propagated ALTER TABLE ... SET STORAGE attstorage state to indexes. This
broke the test because the test specifically relied on attstorage not being
propagated. This artificially forced there to be index tuples whose datums
were equivalent to the datums in the heap without the datums actually being
bitwise equal. Fix this by updating pg_attribute directly instead. Commit
501e41dd made similar changes to a test_decoding TOAST-related test case which
made the same assumption, but overlooked the amcheck test case. Backpatch:
11-, just like commit eba77534 (and commit 501e41dd).
Etsuro Fujita pushed:
- Fix some issues with step generation in partition pruning. In the case of
range partitioning, get_steps_using_prefix() assumes that the passed-in prefix
list contains at least one clause for each of the partition keys earlier than
one specified in the passed-in step_lastkeyno, but the caller (ie,
gen_prune_steps_from_opexps()) didn't take it into account, which led to a
server crash or incorrect results when the list contained no clauses for such
partition keys, as reported in bug #16500 and #16501 from Kobayashi Hisanori.
Update the caller to call that function only when the list created there
contains at least one clause for each of the earlier partition keys in the
case of range partitioning. While at it, fix some other issues: * The list
to pass to get_steps_using_prefix() is allowed to contain multiple clauses
for the same partition key, as described in the comment for that function,
but that function actually assumed that the list contained just a single
clause for each of middle partition keys, which led to an assertion failure
when the list contained multiple clauses for such partition keys. Update
that function to match the comment. * In the case of hash partitioning,
partition keys are allowed to be NULL, in which case the list to pass to
get_steps_using_prefix() contains no clauses for NULL partition keys, but
that function treats that case as like the case of range partitioning, which
led to the assertion failure. Update the assertion test to take into
account NULL partition keys in the case of hash partitioning. * Fix a typo
in a comment in get_steps_using_prefix_recurse(). * gen_partprune_steps()
failed to detect self-contradiction from strict-qual clauses and an IS NULL
clause for the same partition key in some cases, producing incorrect
partition-pruning steps, which led to incorrect results of partition
pruning, but didn't cause any user-visible problems fortunately, as the
self-contradiction is detected later in the query planning. Update that
function to detect the self-contradiction. Per bug #16500 and #16501 from
Kobayashi Hisanori. Patch by me, initial diagnosis for the reported issue and
review by Dmitry Dolgov. Back-patch to v11, where partition pruning was introduced.
Amit Kapila pushed:
- Extend the logical decoding output plugin API with stream methods. This adds
seven methods to the output plugin API, adding support for streaming changes
of large in-progress transactions. * stream_start * stream_stop *
stream_abort * stream_commit * stream_change * stream_message *
stream_truncate Most of this is a simple extension of the existing methods,
with the semantic difference that the transaction (or subtransaction) is
incomplete and may be aborted later (which is something the regular API does
not really need to deal with). This also extends the 'test_decoding' plugin,
implementing these new stream methods. The stream_start/stream_stop pair is used
to demarcate a chunk of changes streamed for a particular toplevel
transaction. This commit simply adds these new APIs and the upcoming patch to
"allow the streaming mode in ReorderBuffer" will use these APIs. Author:
Tomas Vondra, Dilip Kumar, Amit Kapila Reviewed-by: Amit Kapila Tested-by:
Neha Sharma and Mahendra Singh Thalor Discussion:
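The shape of the extension is a set of optional function pointers a plugin fills in. The struct below is an illustrative stand-in with simplified signatures, not the real OutputPluginCallbacks from the PostgreSQL headers.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the streaming portion of an output plugin's
 * callback table. */
typedef struct StreamCallbacks
{
    void (*stream_start)(void *ctx);
    void (*stream_stop)(void *ctx);
    void (*stream_abort)(void *ctx);
    void (*stream_commit)(void *ctx);
    void (*stream_change)(void *ctx);
    void (*stream_message)(void *ctx);
    void (*stream_truncate)(void *ctx);
} StreamCallbacks;

static void
noop(void *ctx)
{
    (void) ctx;                 /* placeholder plugin method */
}

/* A plugin supports streaming only if the core methods are all set;
 * message/truncate can reasonably stay optional in this sketch. */
static int
streaming_supported(const StreamCallbacks *cb)
{
    return cb->stream_start && cb->stream_stop && cb->stream_abort &&
           cb->stream_commit && cb->stream_change;
}
```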
David Rowley pushed:
- Doc: Improve documentation for pg_jit_available(). Per complaint from Scott
Ribe. Based on wording suggestion from Tom Lane. Discussion:
Backpatch-through: 11, where pg_jit_available() was added
- Make EXPLAIN ANALYZE of HashAgg more similar to Hash Join. There were various
unnecessary differences between Hash Agg's EXPLAIN ANALYZE output and Hash
Join's. Here we modify the Hash Agg output so that it's better aligned to
Hash Join's. The following changes have been made: 1. Start batches counter
at 1 instead of 0. 2. Always display the "Batches" property, even when we
didn't spill to disk. 3. Use the text "Batches" instead of "HashAgg
Batches" for text format. 4. Use the text "Memory Usage" instead of "Peak
Memory Usage" for text format. 5. Include "Batches" before "Memory Usage"
in both text and non-text formats. In passing also modify the "Planned
Partitions" property so that we show it regardless of whether the value is 0
for non-text EXPLAIN formats. This was pointed out by Justin Pryzby and
probably should have been part of 40efbf870. Reviewed-by: Justin Pryzby, Jeff
Backpatch-through: 13, where HashAgg batching was introduced
- Use int64 instead of long in incremental sort code. 64-bit Windows has 4-byte
long values, which are not suitable for tracking disk space usage in the
incremental sort code. Let's just make all these fields int64s. Author: James
Backpatch-through: 13, where the incremental sort code was added
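The portability point is that `long` is 8 bytes on 64-bit Unix but only 4 bytes on 64-bit Windows (LLP64), so a byte counter in a `long` overflows past 2GB there. A fixed-width type sidesteps the platform difference; this is a generic illustration, not the patched sort code.

```c
#include <assert.h>
#include <stdint.h>

/* int64_t is 8 bytes everywhere, so disk-usage totals can safely
 * exceed 2GB on all platforms, unlike a counter held in a long. */
static int64_t
add_disk_usage(int64_t total_bytes, int64_t delta_bytes)
{
    return total_bytes + delta_bytes;
}
```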
Thomas Munro pushed:
- Move syncscan.c to src/backend/access/common. Since the tableam.c code needs
to make use of the syncscan.c routines itself, and since other block-oriented
AMs might also want to use it one day, it didn't make sense for it to live
under src/backend/access/heap. Reviewed-by: Andres Freund
- Use a long lived WaitEventSet for WaitLatch(). Create LatchWaitSet at backend
startup time, and use it to implement WaitLatch(). This avoids repeated
epoll/kqueue setup and teardown system calls. Reorder SubPostmasterMain()
slightly so that we restore the postmaster pipe and Windows signal emulation
before we reach InitPostmasterChild(), to make this work in EXEC_BACKEND
builds. Reviewed-by: Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com> Discussion:
- Use WaitLatch() for condition variables. Previously, condition_variable.c
created a long lived WaitEventSet to avoid extra system calls. WaitLatch()
now uses something similar internally, so there is no point in wasting an
extra kernel descriptor. Reviewed-by: Kyotaro Horiguchi
- Introduce a WaitEventSet for the stats collector. This avoids some
epoll/kqueue system calls for every wait. Reviewed-by: Kyotaro Horiguchi
- Cache smgrnblocks() results in recovery. Avoid repeatedly calling
lseek(SEEK_END) during recovery by caching the size of each fork. For now, we
can't use the same technique in other processes, because we lack a shared
invalidation mechanism. Do this by generalizing the pre-existing caching used
by FSM and VM to support all forks. Discussion:
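The caching idea reduces to: remember the fork's size after the first lookup and trust that value until the fork is extended or truncated. The sketch below counts "system calls" instead of issuing real lseek()s; all names and the sentinel are hypothetical, not the smgr code.

```c
#include <assert.h>
#include <stdint.h>

#define CACHE_UNSET UINT32_MAX      /* InvalidBlockNumber-style sentinel */

typedef struct ForkSizeCache
{
    uint32_t cached_blocks;
    int      lookups;               /* stands in for lseek(SEEK_END) calls */
} ForkSizeCache;

static uint32_t
nblocks_real_lookup(ForkSizeCache *c, uint32_t actual_blocks)
{
    c->lookups++;                   /* the system call we want to avoid */
    c->cached_blocks = actual_blocks;
    return actual_blocks;
}

/* Recovery is single-writer, so the cache stays valid until this
 * process itself extends or truncates the fork. */
static uint32_t
nblocks_cached(ForkSizeCache *c, uint32_t actual_blocks)
{
    if (c->cached_blocks != CACHE_UNSET)
        return c->cached_blocks;    /* served from cache, no syscall */
    return nblocks_real_lookup(c, actual_blocks);
}
```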
- Preallocate some DSM space at startup. Create an optional region in the main
shared memory segment that can be used to acquire and release "fast" DSM
segments, and can benefit from huge pages allocated at cluster startup time,
if configured. Fall back to the existing mechanisms when that space is full.
The size is controlled by a new GUC min_dynamic_shared_memory, defaulting to
0. Main region DSM segments initially contain whatever garbage the memory
held last time they were used, rather than zeroes. That change revealed that
DSA areas failed to initialize themselves correctly in memory that wasn't
zeroed first, so fix that problem. Discussion:
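The control flow is a bump allocator over a preallocated region with a fallback path. The real code hands out DSM segments from shared memory, not malloc; this sketch only shows the "fast path, else fall back" structure, and note that, like main-region DSM segments, the region's memory is not zeroed here.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct FastRegion
{
    char  *base;                /* preallocated at startup, maybe huge pages */
    size_t size;
    size_t used;
} FastRegion;

/* Serve from the fast region while it has room; otherwise fall back to
 * an ordinary allocation (standing in for the regular DSM mechanisms). */
static void *
region_alloc(FastRegion *r, size_t nbytes, int *from_region)
{
    if (r->used + nbytes <= r->size)
    {
        void *p = r->base + r->used;

        r->used += nbytes;
        *from_region = 1;
        return p;               /* contents are whatever was there before */
    }
    *from_region = 0;
    return malloc(nbytes);
}
```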
- Fix compiler warning from Clang. Per build farm. Discussion:
- Improve programmer docs for simplehash and dynahash. When reading the code
it's not obvious when one should prefer dynahash over simplehash and
vice-versa, so, for programmer-friendliness, add comments to inform that
decision. Show sample simplehash method signatures. Author: James Coleman
- Use pg_pread() and pg_pwrite() in slru.c. This avoids lseek() system calls at
every SLRU I/O, as was done for relation files in commit c24dcd0c.
Reviewed-by: Ashwin Agrawal <aagrawal(at)pivotal(dot)io> Reviewed-by: Andres Freund
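pread() and pwrite() take the file offset as an argument, replacing the lseek()+read()/write() pair and halving the system calls per SLRU page I/O. A minimal POSIX illustration on a scratch file (the path and helper are invented for the demo):

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write len bytes at offset, then read them back from the same offset,
 * with no lseek() anywhere.  Returns 0 on success, -1 on error. */
static int
write_then_read_at(const char *path, const char *data, size_t len,
                   off_t offset, char *out)
{
    int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);

    if (fd < 0)
        return -1;
    if (pwrite(fd, data, len, offset) != (ssize_t) len ||
        pread(fd, out, len, offset) != (ssize_t) len)
    {
        close(fd);
        return -1;
    }
    return close(fd);
}
```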
Fujii Masao pushed:
- Remove non-fast promotion. When fast promotion was introduced in 9.3, non-fast
promotion became an undocumented feature, basically unavailable to ordinary
users. However, we decided not to remove non-fast promotion at that
moment, leaving it around for a release or two for debugging purposes or as an
emergency method because fast promotion might have some issues, and then to
remove it later. Now, several versions have been released since that decision,
and there is no longer any reason to keep supporting non-fast promotion. Therefore
this commit removes non-fast promotion. Author: Fujii Masao Reviewed-by:
Hamid Akhtar, Kyotaro Horiguchi Discussion:
- pg_stat_statements: track number of rows processed by some utility commands.
This commit makes pg_stat_statements track the total number of rows retrieved
or affected by CREATE TABLE AS, SELECT INTO, CREATE MATERIALIZED VIEW and
FETCH commands. Suggested-by: Pascal Legrand Author: Fujii Masao Reviewed-by:
Asif Rehman Discussion: https://firstname.lastname@example.org
Tatsuo Ishii pushed:
- Doc: fix high availability solutions comparison. In the "High Availability, Load
Balancing, and Replication" chapter, certain descriptions of Pgpool-II were
no longer correct: it does not need conflict resolution, and "Multiple-Server
Parallel Query Execution" is no longer supported.
Author: Tatsuo Ishii Reviewed-by: Bruce Momjian Backpatch-through: 9.5
Tom Lane pushed:
- Fix recently-introduced performance problem in ts_headline(). The new
hlCover() algorithm that I introduced in commit c9b0c678d turns out to
potentially take O(N^2) or worse time on long documents, if there are many
occurrences of individual query words but few or no substrings that actually
satisfy the query. (One way to hit this behavior is with a "common_word &
rare_word" type of query.) This seems unavoidable given the original goal of
checking every substring of the document, so we have to back off that idea.
Fortunately, it seems unlikely that anyone would really want headlines
spanning all of a long document, so we can avoid the worse-than-linear
behavior by imposing a maximum length of substring that we'll consider. For
now, just hard-wire that maximum length as a multiple of max_words times
max_fragments. Perhaps at some point somebody will argue for exposing it as a
ts_headline parameter, but I'm hesitant to make such a feature addition in a
back-patched bug fix. I also noted that the hlFirstIndex() function I'd added
in that commit was unnecessarily stupid: it really only needs to check whether
a HeadlineWordEntry's item pointer is null or not. This wouldn't make all
that much difference in typical cases with queries having just a few terms,
but a cycle shaved is a cycle earned. In addition, add a CHECK_FOR_INTERRUPTS
call in TS_execute_recurse. This ensures that hlCover's loop is cancellable if
it manages to take a long time, and it may protect some other TS_execute
callers as well. Back-patch to 9.6 as the previous commit was. I also chose
to add the CHECK_FOR_INTERRUPTS call to 9.5. The old hlCover() algorithm
seems to avoid the O(N^2) behavior, at least on the test case I tried, but
nonetheless it's not very quick on a long document. Per report from Stephen
- Fix oversight in ALTER TYPE: typmodin/typmodout must propagate to arrays. If a
base type supports typmods, its array type does too, with the same
interpretation. Hence changes in pg_type.typmodin/typmodout must be
propagated to the array type. While here, improve AlterTypeRecurse to not
recurse to domains if there is nothing we'd need to change. Oversight in
fe30e7ebf. Back-patch to v13 where that came in.
- Invent "amadjustmembers" AM method for validating opclass members. This allows
AM-specific knowledge to be applied during creation of pg_amop and pg_amproc
entries. Specifically, the AM knows better than core code which entries to
consider as required or optional. Giving the latter entries the appropriate
sort of dependency allows them to be dropped without taking out the whole
opclass or opfamily; which is something we'd like to have to correct
obsolescent entries in extensions. This callback also opens the door to
performing AM-specific validity checks during opclass creation, rather than
hoping that an opclass developer will remember to test with "amvalidate". For
the most part I've not actually added any such checks yet; that can happen in
a follow-on patch. (Note that we shouldn't remove any tests from
"amvalidate", as those are still needed to cross-check manually constructed
entries in the initdb data. So adding tests to "amadjustmembers" will be
somewhat duplicative, but it seems like a good idea anyway.) Patch by me,
reviewed by Alexander Korotkov, Hamid Akhtar, and Anastasia Lubennikova.
Noah Misch pushed:
- Change XID and mxact limits to warn at 40M and stop at 3M. We have edge-case
bugs when assigning values in the last few dozen pages before the wrap limit.
We may introduce similar bugs in the future. At default BLCKSZ, this makes
such bugs unreachable outside of single-user mode. Also, when VACUUM began to
consume mxacts, multiStopLimit did not change to compensate. pg_upgrade may
fail on a cluster that was already printing "must be vacuumed" warnings.
Follow the warning's instructions to clear the warning, then run pg_upgrade
again. One can still peacefully consume 98% of XIDs or mxacts, so DBAs need
not change routine VACUUM settings. Discussion:
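The limit arithmetic can be sketched as two margins before the wrap limit: start warning with 40 million assignments left, stop with 3 million left. The constants follow the commit's description, but the code below is illustrative and simplifies away the real modular XID comparisons.

```c
#include <assert.h>
#include <stdint.h>

#define XID_WARN_MARGIN 40000000U
#define XID_STOP_MARGIN 3000000U

typedef enum { XID_OK, XID_WARN, XID_STOP } XidState;

/* Simplified: assumes next_xid <= wrap_limit, ignoring wraparound
 * arithmetic that the real code must perform. */
static XidState
xid_state(uint32_t next_xid, uint32_t wrap_limit)
{
    uint32_t remaining = wrap_limit - next_xid;

    if (remaining < XID_STOP_MARGIN)
        return XID_STOP;        /* only single-user mode may proceed */
    if (remaining < XID_WARN_MARGIN)
        return XID_WARN;        /* "must be vacuumed" warnings */
    return XID_OK;
}
```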
== Pending Patches ==
Jim Nasby sent in two revisions of a patch to fix a performance issue with
autovacuum of large numbers of tables by killing the autovacuum worker when it's
stuck in a tight loop.
Pavel Stěhule sent in another revision of a patch to add a --filter option to
Euler Taveira de Oliveira sent in a patch to fix an issue where tables with
deferrable primary keys were not replicated in logical replication by removing
the check that prevented them from doing so.
Bertrand Drouvot sent in a patch to make pg_stat_activity display the query
actually executing when several were sent separated by semicolons.
Mark Dilger sent in another revision of a patch to add a heapcheck contrib module.
Dilip Kumar sent in another revision of a patch to implement parallel bitmap
Vigneshwaran C sent in another revision of a patch to fix a problem that
manifested as parallel workers hanging while handling errors by rearranging
signal handling to ensure that this can't happen.
Alexandra Pervushina sent in a patch to add \si, \sm, \st and \sr functions to
show CREATE commands for indexes, matviews, triggers and tables to psql.
Justin Pryzby sent in two revisions of a patch to add tab completion for VACUUM
of partitioned tables to psql.
Dagfinn Ilmari Mannsåker sent in a patch to mention that pg_inherits can also
contain index references.
Mahendra Singh and Justin Pryzby traded patches to add offset with block number
in vacuum errcontext.
Dmitry Dolgov sent in another revision of a patch to implement generic type
Justin Pryzby sent in a patch to ensure that the leader explicitly cleans up
shared filesets in tuplesort.c.
Movead Li sent in another revision of a patch to implement CSN-based snapshots.
Chenyang Lu sent in a patch to fix an inconsistency between the English and
Japanese versions of an error message.
Amul Sul sent in another revision of a patch to implement ALTER SYSTEM READ
Pavel Stěhule sent in another revision of a patch to implement unescape_text().
Robert Haas sent in another revision of a patch to refactor pg_basebackup.c.
David Pirote sent in a patch to add logical decoding messages to pgoutput.
Thomas Munro sent in another revision of a patch to use WL_EXIT_ON_PM_DEATH in
FeBeWaitSet, introduce symbolic names for FeBeWaitSet positions, and use
FeBeWaitSet for walsender.c.
Etsuro Fujita sent in two more revisions of a patch to fix a bug with RETURNING
when UPDATE moves tuple.
Masahiko Sawada sent in another revision of a patch to implement an internal key
Dagfinn Ilmari Mannsåker sent in a patch to add section headers to index types
doc to make it easier to compare the properties of different index types at a
glance.
Pierre Ducroquet sent in a patch to remove useless DISTINCT clauses.
Atsushi Torikoshi sent in another revision of a patch to display generic and
custom plans in pg_stat_statements.
Ryo Matsumura sent in a patch to make "make installcheck" work with PGXS.
Ashutosh Sharma sent in another revision of a patch to add contrib/pg_surgery to
perform surgery on the damaged heap tables.
James Coleman sent in a patch to document concurrent indexes waiting on each
other and document vacuum on one table depending on concurrent index creation.
Atsushi Torikoshi sent in another revision of a patch to add a function exposing
memory usage of the local backend.
James Coleman sent in another revision of a patch to improve the standby
connection denied error message by sending a helpful error message so that the
user immediately knows that their server is configured to deny these
connections.
Andrew Dunstan sent in another revision of a patch to support libnss as a TLS
backend.
Thomas Munro sent in a WIP patch to cache smgrnblocks() in more cases.
Grigory Kryachko sent in a patch to use a stringInfo instead of a char for
replace_string in pg_regress, and add and use a heap allocated string version of
replace_string in pg_regress.
Justin Pryzby sent in two more revisions of a patch to remove a performance hack
for %*s format strings, which is no longer needed since it worked around
platform-specific snprintf implementations that are no longer used, and to
include the leader PID in the logfile.
Kyotaro HORIGUCHI sent in a patch to fix the behavior of pg_ctl with relative
paths.
David Rowley sent in a patch to make using simplehash more foolproof by guarding
against multiple inclusion.
Konstantin Knizhnik sent in a patch to fix the fact that CREATE TABLE ... LIKE
... INCLUDING ALL doesn't include creating a sequence.
Bharath Rupireddy sent in a patch to implement background worker shared memory
access for EXEC_BACKEND cases.
Vigneshwaran C sent in another revision of a patch to implement parallel COPY.
Peter Geoghegan sent in another revision of a patch to avoid a backwards scan
page deletion standby race.
Tom Lane sent in a patch to remove <@ from contrib/intarray's GiST opclasses.