== PostgreSQL Weekly News - March 3, 2019 ==

From: David Fetter <david(at)fetter(dot)org>
To: PostgreSQL Announce <pgsql-announce(at)postgresql(dot)org>
Subject: == PostgreSQL Weekly News - March 3, 2019 ==
Date: 2019-03-03 21:41:09
Message-ID: 20190303214109.GA3279@fetter.org
Lists: pgsql-announce

== PostgreSQL Jobs for March ==


== PostgreSQL Local ==

PostgreSQL(at)SCaLE is a two-day, two-track event which takes place on
March 7-8, 2019, at Pasadena Convention Center, as part of SCaLE 17X.

pgDay Paris 2019 will be held in Paris, France on March 12, 2019
at 199bis rue Saint-Martin.

Nordic PGDay 2019 will be held in Copenhagen, Denmark, at the
Copenhagen Marriott Hotel, on March 19, 2019.

PGConf APAC 2019 will be held in Singapore March 19-21, 2019.

The German-speaking PostgreSQL Conference 2019 will take place on May 10, 2019
in Leipzig.

PGDay.IT 2019 will take place May 16th and May 17th in Bologna, Italy.

PGCon 2019 will take place in Ottawa on May 28-31, 2019.

Swiss PGDay 2019 will take place in Rapperswil (near Zurich) on June 28, 2019.
The CfP is open through April 18, 2019, and registration is open.

PostgresLondon 2019 will be July 2-3, 2019 with an optional training day on
July 1. The CfP is open at https://goo.gl/forms/hsvZKAmq0c96XQ4l2 through March
15, 2019.

PGConf.Brazil 2019 is on August 1-3, 2019, in São Paulo.

== PostgreSQL in the News ==

Planet PostgreSQL: http://planet.postgresql.org/

PostgreSQL Weekly News is brought to you this week by David Fetter.

Submit news and announcements by Sunday at 3:00pm PST8PDT to david(at)fetter(dot)org(dot)

== Applied Patches ==

Thomas Munro pushed:

- Fix inconsistent out-of-memory error reporting in dsa.c. Commit 16be2fd1
introduced the flag DSA_ALLOC_NO_OOM to control whether the DSA allocator
would raise an error or return InvalidDsaPointer on failure to allocate. One
edge case was not handled correctly: if we fail to allocate an internal "span"
object for a large allocation, we would always return InvalidDsaPointer
regardless of the flag; a caller not expecting that could then dereference a
null pointer. This is a plausible explanation for a one-off report of a
segfault. Remove a redundant pair of braces so that all three stanzas that
handle DSA_ALLOC_NO_OOM match in style, for visual consistency. While fixing
inconsistencies, if FreePageManagerGet() can't supply the pages that our
book-keeping says it should be able to supply, then we should always report a
FATAL error. Previously we treated that as a regular allocation failure in
one code path, but as a FATAL condition in another. Back-patch to 10, where
dsa.c landed. Author: Thomas Munro Reported-by: Jakub Glapa Discussion:
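The contract being fixed can be sketched as follows (an illustrative Python model with hypothetical names, not the actual dsa.c code): with DSA_ALLOC_NO_OOM set, allocation failure must return the invalid pointer; without it, failure must always raise, never return the invalid pointer, as the buggy "span" path did.

```python
# Toy model of the DSA_ALLOC_NO_OOM contract. All names are hypothetical
# stand-ins for the C-level API described in the commit message.
INVALID_DSA_POINTER = 0
DSA_ALLOC_NO_OOM = 1

def dsa_allocate_extended(area, size, flags, simulate_span_failure=False):
    """Fails when the internal 'span' metadata allocation fails --
    the edge case the commit fixes."""
    if simulate_span_failure:
        if flags & DSA_ALLOC_NO_OOM:
            return INVALID_DSA_POINTER   # caller opted in to handle failure
        # Pre-fix code wrongly returned INVALID_DSA_POINTER here too,
        # letting unsuspecting callers dereference a null pointer.
        raise MemoryError("out of memory")
    return 0x1000  # pretend success: some valid dsa_pointer

# With the flag, the caller sees the invalid pointer and can handle it.
assert dsa_allocate_extended(None, 8192, DSA_ALLOC_NO_OOM,
                             simulate_span_failure=True) == INVALID_DSA_POINTER
```

Without the flag, the same failure raises instead of silently returning the invalid pointer, which is the consistency the commit restores.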

Michaël Paquier pushed:

- Make release of 2PC identifier and locks consistent in COMMIT PREPARED. When
preparing a transaction in two-phase commit, a dummy PGPROC entry holding the
GID used for the transaction is registered, which gets released once COMMIT
PREPARED is run. Prior to releasing its shared memory state, all the locks taken
in the prepared transaction are released using a dedicated set of callbacks
(pgstat and multixact having similar callbacks), which may cause the locks to
be released before the GID is set free. Hence, there is a small window where
lock conflicts could happen, for example: - Transaction A releases its locks,
still holding its GID in shared memory. - Transaction B held a lock which
conflicted with locks of transaction A. - Transaction B continues its
processing, reusing the same GID as transaction A. - Transaction B fails
because of a conflicting GID, already in use by transaction A. This commit
changes the shared memory state release so that post-commit callbacks and
predicate lock cleanup happen consistently with the shared memory state
cleanup for the dummy PGPROC entry. The race window is small and 2PC has had
this issue from the start, so no backpatch is done. On top of that, the fixes
discussed involved ABI breakages, which are not welcome in stable branches.
Reported-by: Oleksii Kliukin, Ildar Musin Diagnosed-by: Oleksii Kliukin, Ildar
Musin Author: Michael Paquier Reviewed-by: Masahiko Sawada, Oleksii Kliukin
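The race window can be illustrated with a toy model (illustrative Python, not the twophase.c code): if COMMIT PREPARED releases locks before freeing the GID, another session can acquire the lock and reuse the GID while it is still registered, and fail.

```python
# Toy model of the pre-fix release order. Names are hypothetical.
class TwoPhaseState:
    def __init__(self):
        self.gids = set()
        self.locks = set()

    def prepare(self, gid, lock):
        if gid in self.gids:
            raise ValueError('transaction identifier "%s" is already in use' % gid)
        if lock in self.locks:
            raise ValueError("lock conflict")
        self.gids.add(gid)
        self.locks.add(lock)

def commit_prepared_buggy(state, gid, lock, interleave=None):
    state.locks.discard(lock)          # pre-fix order: locks released first...
    if interleave:
        interleave()                   # ...the race window: another session runs
    state.gids.discard(gid)            # ...GID freed only afterwards

state = TwoPhaseState()
state.prepare("tx1", "lock_a")
errors = []
def session_b():
    try:
        # B grabs the now-free lock and reuses the same GID -- and fails,
        # because A's GID is still registered.
        state.prepare("tx1", "lock_a")
    except ValueError as e:
        errors.append(str(e))

commit_prepared_buggy(state, "tx1", "lock_a", interleave=session_b)
assert errors and "already in use" in errors[0]
```

The fix amounts to making the lock cleanup and the GID release happen together, so no such window exists.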

- Simplify some code in pg_rewind when syncing target directory. 9a4059d
simplified the flush of target data folder when finishing processing, and
could have done a bit more. Discussion:

- Fix memory leak when inserting tuple at relation creation for CTAS. The leak
was introduced by 763f2ed, which addressed the problem for transient
tables but forgot CREATE TABLE AS, which shares similar logic when receiving
a new tuple to store into the newly-created relation. Author: Jeff Janes

- Fix SCRAM authentication via SSL when mixing versions of OpenSSL. When using a
libpq client linked with OpenSSL 1.0.1 or older to connect to a backend linked
with OpenSSL 1.0.2 or newer, the server would send SCRAM-SHA-256-PLUS and
SCRAM-SHA-256 as valid mechanisms for the SASL exchange, and the client would
choose SCRAM-SHA-256-PLUS even if it does not support channel binding, leading
to a confusing error. In this case, what the client ought to do is switch to
SCRAM-SHA-256 so that the authentication can move on and succeed. So for a
SCRAM authentication over SSL, here are all the cases present and how we deal
with them using libpq: 1) Server supports channel binding, it sends
SCRAM-SHA-256-PLUS and SCRAM-SHA-256 as allowed mechanisms. 1-1) Client
supports channel binding, chooses SCRAM-SHA-256-PLUS. 1-2) Client does not
support channel binding, chooses SCRAM-SHA-256. 2) Server does not support
channel binding, sends SCRAM-SHA-256 as allowed mechanism. 2-1) Client
supports channel binding, still it has no choice but to choose SCRAM-SHA-256.
2-2) Client does not support channel binding, it chooses SCRAM-SHA-256. In all
these scenarios the connection should succeed, and the one which was handled
incorrectly prior to this commit is 1-2), causing the connection attempt to fail
because client chose SCRAM-SHA-256-PLUS over SCRAM-SHA-256. Reported-by: Hugh
Ranalli Diagnosed-by: Peter Eisentraut Author: Michael Paquier Reviewed-by:
Peter Eisentraut Discussion:
Backpatch-through: 11
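The corrected client-side choice can be sketched like so (an illustrative Python function with hypothetical names, not libpq's actual code), covering the four enumerated cases:

```python
# Sketch of the mechanism choice for SCRAM over SSL. Prefer the
# channel-binding variant only when the client actually supports it.
def choose_sasl_mechanism(server_mechanisms, client_supports_channel_binding):
    if ("SCRAM-SHA-256-PLUS" in server_mechanisms
            and client_supports_channel_binding):
        return "SCRAM-SHA-256-PLUS"          # case 1-1
    if "SCRAM-SHA-256" in server_mechanisms:
        return "SCRAM-SHA-256"               # cases 1-2, 2-1, 2-2
    return None

both = ["SCRAM-SHA-256-PLUS", "SCRAM-SHA-256"]
plain = ["SCRAM-SHA-256"]
assert choose_sasl_mechanism(both, True) == "SCRAM-SHA-256-PLUS"   # 1-1
assert choose_sasl_mechanism(both, False) == "SCRAM-SHA-256"       # 1-2 (the bug)
assert choose_sasl_mechanism(plain, True) == "SCRAM-SHA-256"       # 2-1
assert choose_sasl_mechanism(plain, False) == "SCRAM-SHA-256"      # 2-2
```

Case 1-2 is the one the buggy code got wrong, picking SCRAM-SHA-256-PLUS it could not complete.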

- Improve documentation of data_sync_retry. Reflecting an updated parameter
value requires a server restart, which was not mentioned in the documentation
or in postgresql.conf.sample. Reported-by: Thomas Poty Discussion:

- Make pg_partition_tree return no rows on unsupported and undefined objects.
The function was tweaked so that it returned one row full of NULLs when working
on an unsupported relkind or an undefined object as of cc53123, and after
discussion with Amit and Álvaro it looks more natural to make it return no
rows. Author: Michael Paquier Reviewed-by: Álvaro Herrera, Amit Langote
Discussion: https://postgr.es/m/20190227184808.GA17357@alvherre.pgsql

- Consider only relations part of partition trees in partition functions. This
changes the partition functions so that tables and indexes which are not part of
partition trees are handled the same way as what is done for undefined objects
and unsupported relkinds: pg_partition_tree() returns no rows and
pg_partition_root() returns a NULL result. Hence, partitioned tables,
partitioned indexes and relations whose flag pg_class.relispartition is set
are considered as valid objects to process. Previously, tables and indexes
not included in a partition tree were processed the same way as a partition or
a partitioned table, which caused the functions to return inconsistent results
for inherited tables, especially when inheriting from multiple tables.
Reported-by: Álvaro Herrera Author: Amit Langote, Michael Paquier Reviewed-by:
Tom Lane Discussion: https://postgr.es/m/20190228193203.GA26151@alvherre.pgsql

Peter Eisentraut pushed:

- Remove unnecessary use of PROCEDURAL. Remove some unnecessary, legacy-looking
use of the PROCEDURAL keyword before LANGUAGE. We mostly don't use this
anymore, so some of these look a bit old. There is still some use in pg_dump,
which is harder to remove because it's baked into the archive format, so I'm
not touching that. Discussion:

- psql: Remove obsolete code. The check in create_help.pl for a null end tag
(</>) has been obsolete since the conversion from SGML to XML, since XML does
not allow that anymore.

- Set cluster_name for PostgresNode.pm instances. This can help identify test
instances more easily at run time, and it also provides some minimal test
coverage for the cluster_name feature. Reviewed-by: Euler Taveira
<euler(at)timbira(dot)com(dot)br> Discussion:

- Set fallback_application_name for a walreceiver to cluster_name. By default,
the fallback_application_name for a physical walreceiver is "walreceiver".
This means that multiple standbys cannot be distinguished easily on a primary,
for example in pg_stat_activity or synchronous_standby_names. If cluster_name
is set, use that for fallback_application_name in the walreceiver. (If it's
not set, it remains "walreceiver".) If someone set cluster_name to identify
their instance, we might as well use that by default to identify the node
remotely as well. It's still possible to specify another application_name in
primary_conninfo explicitly. Reviewed-by: Euler Taveira
<euler(at)timbira(dot)com(dot)br> Discussion:

- Remove unused macro. It has never been used as long as hstore has been in the

- Update comment. for ff11e7f4b9ae017585c3ba146db7ba39c31f209a

- Remove unnecessary unused MATCH PARTIAL code. ri_triggers.c spends a lot of
space catering to a not-yet-implemented MATCH PARTIAL option. An actual
implementation would probably not use the existing code structure anyway, so
let's just simplify this for now. First, have ri_FetchConstraintInfo() check
that riinfo->confmatchtype is valid. Then we don't have to repeat that
everywhere. In the various referential action functions, we don't need to pay
attention to the match type at all right now, so remove all that code. A
future MATCH PARTIAL implementation would probably have some conditions added
to the present code, but it won't need an entirely separate switch branch in
each case. In RI_FKey_fk_upd_check_required(), reorganize the code to make it
much simpler. Reviewed-by: Corey Huinker <corey(dot)huinker(at)gmail(dot)com>

- Compact for loops. Declare loop variable in for loop, for readability and to
save space. Reviewed-by: Corey Huinker <corey(dot)huinker(at)gmail(dot)com> Discussion:

- Reduce comments. Reduce the vertical space used by comments in ri_triggers.c,
which were making the file longer and more tedious to read than it needed to be. Update
some comments to use a more common style. Reviewed-by: Corey Huinker
<corey(dot)huinker(at)gmail(dot)com> Discussion:

- Clean up some variable names in ri_triggers.c. There was a mix of
old_slot/oldslot, new_slot/newslot. Since we've changed everything from row
to slot, we might as well take this opportunity to clean this up. Also update
some more comments for the slot change.

- Merge near-duplicate code in RI triggers. Merge ri_setnull and ri_setdefault
into one function ri_set. These functions were to a large part identical.
This is a continuation in spirit of 4797f9b519995ceca5d6b8550b5caa2ff6d19347.
Author: Corey Huinker <corey(dot)huinker(at)gmail(dot)com> Discussion:

- Fix whitespace.

Peter Geoghegan pushed:

- Correct obsolete nbtree page deletion comment. Commit efada2b8e92, which made
the nbtree page deletion algorithm more robust, removed _bt_getstackbuf()
calls from _bt_pagedel(). It failed to update a comment that referenced the
earlier approach. Update the comment to explain that the _bt_getstackbuf()
page deletion call site mirrors the only other remaining _bt_getstackbuf()
call site, which is reached during page splits.

- Remove unneeded argument from _bt_getstackbuf(). _bt_getstackbuf() is called
at exactly two points following commit efada2b8e92 (one call site is concerned
with page splits, while the other is concerned with page deletion). The
parent buffer returned by _bt_getstackbuf() is write-locked in both cases.
Remove the 'access' argument and make _bt_getstackbuf() assume that callers
require a write-lock.

Michael Meskes pushed:

- Hopefully fixing memory handling issues in ecpglib that Coverity found.

- Free memory in ecpg bytea regression test. While not really a problem, it's
easier to run tools like valgrind against it when fixed.

Robert Haas pushed:

- Change lock acquisition order in expand_inherited_rtentry. Previously, this
function acquired locks in the order established by find_all_inheritors(), which locks
the children of each table that it processes in ascending OID order, and which
processes the inheritance hierarchy as a whole in a breadth-first fashion.
Now, it processes the inheritance hierarchy in a depth-first fashion, and at
each level it proceeds in the order in which tables appear in the
PartitionDesc. If table inheritance rather than table partitioning is used,
the old order is preserved. This change moves the locking of any given
partition much closer to the code that actually expands that partition. This
seems essential if we ever want to allow concurrent DDL to add or remove
partitions, because if the set of partitions can change, we must use the same
data to decide which partitions to lock as we do to decide which partitions to
expand; otherwise, we might expand a partition that we haven't locked. It
should hopefully also facilitate efforts to postpone inheritance expansion or
locking for performance reasons, because there's really no way to postpone
locking some partitions if we're blindly locking them all using
find_all_inheritors(). The only downside of this change which is known to me
is that it further deviates from the principle that we should always lock the
inheritance hierarchy in find_all_inheritors() order to avoid deadlock risk.
However, we've already crossed that bridge in commit
9eefba181f7782d27d85d7e94e6028371e7ab2d7 and there are further patches pending
that make similar changes, so this isn't really giving up anything that we
haven't surrendered already -- and it seems entirely worth it, given the
performance benefits some of those changes seem likely to bring. Patch by me;
thanks to David Rowley for discussion of these issues. Discussion:
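The ordering change can be illustrated with a toy partition hierarchy (illustrative Python, not the planner code): the old approach locked children breadth-first across the whole hierarchy, while the new one descends depth-first, visiting each partition's children right away.

```python
# Toy partition tree: parent -> list of children (hypothetical names).
tree = {"root": ["p1", "p2"], "p1": ["p1a", "p1b"], "p2": ["p2a"],
        "p1a": [], "p1b": [], "p2a": []}

def breadth_first(rel):
    """Old order: whole hierarchy level by level (find_all_inheritors style)."""
    order, queue = [], [rel]
    while queue:
        r = queue.pop(0)
        order.append(r)
        queue.extend(tree[r])
    return order

def depth_first(rel):
    """New order: each partition's subtree expanded before its siblings."""
    order = [rel]
    for child in tree[rel]:
        order.extend(depth_first(child))
    return order

assert breadth_first("root") == ["root", "p1", "p2", "p1a", "p1b", "p2a"]
assert depth_first("root")  == ["root", "p1", "p1a", "p1b", "p2", "p2a"]
```

Both orders visit the same set of relations; what changes is how close the locking of a partition is to the code that expands it.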

Andres Freund pushed:

- Add ExecStorePinnedBufferHeapTuple. This makes it possible to avoid an unnecessary
pin/unpin cycle when storing a tuple in an already pinned buffer into a slot,
when the pin isn't further needed at the call site. Only a single caller for
now (to ensure coverage), but upcoming patches will increase use of the new
function. Author: Andres Freund Discussion:

- Allow to use HeapTupleData embedded in [Buffer]HeapTupleTableSlot. That avoids
having to care about the lifetime of the HeapTupleHeaderData passed to
ExecStore[Buffer]HeapTuple(). That doesn't make a huge difference for a plain
HeapTupleTableSlot, but for BufferHeapTupleTableSlot it can be a significant
advantage, avoiding the need to materialize slots where it's inconvenient to
provide a HeapTupleData with appropriate lifetime to point to the on-disk
tuple. It's quite possible that we'll want to add support functions for
constructing HeapTuples using that embedded HeapTupleData, but for now callers
do so themselves. Author: Andres Freund Discussion:

- Store table oid and tuple's tid in tuple slots directly. After the
introduction of tuple table slots all table AMs need to support returning the
table oid of the tuple stored in a slot created by said AM. It does not make
sense to re-implement that in every AM, therefore move handling of table OIDs
into the TupleTableSlot structure itself. It's possible that we, at a later
date, might want to get rid of HeapTupleData.t_tableOid entirely, but doing so
before the abstractions for table AMs are integrated turns out to be too hard,
so delay that for now. Similarly, every AM needs to support the concept of a
tuple identifier (tid / item pointer) for its tuples. It's quite possible that
we'll generalize the exact form of a tid at a future point (to allow for
things like index organized tables), but for now many parts of the code know
about tids, so there's not much point in abstracting tids away. Therefore also
move the tid into the slot (rather than providing an API to set/get the tid
associated with the tuple in a slot). Once the table AM handles
inserting/updating/deleting tuples, the responsibility to set the correct tid
after such an action will move into it. After that change, code doing such
modifications should not have to
deal with HeapTuples directly anymore. Author: Andres Freund, Haribabu Kommi
and Ashutosh Bapat Discussion:

- Use slots in trigger infrastructure, except for the actual invocation. In
preparation for abstracting table storage, convert trigger.c to track tuples
in slots. Which also happens to make code calling triggers simpler. As the
calling interface for triggers themselves is not changed in this patch,
HeapTuples still are extracted from the slot at that time. But that's handled
solely inside trigger.c, not visible to callers. It's quite likely that we'll
want to revise the external trigger interface, but that's a separate large
project. As part of this work the slots used for old/new/return tuples are
moved from EState into ResultRelInfo, as different updated tables might need
different slots. The slots are also now created on-demand, which is good
both from an efficiency POV and because it makes the modifying code simpler.
Author: Andres Freund, Amit Khandekar and Ashutosh Bapat Discussion:

- Initialize variable to silence compiler warning. After ff11e7f4b9ae Tom's
compiler warns about accessing a potentially uninitialized rInfo. That's not
actually possible, but it's understandable the compiler would get this wrong.
NULL initialize too. Reported-By: Tom Lane Discussion:

- Allow buffer tuple table slots to materialize after ExecStoreVirtualTuple().
While not common, it can be useful to store a virtual tuple into a buffer
tuple table slot, and then materialize that slot. So far we've asserted out,
which surprisingly wasn't a problem for anything in core. But that seems
fragile, and it also breaks redis_fdw after ff11e7f4b9. Thus, allow
materializing a virtual tuple stored in a buffer tuple table slot. Author:
Andres Freund Discussion:

- Don't superfluously materialize slot after DELETE from an FDW. Previously that
was needed to safely store the table oid, but after b8d71745eac0a127 that's
not necessary anymore. Author: Andres Freund

- Don't force materializing when copying a buffer tuple table slot. After
5408e233f0667478 it's not necessary to force materializing the target slot
when copying from one buffer slot to another. Previously that was required
because the HeapTupleData portion of the source slot wasn't guaranteed to stay
valid long enough, but now we can simply copy that part into the destination
slot's tupdata. Author: Andres Freund

- Store tuples for EvalPlanQual in slots, rather than as HeapTuples. For the
upcoming pluggable table access methods it's quite inconvenient to store
tuples as HeapTuples, as that'd require converting tuples from their native
format into HeapTuples. Instead use slots to manage epq tuples. To fit into
that scheme, change the foreign data wrapper callback RefetchForeignRow, to
store the tuple in a slot. Insist on using the caller provided slot, so it
conveniently can be stored in the corresponding EPQ slot. As there is no in
core user of RefetchForeignRow, that change was done blindly, but we plan to
test that soon. To avoid duplicating that work for row locks, move row locks
to just directly use the EPQ slots - it previously temporarily stored tuples
in LockRowsState.lr_curtuples, but that doesn't seem beneficial, given we'd
possibly end up with a significant number of additional slots. The behaviour
of es_epqTupleSet[rti -1] is now checked by es_epqTupleSlot[rti -1] != NULL,
as that is distinguishable from a slot containing an empty tuple. Author:
Andres Freund, Haribabu Kommi, Ashutosh Bapat Discussion:

- Use a virtual rather than a heap slot in two places where that suffices.
Author: Andres Freund Discussion:

Tom Lane pushed:

- Standardize some more loops that chase down parallel lists. We have forboth()
and forthree() macros that simplify iterating through several parallel lists,
but not everyplace that could reasonably use those was doing so. Also invent
forfour() and forfive() macros to do the same for four or five parallel lists,
and use those where applicable. The immediate motivation for doing this is to
reduce the number of ad-hoc lnext() calls, to reduce the footprint of a WIP
patch. However, it seems like good cleanup and error-proofing anyway; the
places that were combining forthree() with a manually iterated loop seem
particularly illegible and bug-prone. There was some speculation about
restructuring related parsetree representations to reduce the need for
parallel list chasing of this sort. Perhaps that's a win, or perhaps not, but
in any case it would be considerably more invasive than this patch; and it's
not particularly related to my immediate goal of improving the List
infrastructure. So I'll leave that question for another day. Patch by me;
thanks to David Rowley for review. Discussion:
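The lockstep-iteration pattern these macros capture has a direct analogue in Python's zip(), which may make the intent clearer (illustrative only; the C macros additionally rely on the lists staying the same length):

```python
# Four parallel lists walked in lockstep, as forfour() does for C Lists.
names  = ["a", "b", "c"]
types  = [int, str, float]
values = [1, "x", 2.5]
flags  = [True, False, True]

rows = [(n, t, v, f) for n, t, v, f in zip(names, types, values, flags)]
assert rows[0] == ("a", int, 1, True)
```

Replacing manually interleaved lnext() calls with one such construct is what makes the loops harder to get wrong.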

- Teach optimizer's predtest.c more things about ScalarArrayOpExpr. In
particular, make it possible to prove/refute "x IS NULL" and "x IS NOT NULL"
predicates from a clause involving a ScalarArrayOpExpr even when we are unable
or unwilling to deconstruct the expression into an AND/OR tree. This avoids a
former unexpected degradation of plan quality when the size of an ARRAY[]
expression or array constant exceeded the arbitrary MAX_SAOP_ARRAY_SIZE limit.
For IS-NULL proofs, we don't really care about the values of the individual
array elements; at most, we care whether there are any, and for some common
cases we needn't even know that. The main user-visible effect of this is to
let the optimizer recognize applicability of partial indexes with "x IS NOT
NULL" predicates to queries with "x IN (array)" clauses in some cases where it
previously failed to recognize that. The structure of predtest.c is such that
a bunch of related proofs will now also succeed, but they're probably much
less useful in the wild. James Coleman, reviewed by David Rowley Discussion:
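The reason "x IN (array)" implies "x IS NOT NULL" can be seen by modeling SQL's three-valued IN with a strict equality operator (an illustrative Python sketch, not predtest.c): when x is NULL the clause can never evaluate to TRUE, so every row it accepts satisfies the partial-index predicate.

```python
def sql_in(x, arr):
    """Three-valued IN over a strict '=': returns True, False,
    or None (modeling SQL NULL)."""
    if x is None:
        return None if arr else False   # strict '=': NULL vs anything is NULL
    results = [None if e is None else x == e for e in arr]
    if True in results:
        return True
    return None if None in results else False

assert sql_in(2, [1, 2, 3]) is True
assert sql_in(None, [1, 2, 3]) is None   # never TRUE when x IS NULL
assert sql_in(4, [1, None]) is None
assert sql_in(4, [1, 2]) is False
```

Crucially, this argument needs nothing about the individual array elements, which is why the proof works even past MAX_SAOP_ARRAY_SIZE.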

- Check we don't misoptimize a NOT IN where the subquery returns no rows.
Future-proofing against a common mistake in attempts to optimize NOT IN. We
don't have such an optimization right now, but attempts to do so are in the
works, and some of 'em are buggy. Add a regression test case covering the
point. David Rowley Discussion:
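The semantics the new regression test guards can be sketched in three-valued logic (illustrative Python, not PostgreSQL code): "x NOT IN (subquery)" over an empty subquery is TRUE for every row, including rows where x is NULL, so a naive antijoin rewrite that drops NULL rows would be wrong.

```python
def sql_not_in(x, rows):
    """Three-valued NOT IN: True, False, or None (modeling SQL NULL)."""
    if not rows:
        return True                  # empty subquery: vacuously true, even for NULL x
    if x is None:
        return None                  # NULL comparison: row is filtered out
    if x in rows:
        return False
    return None if None in rows else True

assert sql_not_in(None, []) is True    # the easy case for an optimizer to break
assert sql_not_in(1, []) is True
assert sql_not_in(1, [2, 3]) is True
assert sql_not_in(1, [1, 2]) is False
assert sql_not_in(1, [2, None]) is None
```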

Álvaro Herrera pushed:

- pg_dump: Fix ArchiveEntry handling of some empty values. Commit f831d4acc
changed what pg_dump emits for some empty fields: they were output as empty
strings before, a NULL pointer afterwards. That made older versions of pg_restore
crash on such files, which is unacceptable. Return to the original
representation by explicitly setting those struct members to "" where needed;
remove some no longer needed checks for NULL input. We can declutter the code
a little by returning to NULLs when we next update the archive version, so add
a note to remind us later. Discussion:
https://postgr.es/m/20190225074539.az6j3u464cvsoxh6@depesz.com Reported-by:
hubert depesz lubaczewski Author: Dmitry Dolgov

- Improve docs for ALTER TABLE .. SET TABLESPACE. Discussion:
https://postgr.es/m/20190220173815.GA7959@alvherre.pgsql Reviewed-by: Robert

Joe Conway pushed:

- Make get_controlfile not leak file descriptors. When backend functions were
added to expose controldata via SQL, reading of pg_control was consolidated
under src/common so that both frontend and backend could share the same code.
That move from frontend-only to shared frontend-backend failed to recognize
the risk (and coding standards violation) of using a bare open(). In
particular, it risked leaking file descriptors if transient errors occurred
while reading the file. Fix that by using OpenTransientFile() instead in the
backend case, which is purpose-built for this type of usage. Since there have
been no complaints from the field, and intermittent failure is low risk, no
backpatch is done. Hard failure would of course be bad, but in that case these
functions are probably the least of your worries. Author: Joe Conway
Reviewed-By: Michael Paquier Reported by: Michael Paquier Discussion:

Amit Kapila pushed:

- Clear the local map when not used. After commit b0eaa4c51b, we use a local map
of pages to find the required space for small relations. We do clear this map
when we have found a block with enough free space, when we extend the
relation, or on transaction abort so that it can be used next time. However,
we missed clearing it when we didn't find any pages to try from the map, which
led to an assertion failure when we later tried to use it after relation
extension. In passing, I have improved some comments in this area.
Reported-by: Tom Lane based on buildfarm results Author: Amit Kapila
Reviewed-by: John Naylor Tested-by: Kuntal Ghosh Discussion:
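The rule the commit enforces can be modeled in a few lines (illustrative Python, not freespace.c): the local map of candidate pages must be cleared on every exit path, including when none of the mapped pages had enough space, so a stale map cannot be consulted after the relation is extended.

```python
# Toy model of the local free-space map for small relations.
class LocalFSMMap:
    def __init__(self):
        self.pages = {}              # block number -> cached free bytes

    def find_free_block(self, needed):
        try:
            for blk, free in self.pages.items():
                if free >= needed:
                    return blk
            return None              # not found: pre-fix, the map survived here
        finally:
            self.pages.clear()       # fixed: clear on every exit path

m = LocalFSMMap()
m.pages = {0: 16, 1: 32}
assert m.find_free_block(64) is None
assert m.pages == {}                 # cleared even though nothing was found
```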

Andrew Dunstan pushed:

- Add --exclude-database option to pg_dumpall. This option functions similarly
to pg_dump's --exclude-table option, but for database names. The option can be
given once, and the argument can be a pattern including wildcard characters.
Author: Andrew Dunstan. Reviewed-by: Fabien Coelho and Michael Paquier

- Add extra descriptive headings in pg_dumpall. Headings are added for the User
Configurations and Databases sections, and for each user configuration and
database in the output. Author: Fabien Coelho Discussion:

- Remove tests for pg_dumpall --exclude-database missing argument. It turns out
that different getopt implementations spell the error for missing arguments
in different ways. This test is of fairly marginal value, so instead of trying to
keep up with the different error messages just remove the test.

- Avoid accidental wildcard expansion in msys shell. Commit f092de05 added a
test for pg_dumpall --exclude-database including the wildcard pattern '*dump*'
which matches some files in the source directory. The test library on msys
uses the shell which expands this and thus the program gets incorrect
arguments. This doesn't happen if the pattern doesn't match any files, so here
the pattern is set to '*dump_test*' which is such a pattern. Per buildfarm
animal jacana.

Dean Rasheed pushed:

- Further fixing for multi-row VALUES lists for updatable views. Previously,
rewriteTargetListIU() generated a list of attribute numbers from the
targetlist, which were passed to rewriteValuesRTE(), which expected them to
contain the same number of entries as there are columns in the VALUES RTE, and
to be in the same order. That was fine when the target relation was a table,
but for an updatable view it could be broken in at least three different ways
--- rewriteTargetListIU() could insert additional targetlist entries for view
columns with defaults, the view columns could be in a different order from the
columns of the underlying base relation, and targetlist entries could be
merged together when assigning to elements of an array or composite type. As a
result, when recursing to the base relation, the list of attribute numbers
generated from the rewritten targetlist could no longer be relied upon to
match the columns of the VALUES RTE. We got away with that prior to 41531e42d3
because it used to always be the case that rewriteValuesRTE() did nothing for
the underlying base relation, since all DEFAULTS had already been replaced
when it was initially invoked for the view, but that was incorrect because it
failed to apply defaults from the base relation. Fix this by examining the
targetlist entries more carefully and picking out just those that are simple
Vars referencing the VALUES RTE. That's sufficient for the purposes of
rewriteValuesRTE(), which is only responsible for dealing with DEFAULT items
in the VALUES RTE. Any DEFAULT item in the VALUES RTE that doesn't have a
matching simple-Var-assignment in the targetlist is an error which we complain
about, but in theory that ought to be impossible. Additionally, move this
code into rewriteValuesRTE() to give a clearer separation of concerns between
the 2 functions. There is no need for rewriteTargetListIU() to know about the
details of the VALUES RTE. While at it, fix the comment for
rewriteValuesRTE() which claimed that it doesn't support array element and
field assignments --- that hasn't been true since a3c7a993d5 (9.6 and later).
Back-patch to all supported versions, with minor differences for the pre-9.6
branches, which don't support array element and field assignments to the same
column in multi-row VALUES lists. Reviewed by Amit Langote. Discussion:

== Pending Patches ==

Masahiko Sawada sent in another revision of a patch to add a function to copy
replication slots.

Peter Eisentraut sent in a patch to remove volatile from latch API.

Etsuro Fujita sent in another revision of a patch to fix another oddity in
costing aggregate pushdown paths.

Kyotaro HORIGUCHI sent in another revision of a patch to move the temporary
storage used by the stats collector from files to shared memory.

Mike Palmiotto sent in a patch to change StartChildProcess to take a struct with
data for forking/execing each different process.

David Rowley sent in a patch to increase the default vacuum_cost_limit from 200
to 2000.

Peter Eisentraut sent in another revision of a patch to implement GENERATED
columns in two flavors: virtual (generated at query time) and stored.

Peter Eisentraut sent in two more revisions of a patch to psql to add a
documentation URL to the \help output.

Mike Palmiotto sent in two revisions of a patch to add a "partition pruning"

Takeshi Ideriha sent in a patch to protect syscache from bloating with negative
cache entries.

Konstantin Knizhnik and Michaël Paquier traded patches to fix the readdir
implementation on Windows.

Takayuki Tsunakawa sent in two more revisions of a patch to disable vacuum

Nagaura Ryohei sent in five more revisions of a patch to add TCP timeout

Haribabu Kommi sent in a patch to move the current_logfiles file into

Andrew Gierth sent in a patch to ensure that JIT works on FreeBSD/ARMv7.

Aleksey Kondratov sent in a patch to remove some redundant tests from

Noah Misch sent in a patch to make it possible to change a column's type from
timestamp to timestamptz without a full-table rewrite.

Michael Banck sent in another revision of a patch to make it possible to enable
page checksums offline.

Thomas Munro sent in a patch to report bgworker launch failure during smart

Álvaro Herrera sent in two more revisions of a patch to create a function
pg_partition_ancestors() and use same to display foreign key relationships in
psql for partitioned tables.

Simon Riggs sent in two revisions of a patch to replace the hard-wired value of
10 in pgbench with a MAX_ARGS parameter.

Robert Haas sent in two more revisions of a patch to implement ATTACH/DETACH

Amit Kapila sent in three revisions of a patch to ensure that when a block is
not obtained from the local map, it is cleared.

Etsuro Fujita sent in a patch to fix an oddity with parallel safety test for
scan/join target in grouping_planner.

Peter Geoghegan sent in another revision of a patch to make all nbtree entries
unique by having heap TIDs participate in comparisons.

Justin Pryzby sent in another revision of a patch to avoid repetitive log of
PREPARE during EXECUTE of prepared statements.

Haribabu Kommi sent in another revision of a patch to add libpq support to
connect to standby server as priority.

Dean Rasheed sent in a patch to fix an inconsistent use of default for updatable

Etsuro Fujita sent in a patch to remove some unneeded parallel safety tests from

Shawn Debnath sent in another revision of a patch to refactor the fsync
mechanism to support future SMGR implementations.

David Rowley and Tom Lane traded patches to use arrays instead of the List

Álvaro Herrera sent in another revision of a patch to take advantage of CoW
filesystems for WAL.

Alexander Kuzmenkov sent in another revision of a patch to remove unneeded

Ildus Kurbangaliev sent in another revision of a patch to make it possible to
use custom compression methods.

Antonin Houska sent in another revision of a patch to enable aggregate pushdown.

Joe Conway sent in a patch to allow for redacting all client messages for
functions marked as leakproof.

Jeff Janes sent in another revision of a patch to fix the costing for Bloom

Peter Moser sent in another revision of a patch to implement temporal queries
using range types.

Takamichi Osumi sent in a patch to implement CREATE OR REPLACE TRIGGER.

David Steele sent in two revisions of a patch to add exclusive backup
deprecation notes to documentation.

Álvaro Herrera sent in another revision of a patch to make it possible for
partitioned tables to be referenced by foreign keys.

Surafel Temesgen sent in two more revisions of a patch to implement FETCH FIRST

Masahiko Sawada sent in another revision of a patch to add a DISABLE_INDEX_CLEANUP
option to VACUUM and a corresponding --disable-index-cleanup option to vacuumdb.

Thomas Munro sent in a patch to drop the "smgr" type.

Tomáš Vondra and David Rowley traded patches to implement multivariate MCV lists
and histograms.

Christoph Berg and Tom Lane traded patches to fix an issue which manifested as
errors mentioning incomplete startup packets.

Nikita Glukhov sent in another revision of a patch to add the SQL/JSON

Nikita Glukhov sent in a patch to fix some memory leaks and mistaken error
handling in jsonb_plpython.

Kyotaro HORIGUCHI sent in another revision of a patch to prevent syscache from
bloating with negative cache entries.

David Rowley sent in another revision of a patch to make ALTER TABLE ... SET NOT
NULL operate by constraints alone.

Michael Meskes sent in a patch to make PREPARE work in ECPG.

Amit Langote sent in another revision of a patch to speed up planning with

Etsuro Fujita sent in a patch to revive the modify_in_place parameter to

Nikita Glukhov sent in another revision of a patch to implement JSON_TABLE.

Adrien Nayrat sent in another revision of a patch to make it possible to log a
sample of transactions.

Tatsuro Yamada sent in another revision of a patch to implement a progress
monitor for CLUSTER.

Michael Banck and Fabien COELHO traded patches to make it possible to enable
page checksums online.

Nikita Glukhov and Alexander Korotkov traded patches to implement jsonpath.

Sergei Kornilov sent in another revision of a patch to refactor
WaitForWALToBecomeAvailable to gracefully restart source.

Fabien COELHO sent in another revision of a patch to pgbench to add a
pseudo-random permutation function.

David Rowley sent in two more revisions of a patch to turn some NOT IN queries
into antijoins.

Pavel Stěhule sent in another revision of a patch to implement schema variables.
