From: David Fetter <david(at)fetter(dot)org>
To: PostgreSQL Announce <pgsql-announce(at)postgresql(dot)org>
Subject: == PostgreSQL Weekly News - October 14, 2018 ==
== PostgreSQL Weekly News - October 14, 2018 ==
PostgreSQL 11 RC1 released. This is a great opportunity to test!
If all goes well, 11 will be released October 18, 2018.
pgday.Seoul 2018 will be held in Seoul, South Korea on November 3, 2018.
Korean language information is here:
== PostgreSQL Product News ==
Ajqvue Version 2.10, a Java-based UI which supports PostgreSQL, released.
pg_partman 4.0.0, a management system for partitioned tables, released.
pgFormatter 3.2, a formatter/beautifier for SQL code, released.
== PostgreSQL Jobs for October ==
== PostgreSQL Local ==
PostgresConf South Africa 2018 will take place in Johannesburg on October 9, 2018.
PostgreSQL Conference Europe 2018 will be held on October 23-26, 2018 at the
Lisbon Marriott Hotel in Lisbon, Portugal.
2Q PGConf will be on December 4-5, 2018 in Chicago, IL.
PGConf.ASIA 2018 will take place on December 10-12, 2018 in Akihabara, Tokyo, Japan.
pgDay Paris 2019 will be held in Paris, France on March 12, 2019
at 199bis rue Saint-Martin. The CfP is open until November 30, 2018.
PGDay.IT 2019 will take place May 16th and May 17th in Bologna, Italy. The CfP
is open at https://2019.pgday.it/en/blog/cfp and the Call for Workshops is at
https://2019.pgday.it/en/blog/cfw until January 15, 2019.
== PostgreSQL in the News ==
Planet PostgreSQL: http://planet.postgresql.org/
PostgreSQL Weekly News is brought to you this week by David Fetter
Submit news and announcements by Sunday at 3:00pm PST8PDT to david(at)fetter(dot)org.
== Applied Patches ==
Magnus Hagander pushed:
- Fix spelling error. Reported by Alexander Lakhin in bug #15423
Michaël Paquier pushed:
- Improve two error messages related to foreign keys on partitioned tables.
Error messages for creating a foreign key on a partitioned table using ONLY or
NOT VALID were wrong in mentioning the objects they worked on. Along the way,
this commit adds some regression tests missing for those cases. Author:
Laurenz Albe Reviewed-by: Michael Paquier Discussion:
- Add pg_ls_archive_statusdir function. This function lists the contents of the
WAL archive status directory, and is intended to be used by monitoring tools.
Unlike pg_ls_dir(), access to it can be granted to non-superusers so that
those monitoring tools can observe the principle of least privilege. Access
is also given by default to members of pg_monitor. Author: Christoph
Moench-Tegeder Reviewed-by: Aya Iwata Discussion:
- Add TAP tests for pg_verify_checksums. All options available in the utility get
coverage: tests with disabled page checksums, tests with enabled checksums, and
emulation of corruption and broken checksums with a full scan and a single
relfilenode scan. This patch has been contributed mainly by
Michael Banck and Magnus Hagander with things presented on various threads,
and I have gathered all the contents into a single patch. Author: Michael
Banck, Magnus Hagander, Michael Paquier Reviewed-by: Peter Eisentraut
- Avoid duplicate XIDs at recovery when building initial snapshot. On a
primary, sets of XLOG_RUNNING_XACTS records are generated on a periodic basis
to allow recovery to build the initial state of transactions for a hot
standby. The set of transaction IDs is created by scanning all the entries in
ProcArray. However, its logic never counted on the fact that two-phase
transactions finishing their prepare phase can put ProcArray in a state where
there are two entries with the same transaction ID, one for the initial
transaction which gets cleared when prepare finishes, and a second, dummy,
entry to track that the transaction is still running after prepare finishes.
This ensures a continuous presence of the transaction, so that callers of, for
example, TransactionIdIsInProgress() are always able to see it as alive. So,
if XLOG_RUNNING_XACTS takes a standby snapshot while a two-phase transaction
finishes preparing, the record can finish with duplicated XIDs, which is a
state expected by design. If this record gets applied on a standby to
initialize its recovery state, it would simply fail, so the odds of facing
this failure are very low in practice. It would be tempting to change the
generation of XLOG_RUNNING_XACTS so that duplicates are removed on the
source, but this would require holding ProcArrayLock for longer, which would
impact all workloads, particularly those using two-phase transactions
heavily. XLOG_RUNNING_XACTS is actually used only to initialize the standby
state at recovery, so the solution taken instead is to discard duplicates
when applying the initial snapshot. Diagnosed-by: Konstantin Knizhnik Author:
Michael Paquier Discussion:
Peter Eisentraut pushed:
- Track procedure calls in pg_stat_user_functions. This was forgotten when
procedures were implemented. Reported-by: Lukas Fittl <lukas(at)fittl(dot)com>
- Turn transaction_isolation into GUC enum. It was previously a string setting
that was converted into an enum by custom code, but using the GUC enum
facility seems much simpler and doesn't change any functionality, except that
set transaction_isolation='default'; no longer works, but that was never
documented and doesn't work with any other transaction characteristics. (Note
that this is not the same as RESET or SET TO DEFAULT, which still work.)
Reviewed-by: Heikki Linnakangas <hlinnaka(at)iki(dot)fi> Discussion:
- Test that event triggers work in functions and procedures. This ensures that
we have coverage of all the ProcessUtilityContext variants.
- Slightly correct context check for event triggers. The previous check for a
"complete query" omitted the new PROCESS_UTILITY_QUERY_NONATOMIC value. This
didn't actually make a difference in practice, because only CALL and SET from
PL/pgSQL run in this state, but it's more correct to include it anyway.
Álvaro Herrera pushed:
- Silence compiler warning in Assert(). gcc 6.3 does not whine about this
mistake I made in 39808e8868c8 but evidently lots of other compilers do,
according to Michael Paquier, Peter Eisentraut, Arthur Zakirov, Tomas Vondra.
Discussion: too many to list
- Correct attach/detach logic for FKs in partitions. There was no code to
handle foreign key constraints on partitioned tables in the case of ALTER
TABLE DETACH; and if you happened to ATTACH a partition that already had an
equivalent constraint, that one was ignored and a new constraint was created.
Adding this to the fact that foreign key cloning reuses the constraint name on
the partition instead of generating a new name (as it probably should, to
cater to SQL standard rules about constraint naming within schemas), the
result was a pretty poor user experience -- the most visible failure was that
just detaching a partition and re-attaching it failed with an error such as
ERROR: duplicate key value violates unique constraint
"pg_constraint_conrelid_contypid_conname_index" DETAIL: Key (conrelid,
contypid, conname)=(26702, 0, test_result_asset_id_fkey) already exists.
because it would try to create an identically-named constraint in the
partition. To make matters worse, if you tried to drop the constraint in the
now-independent partition, that would fail because the constraint was still
seen as dependent on the constraint in its former parent partitioned table:
ERROR: cannot drop inherited constraint "test_result_asset_id_fkey" of
relation "test_result_cbsystem_0001_0050_monthly_2018_09" This fix attacks the
problem from two angles: first, when the partition is detached, the constraint
is also marked as independent, so the drop now works. Second, when the
partition is re-attached, we scan existing constraints searching for one
matching the FK in the parent, and if one exists, we link that one to the
parent constraint. So we don't end up with a duplicate -- and better yet, we
don't need to scan the referenced table to verify that the constraint holds.
To implement this I made a small change to previously planner-only struct
ForeignKeyCacheInfo to contain the constraint OID; also relcache now maintains
the list of FKs for partitioned tables too. Backpatch to 11. Reported-by:
Michael Vitale (bug #15425) Discussion:
Tom Lane pushed:
- Avoid O(N^2) cost in ExecFindRowMark(). If there are many ExecRowMark
structs, we spent O(N^2) time in ExecFindRowMark during executor startup.
Once upon a time this was not of great concern, but the addition of native
partitioning has squeezed out enough other costs that this can become the
dominant overhead in some use-cases for tables with many partitions. To fix,
simply replace that List data structure with an array. This adds a little bit
of cost to execCurrentOf(), but not much, and anyway that code path is neither
of large importance nor very efficient now. If we ever decide it is a
bottleneck, constructing a hash table for lookup-by-tableoid would likely be
the thing to do. Per complaint from Amit Langote, though this is different
from his fix proposal. Discussion:
- Improve snprintf.c's handling of NaN, Infinity, and minus zero. Up to now,
float4out/float8out handled NaN and Infinity cases explicitly, and invoked
psprintf only for ordinary float values. This was done because platform
implementations of snprintf produce varying representations of these special
cases. But now that we use snprintf.c always, it's better to give it the
responsibility to produce a uniform representation of these cases, so that we
have uniformity across the board not only in float4out/float8out. Hence, move
that work into fmtfloat(). Also, teach fmtfloat() to recognize IEEE minus
zero and handle it correctly. The previous coding worked only accidentally,
and would fail for e.g. "%+f" format (it'd print "+-0.00000"). Now that we're
using snprintf.c everywhere, it's not acceptable for it to do weird things in
corner cases. (This incidentally avoids a portability problem we've seen on
some really ancient platforms, that native sprintf does the wrong thing with
minus zero.) Also, introduce a new entry point in snprintf.c to allow
floatout to bypass the work of interpreting a well-known format spec, as
well as bypassing the overhead of the psprintf layer. I modeled this API
loosely on strfromd(). In my testing, this brings float4out/float8out back to
approximately the same speed they had when using native snprintf, fixing one
of the main performance issues caused by using snprintf.c. (There is some
talk of more aggressive work to improve the speed of floating-point output
conversion, but these changes seem to provide a better starting point for such
work anyway.) Getting rid of the previous ad-hoc hack for Infinity/NaN in
fmtfloat() allows removing <ctype.h> from snprintf.c's #includes. I also
removed a few other #includes that I think are historical, though the
buildfarm may expose that as wrong. Discussion:
- Advance transaction timestamp for intra-procedure transactions. Per
discussion, this behavior seems less astonishing than not doing so. Peter
Eisentraut and Tom Lane Discussion:
- Fix omissions in snprintf.c's coverage of standard *printf functions. A
warning on a NetBSD box revealed to me that pg_waldump/compat.c is using
vprintf(), which snprintf.c did not provide coverage for. This is not good if
we want to have uniform *printf behavior, and it's pretty silly to omit when
it's a one-line function. I also noted that snprintf.c has pg_vsprintf() but
for some reason it was not exposed to the outside world, creating another way
in which code might accidentally invoke the platform *printf family. Let's
just make sure that we replace all eight of the POSIX-standard printf family.
Also, upgrade plperl.h and plpython.h to make sure that they do their
undefine/redefine rain dance for all eight, not some random maybe-sufficient
subset.
- Convert some long lists in configure.in to one-line-per-entry style. The idea
here is that patches that add items to these lists will often be easier to
rebase over other additions to the same lists, because they won't be trying to
touch the very same line of configure.in. There will still be merge conflicts
in the configure script, but that can be fixed just by re-running autoconf (or
by leaving configure out of the submitted patch to begin with ...)
Implementation note: use of m4_normalize() is necessary to get rid of the
newlines, else incorrect shell syntax will be emitted. But with that hack,
the generated configure script is identical to what it was before.
- Select appropriate PG_PRINTF_ATTRIBUTE for recent NetBSD. NetBSD-current
generates a large number of warnings about "%m" not being appropriate to use
with *printf functions. While that's true for their native printf, it's
surely not true for snprintf.c, so I think they have misunderstood gcc's
definition of the "gnu_printf" archetype. Nonetheless, choosing "__syslog__"
instead silences the warnings; so teach configure about that. Since this is
only a cosmetic warning issue (and anyway it depends on previous hacking to be
self-consistent), no back-patch. Discussion:
- Remove no-longer-needed variant expected regression result files.
numerology_1.out and float8-small-is-zero_1.out differ from their base files
only in showing plain zero rather than minus zero for some results. I believe
that in the wake of commit 6eb3eb577, we will print minus zero as such on all
IEEE-float platforms (and non-IEEE floats are going to cause many more
regression diffs than this, anyway). Hence we should be able to remove these
and eliminate a bit of maintenance pain. Let's see if the buildfarm agrees.
- Make src/common/exec.c's error logging less ugly. This code used elog where
it really ought to use ereport, mainly so that it can report a SQLSTATE
different from ERRCODE_INTERNAL_ERROR. There were some other random
deviations from typical error report practice too. In addition, we can make
some cleanups that were impractical six months ago: * Use one variadic macro,
instead of several with different numbers of arguments, reducing the
temptation to force-fit messages into particular numbers of arguments; * Use
%m, even in the frontend case, simplifying the code. Discussion:
- Make float exponent output on Windows look the same as elsewhere. Windows,
alone among our supported platforms, likes to emit three-digit exponent fields
even when two digits would do. Adjust such results to look like the way
everyone else does it. Eliminate a bunch of variant expected-output files
that were needed only because of this quirk. Discussion:
- Remove dead reference to ecpg resultmap file. I missed this in my prior
commit because it doesn't matter in non-VPATH builds. Per buildfarm.
- Simplify use of AllocSetContextCreate() wrapper macro. We can allow this
macro to accept either abbreviated or non-abbreviated allocation parameters by
making use of __VA_ARGS__. As noted by Andres Freund, it's unlikely that any
compiler would have __builtin_constant_p but not __VA_ARGS__, so this gives up
little or no error checking, and it avoids a minor but annoying API break for
extensions. With this change, there is no reason for anybody to call
AllocSetContextCreateExtended directly, so in HEAD I renamed it to
AllocSetContextCreateInternal. It's probably too late for an ABI break like
that in 11, though. Discussion:
- Another round of portability hacking on ECPG regression tests. Removing the
separate Windows expected-files in commit f1885386f turns out to have been too
optimistic: on most (but not all!) of our Windows buildfarm members, the tests
still print floats with three exponent digits, because they're invoking the
native printf() not snprintf.c. But rather than put back the extra
expected-files, let's hack the three tests in question so that they adjust
float formatting the same way snprintf.c does. Discussion:
- Make an editing pass over v11 release notes. Set the release date. Do a
bunch of copy-editing and markup improvement, rearrange some stuff into what
seemed a more sensible order, and move some things that did not seem to be in
the right place.
- Doc: copy-editing for CREATE INDEX reference page. Justin Pryzby, Jonathan S.
Katz, and myself. Discussion:
- Doc: further copy-editing for v11 release notes. Justin Pryzby, Jonathan S.
Katz, and myself. Discussion:
- Doc: still further copy-editing for v11 release notes. Justin Pryzby and
myself. Discussion: https://postgr.es/m/20181006134249.GD871@telsasoft.com
- Clean up/tighten up coercibility checks in opr_sanity regression test. With
the removal of the old abstime type, there are no longer any cases in this
test where we need to use the weaker castcontext-ignoring form of binary
coercibility check. (The other major source of such headaches,
apparently-incompatible hash functions, is now hashvalidate()'s problem not
this test script's problem.) Hence, just use binary_coercible() everywhere,
and remove the comments explaining why we don't do so --- which were broken
anyway by cda6a8d01. I left physically_coercible() in place but renamed it to
better match what it's actually testing, and added some comments. Also, in
test queries that have an assumption about the maximum number of function
arguments they need to handle, add a clause to make them fail if someday
there's a relevant function with more arguments. Otherwise we're likely not
to notice that we need to extend the queries. Discussion:
- Use PlaceHolderVars within the quals of a FULL JOIN. This prevents failures
in cases where we pull up a constant or var-free expression from a subquery
and put it into a full join's qual. That can result in not recognizing the
qual as containing a mergejoin-able or hashjoin-able condition. A PHV
prevents the problem because it is still recognized as belonging to the side
of the join the subquery is in. I'm not very sure about the net effect of
this change on plan quality. In "typical" cases where the join keys are Vars,
nothing changes. In an affected case, the PHV-wrapped expression is less
likely to be seen as equal to PHV-less instances below the join, but more
likely to be seen as equal to similar expressions above the join, so it may
end up being a wash. In the one existing case where there's any visible
change in a regression-test plan, it amounts to referencing a lower
computation of a COALESCE result instead of recomputing it, which seems like a
win. Given my uncertainty about that and the lack of field complaints, no
back-patch, even though this is a very ancient problem. Discussion:
- Make some subquery-using test cases a bit more robust. These test cases could
be adversely affected by an upcoming change to allow pullup of FROM-less
subqueries. Tweak them to ensure that they'll continue to test what they did
before. Discussion:
Thomas Munro pushed:
- Relax transactional restrictions on ALTER TYPE ... ADD VALUE (redux).
Originally committed as 15bc038f (plus some follow-ups), this was reverted in
28e07270 due to a problem discovered in parallel workers. This new version
corrects that problem by sending the list of uncommitted enum values to
parallel workers. Here follows the original commit message describing the
change: To prevent possibly breaking indexes on enum columns, we must keep
uncommitted enum values from getting stored in tables, unless we can be sure
that any such column is new in the current transaction. Formerly, we enforced
this by disallowing ALTER TYPE ... ADD VALUE from being executed at all in a
transaction block, unless the target enum type had been created in the current
transaction. This patch removes that restriction, and instead insists that an
uncommitted enum value can't be referenced unless it belongs to an enum type
created in the same transaction as the value. Per discussion, this should be
a bit less onerous. It does require each function that could possibly return
a new enum value to SQL operations to check this restriction, but there aren't
so many of those that this seems unmaintainable. Author: Andrew Dunstan and
Tom Lane, with parallel query fix by Thomas Munro Reviewed-by: Tom Lane
Greg Stark pushed:
- Add "B" suffix for bytes to docs. 6e7baa3227 and b06d8e58b5 added "B" as a
valid suffix for GUC_UNIT_BYTES but neglected to add it to the docs.
Andres Freund pushed:
- Fix logical decoding error when system table w/ toast is repeatedly rewritten.
Repeatedly rewriting a mapped catalog table with VACUUM FULL or CLUSTER could
cause logical decoding to fail with: ERROR: could not map filenode "%s" to
relation OID. To trigger the problem the rewritten catalog had to have live
tuples with toasted columns. The problem was triggered as during catalog
table rewrites the heap_insert() check that prevents logical decoding
information from being emitted for system catalogs failed to treat the new heap's
toast table as a system catalog (because the new heap is not recognized as a
catalog table via RelationIsLogicallyLogged()). The relmapper, in contrast to
the normal catalog contents, does not contain historical information. After a
single rewrite of a mapped table the new relation is known to the relmapper,
but if the table is rewritten twice before logical decoding occurs, the
relfilenode cannot be mapped to a relation anymore, which then leads us to
error out. This only happens for toast tables, because the main table
contents aren't re-inserted with heap_insert(). The fix is simple, add a new
heap_insert() flag that prevents logical decoding information from being
emitted, and accept during decoding that there might not be tuple data for
toast tables. Unfortunately that does not fix pre-existing logical decoding
errors. Doing so would require not throwing an error when a filenode cannot be
mapped to a relation during decoding, and that seems too likely to hide bugs.
If it's crucial to fix decoding for an existing slot, temporarily changing the
ERROR in ReorderBufferCommit() to a WARNING appears to be the best fix.
Author: Andres Freund Discussion:
Backpatch: 9.4-, where logical decoding was introduced
- Force synchronous commit to be enabled for all test_decoding tests. Without
that the tests fail when forced to be run against a cluster with
synchronous_commit = off (as the WAL might not yet be flushed to disk by the
point logical decoding gets called, and thus the expected output breaks). Most
tests already do that; this adds it to a few newer tests. Author: Andres Freund
- Remove timetravel extension. The extension depended on old types which are
about to be removed. As the code additionally was pretty crufty and didn't
provide much in the way of functionality, removing the extension seems to be
the best way forward. It's fairly trivial to write functionality in plpgsql
that more than covers what timetravel did. Author: Andres Freund Discussion:
- Move timeofday() implementation out of nabstime.c. nabstime.c is about to be
removed, but timeofday() isn't related to the rest of the functionality
therein, and some find it useful. Move to timestamp.c. Discussion:
- Remove deprecated abstime, reltime, tinterval datatypes. These types have
been deprecated for a *long* time. Catversion bump, for obvious reasons.
Author: Andres Freund Discussion:
Alexander Korotkov pushed:
- contrib/bloom documentation improvement. This commit documents the rounding of
the "length" parameter and the absence of support for unique indexes and for
searching NULLs. Backpatch to 9.6 where contrib/bloom was introduced. Discussion:
Author: Oleg Bartunov with minor editorialization by me Backpatch-through: 9.6
- Add missed tag in bloom.sgml. Backpatch commits don't contain this error.
== Pending Patches ==
David Rowley sent in a patch to clarify and correct the documentation around
runtime partition pruning.
Alexander Kuzmenkov sent in another revision of a patch to remove some unneeded
Laurenz Albe sent in another revision of a patch to add pg_promote() to promote
a standby server.
Peter Eisentraut sent in another revision of a patch to implement chained
transactions (T261 in the SQL standard).
David Rowley sent in another revision of a patch to make run-time partition
pruning more efficient.
Andrew Dunstan sent in another revision of a patch to add an --exclude-database
option to pg_dumpall.
Haribabu Kommi sent in a patch to add a new API, setNewfilenode, and an API to
create an INIT_FORKNUM file and a wrapper for same, table_create_init_fork.
Kyotaro HORIGUCHI sent in another revision of a patch to ensure that
anti-wraparound VACUUMs are always set as aggressive.
Yang Xiao sent in a patch to ECPG to ensure that functions in dt_common
correctly detect integer overflow.
Yang Xiao sent in a patch to add an overflow test to numeric_exp().
Andres Freund sent in a patch to C99-ify FunctionCallInfoData and remove
fmgr.[ch] duplication using macro magic.
David Rowley sent in another revision of a patch to correct some comments and
fix an outdated README from run-time pruning.
Daniel Gustafsson sent in two revisions of a patch to support using a custom
socket directory during pg_upgrade.
Thomas Munro sent in two more revisions of a patch to use pread()/pwrite()
instead of lseek() + read()/write() where available.
Thomas Munro sent in another revision of a patch to add kqueue(2) support for
Julien Demoor sent in another revision of a patch to make it possible to
collapse duplicate NOTIFYs.
Thomas Munro sent in another revision of a patch to enable parallel query with
SERIALIZABLE isolation and enable the read-only SERIALIZABLE optimization for
Peter Eisentraut sent in another revision of a patch to ensure that when an
error occurs during a pgbench run, pgbench exits with a non-zero exit code
distinct from the one for errors during initialization and writes a message at
the end reflecting the reason.
Amit Khandekar sent in another revision of a patch to implement the
Thomas Munro sent in four more revisions of a patch to refactor random seed and
start time initialization. Among other things, this ensures that background
workers, including parallel workers, have distinct sequence numbers in random().
Daniel Gustafsson sent in another revision of a patch to send an optional
message to the user when terminating or cancelling a backend.
Thomas Munro sent in a patch to add a proc_die_hook to customize die() interrupt
handling and remove the async-signal-unsafe bgworker_die() function.
Richard Guo and Michaël Paquier traded patches to restore CurrentUserId only if
'prevUser' is valid.
Amit Langote sent in another revision of a patch to add pg_partition_tree to
display information about partitions.
Haribabu Kommi sent in a patch to add the extension-specific details of all
extensions present in the installation directory to pg_available_extensions.
Dilip Kumar sent in a WIP patch to implement undoworker and transaction
rollback using an UNDO log.
Kyotaro HORIGUCHI sent in two more revisions of a patch to add a TAP test for
the copy-truncation optimization, write a WAL entry for any empty nbtree index
build, add infrastructure to the WAL-logging skip feature, and fix the
WAL-skipping feature using same.
Surafel Temesgen sent in a patch to add an optional WHEN to COPY ... FROM.
Andrey Klychkov sent in a patch to change simple_heap_insert() to a macro,
avoiding a function call in the process.
Konstantin Knizhnik sent in another revision of a patch to optimize usage of
Sergei Kornilov sent in another revision of a patch to redo and expand the
Tom Lane sent in a patch to Use PHVs within the join quals of a full join, even
when it's the lowest nulling outer join. This is a partial fix for a
performance problem with FULL JOINs.
Nathan Bossart sent in a patch to refactor the maximum password length enforced
by client utilities, add documentation regarding effective password length
limits, and increase the accepted length of password messages to 8192.
Fabien COELHO sent in another revision of a patch to clean up pgbench's
John Naylor sent in another revision of a patch to add a pg_language lookup and
replace the ad hoc format for conversion functions in the genbki
John Naylor sent in two more revisions of a patch to avoid creating a free space
map for small tables.
Matheus de Oliveira sent in another revision of a patch to add support for ON
UPDATE/DELETE actions on ALTER CONSTRAINT.
Tom Lane sent in a patch to get rid of empty jointrees.