From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Justin Pryzby <pryzby(at)telsasoft(dot)com>
Subject: Re: PG 13 release notes, first draft
On Tue, May 5, 2020 at 12:50:01PM -0500, Justin Pryzby wrote:
> On Tue, May 05, 2020 at 01:18:09PM -0400, Bruce Momjian wrote:
> > > |Release date: 2020-05-03
> > > => Should say 2020-XX-XX, before someone like me goes and installs it everywhere in sight.
> > Agreed!
> > > |These triggers cannot change the destination partition.
> > > => Maybe say "cannot change which partition is the destination"
> Looks like you copied my quote mark :(
I kind of liked it, but OK, removed. ;-)
> > > | Allow hash aggregation to use disk storage for large aggregation result sets (Jeff Davis)
> > > | Previously, hash aggregation was not used if it was expected to use more than work_mem memory. This is controlled by enable_hashagg_disk.
> > > => enable_hashagg_disk doesn't behave like other enable_* parameters.
> > > As I understand, disabling it only "opportunistically" avoids plans which are
> > > *expected* to overflow work_mem. I think we should specifically say that, and
> > > maybe suggest recalibrating work_mem.
> > I went with "avoided":
> > Previously, hash aggregation was avoided if it was expected to use more
> > than work_mem memory. This is controlled by enable_hashagg_disk.
> I think we should expand on this:
> |Previously, hash aggregation was avoided if it was expected to use more than
> |work_mem. To get back the old behavior, increasing work_mem.
I think work_mem has too many other effects to recommend just changing
it for this.
> |The parameter enable_hashagg_disk controls whether a plan which is *expected*
> |to spill to disk will be considered. During execution, an aggregate node which
> |exceeds work_mem will spill to disk regardless of this parameter.
> I wrote something similar here:
I think this kind of information should be in our docs, not really the
release notes.
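To illustrate the behavior being described, here is a hedged SQL sketch. It assumes the draft-era parameter name enable_hashagg_disk (still under discussion in this thread) and a hypothetical table big_table; the exact GUC naming may differ in the released version.

```sql
-- Sketch, assuming the draft-era GUC name enable_hashagg_disk and a
-- hypothetical table big_table.
SET work_mem = '4MB';

-- With the parameter off, the planner avoids hash-aggregate plans that
-- are *expected* to exceed work_mem.
SET enable_hashagg_disk = off;
EXPLAIN SELECT a, count(*) FROM big_table GROUP BY a;

-- The parameter only affects planning: if a chosen HashAggregate
-- underestimates its memory use, it will still spill to disk at
-- execution time rather than exceed work_mem.
```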
> > > | This is controlled by GUC wal_skip_threshold.
> > > I think you should say that's a size threshold which determines which strategy
> > > to use (WAL or fsync).
> > I went with:
> > The WAL write amount where this happens is controlled by wal_skip_threshold.
> > They can use the doc link if they want more detail.
> I guess I would say "relations larger than wal_skip_threshold will be fsynced
> rather than copied to WAL"
How is this?
Relations larger than wal_skip_threshold will have their files fsynced
rather than their WAL records written.
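As a configuration sketch of the threshold being discussed (the surrounding settings shown are assumptions about a typical wal_level = minimal setup, not part of this thread):

```sql
-- postgresql.conf sketch: with wal_level = minimal, a relation created
-- or rewritten in the same transaction can skip WAL logging.  At commit,
-- files larger than wal_skip_threshold are fsynced to durable storage;
-- smaller files have their contents written to WAL instead.
-- wal_level = minimal
-- max_wal_senders = 0
-- wal_skip_threshold = 2MB
```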
+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +