|From:||Andres Freund <andres(at)anarazel(dot)de>|
|To:||Robert Haas <robertmhaas(at)gmail(dot)com>|
|Cc:||Peter Geoghegan <pg(at)bowt(dot)ie>, Melanie Plageman <melanieplageman(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Jeff Davis <pgsql(at)j-davis(dot)com>|
|Subject:||Re: Eager page freeze criteria clarification|
On 2023-09-06 10:46:03 -0400, Robert Haas wrote:
> On Fri, Sep 1, 2023 at 9:07 PM Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
> > Why not also avoid setting pages all-visible? The WAL records aren't
> > too much smaller than most freeze records these days -- 64 bytes on
> > most systems. I realize that the rules for FPIs are a bit different
> > when page-level checksums aren't enabled, but fundamentally it's the
> > same situation. No?
> It's an interesting point. AFAIK, whether or not page-level checksums
> are enabled doesn't really matter here.
I think it does matter:
visibilitymap_set(Relation rel, BlockNumber heapBlk, Buffer heapBuf,
                  XLogRecPtr recptr, Buffer vmBuf, TransactionId cutoff_xid,
                  uint8 flags)
...
            recptr = log_heap_visible(rel, heapBuf, vmBuf, cutoff_xid, flags);

            /*
             * If data checksums are enabled (or wal_log_hints=on), we
             * need to protect the heap page from being torn.
             *
             * If not, then we must *not* update the heap page's LSN. In
             * this case, the FPI for the heap page was omitted from the
             * WAL record inserted above, so it would be incorrect to
             * update the heap page's LSN.
             */
            if (XLogHintBitIsNeeded())
            {
                Page        heapPage = BufferGetPage(heapBuf);
/*
 * Perform XLogInsert for a heap-visible operation.  'block' is the block
 * being marked all-visible, and vm_buffer is the buffer containing the
 * corresponding visibility map block.  Both should have already been modified
 * and dirtied.
 *
 * snapshotConflictHorizon comes from the largest xmin on the page being
 * marked all-visible.  REDO routine uses it to generate recovery conflicts.
 *
 * If checksums or wal_log_hints are enabled, we may also generate a full-page
 * image of heap_buffer.  Otherwise, we optimize away the FPI (by specifying
 * REGBUF_NO_IMAGE for the heap buffer), in which case the caller should *not*
 * update the heap page's LSN.
 */
XLogRecPtr
log_heap_visible(Relation rel, Buffer heap_buffer, Buffer vm_buffer,
                 TransactionId snapshotConflictHorizon, uint8 vmflags)
I.e. setting an otherwise-unmodified page all-visible won't trigger an FPI if
checksums are disabled, but will trigger one with checksums enabled. I think
that's a substantial difference in WAL volume for insert-only workloads...
In contrast to that, freezing will almost always trigger an FPI (except for
empty pages, but imo we ought to stop setting empty pages all-frozen).
Yep, a quick experiment confirms that:
DROP TABLE IF EXISTS foo;
CREATE TABLE foo AS SELECT generate_series(1, 10000000);
VACUUM (VERBOSE) foo;
checksums off: WAL usage: 44249 records, 3 full page images, 2632091 bytes
checksums on: WAL usage: 132748 records, 44253 full page images, 388758161 bytes
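Doing the arithmetic on those two lines (just back-of-the-envelope math on the
numbers quoted above, using the ~44k heap pages of the test table):

```python
# Sanity-check the VACUUM (VERBOSE) WAL usage numbers quoted above.
records_off, bytes_off = 44249, 2632091
records_on, bytes_on = 132748, 388758161
pages = 44248  # one VISIBLE (and FREEZE_PAGE) record per heap page

print(records_on / records_off)  # ~3x the record count with checksums on
print(bytes_on / bytes_off)      # ~148x the WAL volume
print(bytes_off / pages)         # ~59 bytes of WAL per page, checksums off
print(bytes_on / pages)          # ~8.8kB of WAL per page, checksums on
```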
I was initially confused by the 3x WAL record count - I was expecting 2x. The
reason is that with checksums on, we emit an FPI during the visibility check,
which then triggers the current heuristic for opportunistic freezing. The
saving grace is that the WAL volume is completely dominated by the FPIs:
Type                          N  (%)      Record size  (%)     FPI size   (%)      Combined size  (%)
----                          -  ---      -----------  ---     --------   ---      -------------  ---
XLOG/FPI_FOR_HINT         44253 ( 33.34)     2168397 (  7.84)  361094232 (100.00)     363262629 ( 93.44)
Transaction/INVALIDATION      1 (  0.00)          78 (  0.00)          0 (  0.00)            78 (  0.00)
Standby/INVALIDATIONS         1 (  0.00)          90 (  0.00)          0 (  0.00)            90 (  0.00)
Heap2/FREEZE_PAGE         44248 ( 33.33)    22876120 ( 82.72)          0 (  0.00)      22876120 (  5.88)
Heap2/VISIBLE             44248 ( 33.33)     2610642 (  9.44)      16384 (  0.00)       2627026 (  0.68)
Heap/INPLACE                  1 (  0.00)         188 (  0.00)          0 (  0.00)           188 (  0.00)
                       --------          ----------           ----------          ------------
Total                    132752            27655515 [7.11%]    361110616 [92.89%]    388766131 [100%]
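Dividing the sizes in that table by the record counts gives the average size
per record (again, plain arithmetic on the numbers above, nothing more):

```python
# Average per-record sizes derived from the stats table above.
fpi_for_hint = 361094232 / 44253  # ~8160 bytes: nearly a full 8kB page each
freeze_page = 22876120 / 44248    # ~517 bytes per FREEZE_PAGE record
visible = 2610642 / 44248         # ~59 bytes per VISIBLE record

print(round(fpi_for_hint), round(freeze_page), round(visible))
```

The ~59-byte average for Heap2/VISIBLE lines up with the "64 bytes on most
systems" figure mentioned upthread.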
In realistic tables, where rows are wider than a single int, FPI_FOR_HINT
dominates even further, as the FREEZE_PAGE would be smaller if there weren't
226 tuples on each page...
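The 226-tuples-per-page figure follows directly from the row count and the page
count of the test table, and implies FREEZE_PAGE costs only a couple of bytes
per tuple here (using the ~517 bytes/record average derived above):

```python
# Tuples per page in the 10M-row test table, and FREEZE_PAGE bytes per tuple.
rows, pages = 10_000_000, 44248
tuples_per_page = rows / pages      # ~226 tuples per 8kB page
freeze_bytes_per_tuple = 517 / 226  # ~2.3 bytes of FREEZE_PAGE WAL per tuple

print(round(tuples_per_page), round(freeze_bytes_per_tuple, 1))
```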