From: Sergey Fukanchik <s(dot)fukanchik(at)postgrespro(dot)ru>
To: pgsql-hackers(at)postgresql(dot)org
Subject: [PATCH] Perform check for oversized WAL record before calculating record CRC
Date: 2025-09-06 11:00:46
Message-ID: db2c6c76-3ff0-484f-9957-11b99732d943@postgrespro.ru
Lists: pgsql-hackers
Hi Postgres hackers,
I found a case where the CRC of a ~1 GB block is calculated and then
immediately discarded.
There is a limit on WAL record size, XLogRecordMaxSize. If the record
being inserted is larger than that, it is discarded and an error is reported:
ERROR: oversized WAL record
DETAIL: WAL record would be 1069547521 bytes (of maximum 1069547520 bytes)
However, the CRC of the record data is calculated before the record size
is validated, so for an oversized record that CRC is computed and then
never used.
It is surely a minor issue, but it might be worth fixing, so I'm proposing
a patch. Since this situation is not covered by any existing tests, I also
included a test case that exercises the failure on oversized WAL records.
---
Sergey Fukanchik
Attachment: 0001-Perform-check-for-oversized-WAL-record-before-calcul.patch (text/x-patch, 4.0 KB)