From: Evgeny Morozov <postgresql3(at)realityexists(dot)net>
To: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
Cc: PostgreSQL General <pgsql-general(at)postgresql(dot)org>, "Peter J(dot) Holzer" <hjp-pgsql(at)hjp(dot)at>
Subject: Re: "PANIC: could not open critical system index 2662" - twice
Date: 2023-04-11 16:44:54
Message-ID: 010201877134be1c-fb837249-04a1-4cb0-a13f-c542425b50a0-000000@eu-west-1.amazonses.com
Lists: pgsql-general
> No idea about the former, but bad hardware is a good enough explanation.
> As to keeping it from happening: use good hardware.
Alright, thanks, I'll just keep my fingers crossed that it doesn't
happen again then!
> Also: Use checksums. PostgreSQL offers data checksums[1]. Some
> filesystems also offer checksums.
We have data_checksums=on. (It must have been enabled by default, since I
cannot find it set in our config files anywhere.) However, the docs say "Only
data pages are protected by checksums; internal data structures and
temporary files are not.", so I guess pg_class_oid_index might be an
"internal data structure"?
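For reference, this is how we checked the setting. (The connection
parameters here are placeholders; data_checksums is a read-only setting
fixed at initdb time, which is why it never appears in postgresql.conf.)

```shell
# Check whether data-page checksums are enabled for the cluster.
# data_checksums is decided at initdb time (or changed offline with
# pg_checksums); it is exposed as a read-only setting.
psql -U postgres -d postgres -c "SHOW data_checksums;"

# Equivalent query form:
psql -U postgres -d postgres -c "SELECT current_setting('data_checksums');"
```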
We also have checksum=on for the ZFS dataset on which the data is stored
(also the default - we didn't change it). ZFS did detect problems (zpool
status reported read, write and checksum errors for one of the old
disks), but it also said "errors: No known data errors". I understood
that to mean that it recovered from the errors, i.e. wrote the data to
different disk blocks or read it from another disk in the pool.
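For what it's worth, these are the commands we used to inspect the pool
("tank" is a placeholder for the actual pool name):

```shell
# Show per-device read/write/checksum error counters and the pool-wide
# "errors:" summary line. "No known data errors" means ZFS could repair
# or serve every affected block from redundancy (mirror/raidz copies).
zpool status -v tank

# Force ZFS to re-read and re-verify every allocated block against its
# checksum, repairing from redundancy where possible:
zpool scrub tank
zpool status tank   # watch scrub progress and final results
```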