Re: Spurious "apparent wraparound" via SimpleLruTruncate() rounding

From: Noah Misch <noah(at)leadboat(dot)com>
To: Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Spurious "apparent wraparound" via SimpleLruTruncate() rounding
Date: 2019-07-25 03:45:48
Message-ID: 20190725034548.GB2028622@rfd.leadboat.com
Lists: pgsql-hackers

On Wed, Jul 24, 2019 at 05:27:18PM +0900, Kyotaro Horiguchi wrote:
> Sorry in advance for link-breaking message forced by gmail..

Using the archives page "Resend email" link avoids that.

> https://www.postgresql.org/message-id/flat/20190202083822(dot)GC32531(at)gust(dot)leadboat(dot)com
>
> > 1. The result of the test is valid only until we release the SLRU ControlLock,
> > which we do before SlruScanDirCbDeleteCutoff() uses the cutoff to evaluate
> > segments for deletion. Once we release that lock, latest_page_number can
> > advance. This creates a TOCTOU race condition, allowing excess deletion:
> >
> >
> > [local] test=# table trunc_clog_concurrency ;
> > ERROR: could not access status of transaction 2149484247
> > DETAIL: Could not open file "pg_xact/0801": No such file or directory.
>
> It seems like some other vacuum process saw larger cutoff page?

No, just one VACUUM suffices.

> If I'm
> not missing something, the missing page is no longer the
> "recently-populated" page at the time (As I understand it as the last
> page that holds valid data). Couldn't we just ignore ENOENT there?

The server reported this error while attempting to read CLOG to determine
whether a tuple's xmin committed or aborted. That ENOENT means the relevant
CLOG page is not available. To ignore that ENOENT, the server would need to
guess whether to consider the xmin committed or consider it aborted. So, no,
we can't just ignore the ENOENT.
