Re: New strategies for freezing, advancing relfrozenxid early

From: Andres Freund <andres(at)anarazel(dot)de>
To: Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Jeff Davis <pgsql(at)j-davis(dot)com>, Matthias van de Meent <boekewurm+postgres(at)gmail(dot)com>, John Naylor <john(dot)naylor(at)enterprisedb(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: New strategies for freezing, advancing relfrozenxid early
Date: 2023-01-27 05:58:05
Message-ID: 20230127055805.ac6xlrrhwlsigpns@awork3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2023-01-26 19:01:03 -0800, Peter Geoghegan wrote:
> On Thu, Jan 26, 2023 at 6:37 PM Andres Freund <andres(at)anarazel(dot)de> wrote:
> > I also don't really see how that is responsive to anything else in my
> > email. That's just as true for the current gating condition (the issuance of
> > an FPI during heap_page_prune() / HTSV()).
> >
> > What I was wondering about is whether we should replace the
> > fpi_before != pgWalUsage.wal_fpi
> > with
> > records_before != pgWalUsage.wal_records && !WouldIssueFpi(page)
>
> I understand that. What I'm saying is that that's going to create a
> huge problem of its own, unless you separately account for that
> problem.

> The simplest and obvious example is something like a pgbench_tellers
> table. VACUUM will generally run fast enough relative to the workload
> that it will set some of those pages all-visible. Now it's going to
> freeze them, too. Arguably it shouldn't even be setting the pages
> all-visible, but now you make that existing problem much worse.

So the benefit of the FPI condition is that it indicates that the page hasn't
been updated all that recently, because, after all, a checkpoint has happened
since? If that's the intention, it needs a huge honking comment - at least I
can't read that out of:

Also freeze when pruning generated an FPI, if doing so means that we set the
page all-frozen afterwards (might not happen until final heap pass).

It doesn't seem like a great proxy to me. ISTM that this means that how
aggressive vacuum is about opportunistically freezing pages depends on config
variables like checkpoint_timeout & max_wal_size (less common opportunistic
freezing), full_page_writes & use of unlogged tables (no opportunistic
freezing), and the largely random scheduling of autovac workers.
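
To spell out the coarseness (paraphrasing from memory, not the verbatim tree
code): an FPI is only emitted for the first modification of a page since the
last checkpoint - essentially what the existing XLogCheckBufferNeedsBackup()
tests, and presumably what a hypothetical WouldIssueFpi() would test as well.
So the gate amounts to something like:

fpi_before = pgWalUsage.wal_fpi;
/* prune; emits an FPI iff this is the page's first change since the last checkpoint */
heap_page_prune(...);
if (freeze_required ||
    (page_would_be_all_frozen && pgWalUsage.wal_fpi != fpi_before))
    /* opportunistically freeze; the variable names here are stand-ins */;

i.e. "freeze iff a checkpoint happened between the page's last modification
and the prune".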

I can see it making a difference for pgbench_tellers, but it's a pretty small
difference in overall WAL volume. I can think of more adverse workloads - but
even there the difference seems not huge, and not predictably reached. Due to
the freeze plan stuff you added, the amount of WAL for freezing a page is
pretty darn small compared to the amount of WAL needed to fill a page with
non-frozen tuples.
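
(Back-of-the-envelope, from memory rather than measured: with the
deduplicated freeze plans, freezing a full page whose ~100 tuples share one
plan should be roughly record header + a ~dozen byte plan + 2 bytes per
offset, i.e. a couple hundred bytes of WAL - while filling that page with
non-frozen tuples in the first place generated on the order of the full 8kB,
more if they were inserted one-by-one, before even counting FPIs.)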

That's not to say we shouldn't reduce the risk - I agree that both the "any
fpi" and the "any record" conditions can have adverse effects!

However, an already dirty page getting frozen is also the one case where
freezing won't have a meaningful write amplification effect. So I think it's
worth spending effort figuring out how we can make freezing in that
situation have unlikely and small downsides.

The cases with downsides are tables that are very heavily updated throughout,
where the page is going to be defrosted again almost immediately. As you say,
the all-visible marking has a similar problem.

Essentially the "any fpi" logic is a very coarse grained way of using the page
LSN as a measurement. As I said, I don't think "has a checkpoint occurred
since the last write" is a good metric to avoid unnecessary freezing - it's
too coarse. But I think using the LSN is the right thought. What about
something like

lsn_threshold = insert_lsn - (insert_lsn - lsn_of_last_vacuum) * 0.1;
if (/* other conds */ && PageGetLSN(page) <= lsn_threshold)
    FreezeMe();

I probably got some details wrong; what I am going for with lsn_threshold is
that we'd freeze an already dirty page if it hasn't been updated within 10%
of the LSN distance to the last VACUUM.
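
With made-up numbers: if the last VACUUM finished at insert LSN 1000 and we
are now at insert LSN 2000, lsn_threshold = 2000 - (2000 - 1000) * 0.1 =
1900. We'd freeze an already dirty page iff its LSN is <= 1900, i.e. iff it
wasn't modified during the most recent 10% of the WAL generated since that
VACUUM.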

> The important point is that there doesn't seem to be any good way
> around thinking about the table as a whole if you're going to freeze
> speculatively. This is not the same dynamic as we see with the FPI
> thing IMV -- that's not nearly so speculative as what you're talking
> about, since it is speculative in roughly the same sense that eager
> freezing was speculative (hence the suggestion that something like
> vacuum_freeze_strategy_threshold could have a role to play).

I don't think the speculation is that fundamentally different - a heavily
updated table with a bit of a historic, non-changing portion makes
vacuum_freeze_strategy_threshold freeze way more aggressively than either
"any record" or "any fpi".

> The FPI thing is mostly about the cost now versus the cost later on.
> You're gambling that you won't get another FPI later on if you freeze
> now. But the cost of a second FPI later on is so much higher than the
> added cost of freezing now that that's a very favorable bet, that we
> can afford to "lose" many times while still coming out ahead overall.

Agreed. And not just avoiding FPIs, avoiding another dirtying of the page! The
latter part is especially huge IMO. Depending on s_b size it can also avoid
another *read* of the page...
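
(Rough expected-value math, numbers made up: if a lost bet costs a couple
hundred bytes of freeze WAL, while a won bet saves an ~8kB FPI plus a second
write of the page, freezing only has to win something like one time in
thirty to break even on WAL volume alone - the avoided dirtying, write and
possible re-read are on top of that.)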

> And even when we lose, you generally still won't have been completely
> wrong -- even then there generally will indeed be a second FPI later
> on for the same page, to go with everything else. This makes the
> wasted freezing even less significant, on a comparative basis!

This is precisely why I think that we can afford to be quite aggressive about
freezing already dirty pages...

Greetings,

Andres Freund
