Re: Raising the checkpoint_timeout limit

From: Andres Freund <andres(at)anarazel(dot)de>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Noah Misch <noah(at)leadboat(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Raising the checkpoint_timeout limit
Date: 2016-02-02 11:32:00
Message-ID: 20160202113200.GU8743@awork2.anarazel.de
Lists: pgsql-hackers

On 2016-02-02 11:37:15 +0100, Simon Riggs wrote:
> If people wish to turn off crash recovery, they can already set fsync=off.
> I don't wish to see us support a setting that causes problems for people
> that don't understand what checkpoints are and why everybody needs them.

I don't think fsync=off and very long checkpoints are really
comparable. Many large modern machines, especially with directly
attached storage and/or large amounts of memory, take a *long* while to
boot. So any outage will be dealt with by a failover anyway. But at the
same time, a database in the 10TB+ range can't easily be copied again.
Thus running with fsync=off isn't something that you'd want in those
scenarios - it'd prevent the previous master/other standbys from failing
back/catching up; the databases could be arbitrarily corrupted after
all.
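
To make the contrast concrete, here's a minimal sketch of the
configuration being argued for - crash safety stays on, only the
checkpoint interval gets stretched ('8h' is a made-up example value;
it would be rejected under the current 1h limit this thread is about):

  ALTER SYSTEM SET fsync = on;
  ALTER SYSTEM SET checkpoint_timeout = '8h';
  SELECT pg_reload_conf();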

Additionally, a significant portion of the cost of checkpoints is full
page writes - you can easily get into the situation where you have
~20MB/sec of normal WAL without FPWs, but 300MB/s with them. That rate
is rather expensive, regardless of fsync=off.
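
The FPW spike is easy to eyeball; a rough sketch in psql (function
names as of 9.5, run against a busy server; the 10s sampling window is
an arbitrary choice):

  -- sample the WAL insert position twice, 10s apart
  SELECT pg_current_xlog_insert_location() AS start_lsn \gset
  SELECT pg_sleep(10);
  SELECT pg_xlog_location_diff(pg_current_xlog_insert_location(),
                               :'start_lsn') / (10 * 1024 * 1024)
         AS wal_mb_per_sec;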

> The current code needs to act differently with regard to very low settings,
> so when we are a small number of blocks remaining we don't spend hours
> performing them. Allowing very large values would make that even more
> strange.

Why is that a good thing? Every checkpoint triggers a new round of full
page writes. I don't see why you'd want to accelerate a checkpoint just
because there are few writes remaining. Yes, the current code partially
behaves that way, but that's imo more an implementation artifact or even
a bug.
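
The FPW round is easy to demonstrate: the first touch of a page after a
checkpoint logs the whole ~8kB page, later touches only log the small
record. Rough sketch ('some_table' is a made-up name, psql again):

  CHECKPOINT;
  SELECT pg_current_xlog_insert_location() AS lsn0 \gset
  -- first touch after the checkpoint includes a full page image
  UPDATE some_table SET x = x + 1 WHERE id = 1;
  SELECT pg_xlog_location_diff(pg_current_xlog_insert_location(),
                               :'lsn0') AS wal_bytes;
  -- ~8kB+ here; rerun the UPDATE without the CHECKPOINT and it's
  -- only a few dozen bytes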

> Some systems offer a recovery_time_objective setting that is used to
> control how frequently checkpoints occur. That might be a more usable
> approach.

While desirable, I have no idea how to realistically calculate that :(.
It's also a much bigger change than just adjusting a pointlessly low GUC
limit.
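
The naive arithmetic is simple enough - WAL accumulated since the last
checkpoint divided by the replay rate - it's the replay rate that's the
unpredictable part, being heavily workload- and hardware-dependent.
With entirely made-up numbers:

  -- assumed: 300MB/s WAL generation, 30min checkpoint_timeout,
  -- 100MB/s replay speed => worst case ~90min to recover
  SELECT 300 * (30 * 60) / 100 / 60.0 AS est_replay_minutes;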

Regards,

Andres
