Re: Redesigning checkpoint_segments

From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Andres Freund <andres(at)2ndquadrant(dot)com>
Cc: Peter Eisentraut <peter_e(at)gmx(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Redesigning checkpoint_segments
Date: 2015-01-05 17:59:34
Message-ID: 54AAD106.4000800@agliodbs.com
Lists: pgsql-hackers

On 01/05/2015 09:06 AM, Heikki Linnakangas wrote:
> I wasn't clear on my opinion here. I think I understood what Josh meant,
> but I don't think we should do it. Seems like unnecessary nannying of
> the DBA. Let's just mention in the manual that if you set
> wal_keep_segments higher than [insert formula here], you will routinely
> have more WAL in pg_xlog than what checkpoint_wal_size is set to.
>
>> That seems an unrealistic goal. I've seen setups that have set
>> checkpoint_segments intentionally, and with good reasoning, north of
>> 50k.
>
> So? I don't see how that's relevant.
>
>> Neither wal_keep_segments, nor failing archive_commands, nor replication
>> slots should have any influence on checkpoint pacing.
>
> Agreed.

Oh, right, replication slots can also cause the WAL to grow in size. And
we've already had the discussion about hard limits, which may be a future
feature rather than part of this patch.

Can we figure out a reasonable formula? My thinking is to cap
wal_keep_segments at 50% of max_wal_size, because we need at least 50% of
the WAL to do a reasonable spread checkpoint. If max_wal_size is 1GB and
wal_keep_segments amounts to 1.5GB, what would happen? What if it amounts
to 0.9GB?
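
Just to make the arithmetic concrete, here is a rough sketch (not
PostgreSQL code; the function and macro names are made up, it assumes the
default 16MB segment size, and it treats the 50% figure as a hard cap) of
what such a check might look like:

    /* Hypothetical sketch: warn when wal_keep_segments reserves more than
     * half of max_wal_size, leaving too little headroom for a spread
     * checkpoint.  Assumes 16MB WAL segments. */
    #include <stdio.h>
    #include <stdbool.h>

    #define WAL_SEGMENT_SIZE_MB 16

    static bool
    keep_segments_too_large(int wal_keep_segments, int max_wal_size_mb)
    {
        int keep_mb = wal_keep_segments * WAL_SEGMENT_SIZE_MB;

        return keep_mb > max_wal_size_mb / 2;   /* the proposed 50% cap */
    }

    int
    main(void)
    {
        /* max_wal_size = 1GB (1024MB); 96 segments ~ 1.5GB, 58 ~ 0.9GB */
        printf("1.5GB kept: %s\n",
               keep_segments_too_large(96, 1024) ? "exceeds 50% of max_wal_size" : "ok");
        printf("0.9GB kept: %s\n",
               keep_segments_too_large(58, 1024) ? "exceeds 50% of max_wal_size" : "ok");
        return 0;
    }

By that rule, with max_wal_size at 1GB both 1.5GB and 0.9GB of kept
segments would already be over the 512MB cap.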

I need to create a fake benchmark for this ...

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
