Re: Checkpointer on hot standby runs without looking checkpoint_segments

From: Florian Pflug <fgp(at)phlo(dot)org>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org, heikki(dot)linnakangas(at)enterprisedb(dot)com, masao(dot)fujii(at)gmail(dot)com
Subject: Re: Checkpointer on hot standby runs without looking checkpoint_segments
Date: 2012-06-08 17:01:12
Message-ID: 91EF619D-8C38-4ABB-8F29-33FEEF600DEE@phlo.org
Lists: pgsql-hackers

On Jun 8, 2012, at 15:47, Robert Haas wrote:
> On Fri, Jun 8, 2012 at 5:02 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>> On 8 June 2012 09:14, Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp> wrote:
>>
>>> The requirement for this patch is as follows.
>>>
>>> - What I want is similar behavior between the master and the
>>> (hot) standby with respect to checkpoint progression.
>>> Specifically, checkpoints during streaming replication should
>>> proceed at the pace governed by checkpoint_segments. This patch
>>> keeps an unexpectedly large number of WAL segments from
>>> accumulating on the standby side. (Plus, it increases the chance
>>> of skipping the recovery-end checkpoint, per my other patch.)
>>
>> Since we want wal_keep_segments number of WAL files on master (and
>> because of cascading, on standby also), I don't see any purpose to
>> triggering more frequent checkpoints just so we can hit a magic number
>> that is most often set wrong.
>
> This is a good point. Right now, if you set checkpoint_segments to a
> large value, we retain lots of old WAL segments even when the system
> is idle (cf. XLOGfileslop). I think we could be smarter about that.
> I'm not sure what the exact algorithm should be, but right now users
> are forced to choose between setting checkpoint_segments very large to
> achieve optimum write performance and setting it small to conserve
> disk space.
> What would be much better, IMHO, is if the number of retained
> segments could ratchet down when the system is idle, eventually
> reaching a state where we keep only one segment beyond the one
> currently in use.
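
(For reference, the retention behaviour Robert describes comes from
XLOGfileslop in xlog.c: when cleaning up after a checkpoint, an old WAL
segment is recycled as a future segment rather than deleted, so long as
there aren't already XLOGfileslop future segments. Below is a simplified
sketch only; the names CleanupOldSegment, RecycleAsFutureSegment and
segments_ahead are made up for illustration, and the real decision lives
in RemoveOldXlogFiles() and InstallXLogFileSegment():

    /* xlog.c: cap on preallocated/recycled future WAL segments */
    #define XLOGfileslop    (2 * CheckPointSegments + 1)

    static void
    CleanupOldSegment(const char *path, int segments_ahead)
    {
        if (segments_ahead < XLOGfileslop)
        {
            /* keep the file: rename it into place as a future segment */
            RecycleAsFutureSegment(path);
        }
        else
        {
            /* enough future segments already; give the space back */
            unlink(path);
        }
    }

So with checkpoint_segments = 100, up to 2*100+1 = 201 recycled 16MB
segments, a bit over 3GB, can stick around even when the system is
idle.)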

I'm a bit sceptical about this. It seems to me that you wouldn't actually
be able to do anything useful with the conserved space, since postgres
could reclaim it at any time. At which point it'd better be available,
or your whole cluster comes to a screeching halt...

best regards,
Florian Pflug
