From: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
To: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Load distributed checkpoint V4
Date: 2007-04-23 10:02:26
Message-ID: 462C8432.5020101@enterprisedb.com
Lists: pgsql-hackers pgsql-patches

ITAGAKI Takahiro wrote:
> Heikki Linnakangas <hlinnaka(at)iki(dot)fi> wrote:
>> We might want to call GetCheckpointProgress something
>> else, though. It doesn't return the amount of progress made, but rather
>> the amount of progress we should've made up to that point or we're in
>> danger of not completing the checkpoint in time.
>
> GetCheckpointProgress might be a bad name; it returns the progress we should
> have made by that point, not the progress made so far. How about GetCheckpointTargetProgress?

Better. A bit long though. Not that I have any better suggestions ;-)
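To make the semantics concrete, here is an illustrative sketch (not the actual patch code) of the quantity being named: the fraction of the checkpoint that *should* be complete by now, derived from elapsed time and consumed WAL segments. All names and parameters below are hypothetical.

```python
def checkpoint_target_progress(elapsed_s, interval_s, segs_used, segs_max):
    """Return the progress (0.0-1.0) we should have made by this point.

    Whichever deadline is closer -- the time-based one or the
    segment-based one -- dominates, i.e. we take the larger fraction,
    clamped to 1.0.
    """
    time_progress = elapsed_s / interval_s   # fraction of checkpoint_timeout used
    seg_progress = segs_used / segs_max      # fraction of checkpoint_segments used
    return min(1.0, max(time_progress, seg_progress))
```

If the actual progress (buffers written so far divided by buffers to write) lags behind this target, the writer would stop sleeping and hurry; otherwise it can keep throttling itself.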

>> In the sync phase, we sleep between each fsync until enough
>> time/segments have passed, assuming that the time to fsync is
>> proportional to the file length. I'm not sure that's a very good
>> assumption. We might have one huge file with only very little changed
>> data, for example a logging table that is just occasionally appended to.
>> If we begin by fsyncing that, it'll take a very short time to finish,
>> and we'll then sleep for a long time. If we then have another large file
>> to fsync, but that one has all pages dirty, we risk running out of time
>> because of the unnecessarily long sleep. The segmentation of relations
>> limits the risk of that, though, by limiting the max. file size, and I
>> don't really have any better suggestions.
>
> It is difficult to estimate fsync costs. We need additional statistics to
> do it. For example, if we recorded the number of write() calls for each
> segment, we could use that value as the number of dirty pages in the
> segment. We don't have per-file write statistics now, but if we had that
> information, we could use it to control checkpoints more cleverly.

It's probably not worth it to be too clever with that. Even if we
recorded the number of writes we made, we still wouldn't know how many
of them haven't been flushed to disk yet.

I guess we're fine if we just avoid excessive waiting, per the
discussion in the next paragraph, and use a reasonable safety margin in
the default values.

>> Should we try doing something similar for the sync phase? If there's
>> only 2 small files to fsync, there's no point sleeping for 5 minutes
>> between them just to use up the checkpoint_sync_percent budget.
>
> Hmmm... if we add a new parameter like kernel_write_throughput [kB/s] and
> clamp the maximum sleep to size-of-segment / kernel_write_throughput (*1),
> we can avoid unnecessary sleeping in the fsync phase. Do we want such a
> new parameter? I think we have more than enough GUC variables even now.

How about using the same parameter that controls the minimum write speed
of the write-phase (the patch used bgwriter_all_maxpages, but I
suggested renaming it)?
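The clamping idea could be sketched like this: cap each sync-phase sleep at the time the kernel would plausibly need to write the segment back at some assumed minimum throughput, so that two small files never cause minutes of idle waiting. Here min_write_rate_bps stands in for whatever the renamed minimum-write-speed parameter ends up being; all names are hypothetical.

```python
def clamped_sleep(budget_sleep_s, segment_bytes, min_write_rate_bps):
    """Never sleep longer than segment_size / min_write_rate.

    budget_sleep_s is the sleep the time/segment budget would allow;
    the cap keeps a nearly-empty sync phase from wasting the whole
    checkpoint_sync_percent budget on sleeping.
    """
    cap = segment_bytes / min_write_rate_bps
    return min(budget_sleep_s, cap)
```

So with a 1 MB segment and a 1 MB/s assumed minimum rate, even a 5-minute budgeted sleep collapses to one second.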

> I don't want to add new parameters any more if possible...

Agreed.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
