From: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
To: Greg Smith <greg(at)2ndquadrant(dot)com>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Spread checkpoint sync
Date: 2010-12-02 06:11:21
Message-ID: 4CF73889.7090203@enterprisedb.com
Lists: pgsql-hackers
On 01.12.2010 23:30, Greg Smith wrote:
> Heikki Linnakangas wrote:
>> Do you have any idea how to autotune the delay between fsyncs?
>
> I'm thinking to start by counting the number of relations that need them
> at the beginning of the checkpoint. Then use the same basic math that
> drives the spread writes, where you assess whether you're on schedule or
> not based on segment/time progress relative to how many have been sync'd
> out of that total. At a high level I think that idea translates over
> almost directly into the existing write spread code. Was hoping for a
> sanity check from you in particular about whether that seems reasonable
> or not before diving into the coding.
Sounds reasonable to me. fsync()s are a lot less uniform than write()s,
though. If you fsync() a file with one dirty page in it, it's going to
return very quickly, but a 1GB file will take a while. That could be
problematic if you have a thousand small files and a couple of big ones,
as you would want to reserve more time for the big ones. I'm not sure
what to do about it, maybe it's not a problem in practice.
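For illustration only (this is not code from any actual patch, and every name here is hypothetical): the pacing Greg describes, plus a crude weighting for the uneven fsync() costs noted above, might be sketched roughly like this. The on-schedule test mirrors the shape of the math the spread-write code already uses, and the budget function allots each file a share of the sync phase proportional to its dirty pages, so a 1GB file with many dirty pages reserves more time than one with a single dirty page.

```c
#include <stdbool.h>

/*
 * Hypothetical helper: true when the fraction of fsyncs completed so far
 * keeps pace with overall checkpoint progress (a 0.0 - 1.0 fraction
 * derived from elapsed time and/or WAL segment consumption), in the same
 * spirit as the spread-write scheduling math.
 */
static bool
sync_on_schedule(int nsync_done, int nsync_total, double progress)
{
    if (nsync_total <= 0)
        return true;            /* nothing to sync: trivially on schedule */
    return ((double) nsync_done / nsync_total) >= progress;
}

/*
 * Hypothetical weighting: give each file a slice of the sync phase
 * proportional to its count of dirty pages, rather than a uniform
 * per-file delay, so big dirty files get more time reserved.
 */
static double
sync_time_budget(long file_dirty_pages, long total_dirty_pages,
                 double sync_phase_secs)
{
    if (total_dirty_pages <= 0)
        return 0.0;
    return sync_phase_secs *
           ((double) file_dirty_pages / total_dirty_pages);
}
```

Whether tracking per-file dirty-page counts is worth the bookkeeping is exactly the open question: if fsync costs turn out not to matter much in practice, the uniform pacing alone may be enough.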
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com