From: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Bgwriter strategies
Date: 2007-07-06 15:47:19
Message-ID: 468E6407.5030809@enterprisedb.com
Lists: pgsql-hackers
Tom Lane wrote:
> Heikki Linnakangas <heikki(at)enterprisedb(dot)com> writes:
>> Tom Lane wrote:
>>>   buffers_to_clean = Max(buffers_used * 1.1,
>>>                          buffers_to_clean * 0.999);
>
>> That would be overly aggressive on a workload that's steady on average,
>> but consists of small bursts. Like this: 0 0 0 0 100 0 0 0 0 100 0 0 0 0
>> 100. You'd end up writing ~100 pages on every bgwriter round, but you
>> only need an average of 20 pages per round.
>
> No, you wouldn't be *writing* that many, you'd only be keeping that many
> *clean*; which only costs more work if any of them get re-dirtied
> between writing and use. Which is a fairly small probability if we're
> talking about a small difference in the number of buffers to keep clean.
> So I think the average number of writes is hardly different, it's just
> that the backends are far less likely to have to do any of them.

Ah, ok, I misunderstood what you were proposing. Yes, that seems like a
good algorithm then.
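
For what it's worth, here's a tiny standalone simulation of that scheme
against the bursty pattern above. The names and bookkeeping are just
illustrative, not actual bgwriter code, and it assumes, as you say, that
no buffers get re-dirtied between cleaning and use:

/*
 * Simulate buffers_to_clean = Max(buffers_used * 1.1,
 *                                 buffers_to_clean * 0.999)
 * against a demand pattern of 0 0 0 0 100 repeated.
 */
#include <stdio.h>

#define Max(a, b) ((a) > (b) ? (a) : (b))

int
main(void)
{
    int     demand[] = {0, 0, 0, 0, 100};   /* buffers_used per round */
    double  buffers_to_clean = 0.0;
    int     clean_pool = 0;     /* buffers cleaned ahead of demand */
    long    total_writes = 0;
    int     round;

    for (round = 0; round < 100; round++)
    {
        int     buffers_used = demand[round % 5];
        int     writes;

        /* backends consume clean buffers; assume none get re-dirtied */
        clean_pool -= buffers_used;
        if (clean_pool < 0)
            clean_pool = 0;     /* shortfall was written by the backends */

        /* update the moving target and top the pool back up to it */
        buffers_to_clean = Max(buffers_used * 1.1,
                               buffers_to_clean * 0.999);
        writes = (int) buffers_to_clean - clean_pool;
        if (writes < 0)
            writes = 0;
        clean_pool += writes;
        total_writes += writes;
    }

    printf("final target %.1f, avg bgwriter writes/round %.1f\n",
           buffers_to_clean, (double) total_writes / 100);
    return 0;
}

Over 100 rounds that ends with a target of roughly 110 buffers but only
about 20 bgwriter writes per round, i.e. the clean pool stays sized for
the bursts while the write rate tracks the average demand.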
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com