From: ncm(at)zembu(dot)com (Nathan Myers)
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: CommitDelay performance improvement
Date: 2001-02-25 01:21:38
Message-ID: 20010224172138.A20484@store.zembu.com
Lists: pgsql-hackers
On Sat, Feb 24, 2001 at 01:07:17AM -0500, Tom Lane wrote:
> ncm(at)zembu(dot)com (Nathan Myers) writes:
> > I see, I had it backwards: N=0 corresponds to "always delay", and
> > N=infinity (~0) is "never delay", or what you call zero delay. N=1 is
> > not interesting. N=M/2 or N=sqrt(M) or N=log(M) might be interesting,
> > where M is the number of backends, or the number of backends with begun
> > transactions, or something. N=10 would be conservative (and maybe
> > pointless) just because it would hardly ever trigger a delay.
>
> Why is N=1 not interesting? That requires at least one other backend
> to be in a transaction before you'll delay. That would seem to be
> the minimum useful value --- N=0 (always delay) seems clearly to be
> too stupid to be useful.
N=1 seems arbitrarily aggressive. It assumes that any open transaction
will commit within a few milliseconds; otherwise the delay is wasted. On a
fairly busy system, it seems to me to impose a strict upper limit on the
transaction rate for any client, regardless of actual system I/O load.
(N=0 would impose that strict upper limit even for a single client.)
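To make the semantics concrete, here is a minimal sketch of the threshold
rule being debated; the function and parameter names are illustrative,
not taken from the actual backend code:

```c
#include <stdbool.h>

/*
 * Hypothetical sketch of the delay rule under discussion: sleep before
 * flushing the commit record only if at least n_threshold *other*
 * backends currently have open transactions.  N=0 delays unconditionally;
 * a larger N makes the delay progressively rarer.
 */
static bool
should_delay_commit(int active_other_backends, int n_threshold)
{
    return active_other_backends >= n_threshold;
}
```

Under this rule N=1 delays whenever even one other transaction is open,
which is the "arbitrarily aggressive" case above: it bets that the other
transaction will commit soon enough to share the fsync.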
Delaying isn't free, because it means that the client can't turn around
and do even a cheap query for a while. In a sense, when you delay you are
charging the committer a tax to try to improve overall throughput. If the
delay lets you reduce I/O churn enough to increase the total bandwidth,
then it was worthwhile; if not, you just cut system performance, and
responsiveness to each client, for nothing.
The above suggests that maybe N should depend on recent disk I/O activity,
so you get a larger N (and thus less likely delay and more certain payoff)
for a more lightly-loaded system. On a system that has maxed its I/O
bandwidth, clients will suffer delays anyhow, so they might as well
suffer controlled delays that result in better total throughput. On a
lightly-loaded system there's no need, or payoff, for such throttling.
Can we measure disk system load by averaging the times taken for fsyncs?
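One cheap way to keep such a measurement, sketched here purely as an
illustration (the struct, weight, and units are invented for the
example), is an exponential moving average of observed fsync durations:

```c
/*
 * Illustrative sketch: smooth recent fsync() durations into a single
 * number that could serve as a proxy for disk-system load, from which
 * an adaptive N might be derived.  The smoothing weight is arbitrary.
 */
typedef struct
{
    double avg_us;              /* smoothed fsync time, in microseconds */
} FsyncLoad;

static void
fsync_load_update(FsyncLoad *load, double sample_us)
{
    const double alpha = 0.2;   /* weight given to the newest sample */

    if (load->avg_us == 0.0)
        load->avg_us = sample_us;       /* first observation seeds the average */
    else
        load->avg_us = alpha * sample_us + (1.0 - alpha) * load->avg_us;
}
```

Each commit would time its fsync and feed the duration in; a rising
average would shrink N (delay more readily), a falling one would grow it.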
Nathan Myers
ncm(at)zembu(dot)com