From:       Andres Freund <andres(at)anarazel(dot)de>
To:         Robert Haas <robertmhaas(at)gmail(dot)com>
Cc:         Noah Misch <noah(at)leadboat(dot)com>, Ashutosh Sharma <ashu(dot)coek88(at)gmail(dot)com>, Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Subject:    Re: Perf Benchmarking and regression.
Date:       2016-06-03 17:43:14
Message-ID: 20160603174314.7kn4la5rf25stadg@alap3.anarazel.de
Lists:      pgsql-hackers
On 2016-06-03 13:33:31 -0400, Robert Haas wrote:
> On Fri, Jun 3, 2016 at 12:39 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> > On 2016-06-03 12:31:58 -0400, Robert Haas wrote:
> >> Now, what varies IME is how much total RAM there is in the system and
> >> how frequently they write that data, as opposed to reading it. If
> >> they are on a tightly RAM-constrained system, then this situation
> >> won't arise because they won't be under the dirty background limit.
> >> And if they aren't writing that much data then they'll be fine too.
> >> But even putting all of that together I really don't see why you're
> >> trying to suggest that this is some bizarre set of circumstances that
> >> should only rarely happen in the real world.
> >
> > I'm saying that if that happens constantly, you're better off adjusting
> > shared_buffers, because you're likely already suffering from latency
> > spikes and other issues. Optimizing for massive random write throughput
> > in a system that's not configured appropriately, at the cost of making
> > well-configured systems suffer, doesn't seem like a good tradeoff to me.
>
> I really don't get it. There's nothing in any set of guidelines for
> setting shared_buffers that I've ever seen which would cause people to
> avoid this scenario.
The "roughly 1/4 of memory" guideline already mostly avoids it? It's
hard to constantly re-dirty a written-back page within 30s, before the
10% (background) / 20% (foreground) limits apply, if your shared buffers
are larger than those 10%/20% limits (which, note, apply to *available*,
not total, memory).
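To make the arithmetic concrete, here is a small sketch (the machine sizes, the `kernel_dirty_limits` helper, and the "available memory" figure are all illustrative assumptions, not numbers from this thread) of how the default Linux `vm.dirty_background_ratio` / `vm.dirty_ratio` thresholds compare against a 1/4-of-RAM shared_buffers:

```python
# Hypothetical illustration: with shared_buffers sized at roughly 1/4 of
# RAM, the buffer pool is larger than the kernel's dirty-writeback limits,
# so hot pages tend to be re-dirtied inside shared_buffers rather than
# cycling through the kernel's dirty window.

def kernel_dirty_limits(available_bytes, background_ratio=10, foreground_ratio=20):
    """Dirty thresholds as Linux derives them from *available* (not total)
    memory, using the default dirty_background_ratio / dirty_ratio."""
    return (available_bytes * background_ratio // 100,
            available_bytes * foreground_ratio // 100)

GiB = 1024 ** 3
total_ram = 64 * GiB
available = 48 * GiB               # assumption: memory not pinned elsewhere
shared_buffers = total_ram // 4    # the "roughly 1/4" guideline: 16 GiB

bg_limit, fg_limit = kernel_dirty_limits(available)
print(f"shared_buffers:   {shared_buffers / GiB:.1f} GiB")
print(f"background limit: {bg_limit / GiB:.1f} GiB")
print(f"foreground limit: {fg_limit / GiB:.1f} GiB")
print("shared_buffers exceeds both limits:", shared_buffers > fg_limit)
```

Under these assumed sizes, shared_buffers (16 GiB) exceeds both the background (~4.8 GiB) and foreground (~9.6 GiB) limits, which is the situation the guideline produces.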
> You're the first person I've ever heard describe this as a
> misconfiguration.
Huh? People tried addressing this problem for *years* with bigger /
smaller shared buffers, but couldn't easily fix it.
I'm inclined to give up and disable backend_flush_after (but not the
rest), because it's new and by far the "riskiest". But I do think that's
a disservice to the majority of our users.
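For clarity, disabling only the backend setting while leaving the others alone would amount to a postgresql.conf fragment like the following (the specific byte values shown are illustrative defaults, not values proposed in this thread):

```
# Sketch: keep controlled writeback for checkpoints and the bgwriter,
# but turn off the per-backend variant discussed here (0 disables it).
checkpoint_flush_after = 256kB
bgwriter_flush_after   = 512kB
backend_flush_after    = 0
```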
Greetings,
Andres Freund