RE: Protect syscache from bloating with negative cache entries

From: "Ideriha, Takeshi" <ideriha(dot)takeshi(at)jp(dot)fujitsu(dot)com>
To: 'Robert Haas' <robertmhaas(at)gmail(dot)com>, "Tsunakawa, Takayuki" <tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, "andres(at)anarazel(dot)de" <andres(at)anarazel(dot)de>, "alvherre(at)2ndquadrant(dot)com" <alvherre(at)2ndquadrant(dot)com>, "tomas(dot)vondra(at)2ndquadrant(dot)com" <tomas(dot)vondra(at)2ndquadrant(dot)com>, "bruce(at)momjian(dot)us" <bruce(at)momjian(dot)us>, "tgl(at)sss(dot)pgh(dot)pa(dot)us" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, "michael(dot)paquier(at)gmail(dot)com" <michael(dot)paquier(at)gmail(dot)com>, "david(at)pgmasters(dot)net" <david(at)pgmasters(dot)net>, "craig(at)2ndquadrant(dot)com" <craig(at)2ndquadrant(dot)com>
Subject: RE: Protect syscache from bloating with negative cache entries
Date: 2019-02-27 08:16:36
Message-ID: 4E72940DA2BF16479384A86D54D0988A6F442188@G01JPEXMBKW04
Lists: pgsql-hackers

>From: Robert Haas [mailto:robertmhaas(at)gmail(dot)com]
>
>On Mon, Feb 25, 2019 at 3:50 AM Tsunakawa, Takayuki
><tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com> wrote:
>> How can I make sure that this context won't exceed, say, 10 MB to avoid OOM?
>
>As Tom has said before and will probably say again, I don't think you actually want that.
>We know that PostgreSQL gets roughly 100x slower with the system caches disabled
>- try running with CLOBBER_CACHE_ALWAYS. If you are accessing the same system
>cache entries repeatedly in a loop - which is not at all an unlikely scenario, just run the
>same query or sequence of queries in a loop - and if the number of entries exceeds
>10MB even, perhaps especially, by just a tiny bit, you are going to see a massive
>performance hit.
>Maybe it won't be 100x because some more-commonly-used entries will always stay
>cached, but it's going to be really big, I think.
>
>Now you could say - well it's still better than running out of memory.
>However, memory usage is quite unpredictable. It depends on how many backends
>are active and how many copies of work_mem and/or maintenance_work_mem are in
>use, among other things. I don't think we can say that just imposing a limit on the
>size of the system caches is going to be enough to reliably prevent an out of memory
>condition unless the other use of memory on the machine happens to be extremely
>stable.

>So I think what's going to happen if you try to impose a hard-limit on the size of the
>system cache is that you will cause some workloads to slow down by 3x or more
>without actually preventing out of memory conditions. What you need to do is accept
>that system caches need to grow as big as they need to grow, and if that causes you
>to run out of memory, either buy more memory or reduce the number of concurrent
>sessions you allow. It would be fine to instead limit the cache memory if those cache
>entries only had a mild effect on performance, but I don't think that's the case.
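
To make sure I follow the thrashing point, here is a toy, standalone LRU
simulation (plain C for illustration only, nothing to do with the actual
catcache code).  With a working set that fits the cap the hit rate
converges to ~1.0, but make the working set just one entry larger and
every single lookup misses:

/*
 * Toy model only: a fixed-capacity cache with LRU eviction, accessed in a
 * round-robin pattern, to show how the hit rate collapses as soon as the
 * working set exceeds the capacity by even one entry.
 */
#include <stdio.h>
#include <string.h>

#define CAP 4                       /* hypothetical cache capacity */

static int  cache[CAP];             /* cached keys, most recent first */
static int  ncached = 0;

/* Return 1 on hit, 0 on miss; keep entries ordered by recency of use. */
static int
lookup(int key)
{
    int     i;

    for (i = 0; i < ncached; i++)
    {
        if (cache[i] == key)
        {
            memmove(&cache[1], &cache[0], i * sizeof(int));
            cache[0] = key;
            return 1;
        }
    }

    /* miss: grow if there is room, otherwise the last (LRU) entry drops */
    if (ncached < CAP)
        ncached++;
    memmove(&cache[1], &cache[0], (ncached - 1) * sizeof(int));
    cache[0] = key;
    return 0;
}

static double
hit_rate(int working_set, int iterations)
{
    int     hits = 0;
    int     i;

    ncached = 0;
    for (i = 0; i < iterations; i++)
        hits += lookup(i % working_set);
    return (double) hits / iterations;
}

int
main(void)
{
    printf("working set %d, cap %d: hit rate %.3f\n",
           CAP, CAP, hit_rate(CAP, 100000));
    printf("working set %d, cap %d: hit rate %.3f\n",
           CAP + 1, CAP, hit_rate(CAP + 1, 100000));
    return 0;
}

So I can see that the cliff is real once the working set exceeds the
limit even slightly.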

I'm afraid I may be quibbling here, but what about users who understand
the performance drop and still don't want to add memory or reduce the
number of concurrent sessions?
PostgreSQL already has parameters that most users leave at the default
and never think about, while a small number of users do want to tune them.
In this case, as you said, a hard-limit parameter can degrade performance
significantly, so how about adding a detailed caution to the documentation,
as is done for the planner cost parameters?
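
For example (the parameter name and default below are purely illustrative,
not something from the patch), the documentation and sample configuration
for such a hard-limit setting could carry an explicit warning in the same
spirit as the planner cost parameters:

# postgresql.conf -- hypothetical sketch, name and default made up here
#
# Hard upper bound on per-backend catalog cache memory.  0 disables the
# limit.  Caution: if the set of catalog entries a session touches
# repeatedly exceeds this value even slightly, the resulting cache misses
# can slow queries down dramatically; change it from the default only
# after measuring the workload.
#syscache_max_size = 0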

Regards,
Takeshi Ideriha
