From: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com, alvherre(at)alvh(dot)no-ip(dot)org, andres(at)anarazel(dot)de, robertmhaas(at)gmail(dot)com, michael(dot)paquier(at)gmail(dot)com, david(at)pgmasters(dot)net, Jim(dot)Nasby(at)bluetreble(dot)com, craig(at)2ndquadrant(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Protect syscache from bloating with negative cache entries
At Wed, 07 Mar 2018 23:12:29 -0500, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote in <352(dot)1520482349(at)sss(dot)pgh(dot)pa(dot)us>
> Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp> writes:
> > At Thu, 8 Mar 2018 00:28:04 +0000, "Tsunakawa, Takayuki" <tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com> wrote in <0A3221C70F24FB45833433255569204D1F8FF0D9(at)G01JPEXMBYT05>
> >> Yes. We are now facing the problem of too much memory use by PostgreSQL, where about some applications randomly access about 200,000 tables. It is estimated based on a small experiment that each backend will use several to ten GBs of local memory for CacheMemoryContext. The total memory use will become over 1 TB when the expected maximum connections are used.
> >> I haven't looked at this patch, but does it evict all kinds of entries in CacheMemoryContext, ie. relcache, plancache, etc?
> > This works only for syscaches, which could bloat with entries for
> > nonexistent objects.
> > Plan cache is an utterly different thing. It is abandoned at the
> > end of a transaction or the like.
> When I was at Salesforce, we had *substantial* problems with plancache
> bloat. The driving factor there was plans associated with plpgsql
> functions, which Salesforce had a huge number of. In an environment
> like that, there would be substantial value in being able to prune
> both the plancache and plpgsql's function cache. (Note that neither
> of those things are "abandoned at the end of a transaction".)
Mmm. Right. Thanks for pointing that out. Anyway, plan cache seems
to be a different thing.
> > Relcache is not based on catcache and out of the scope of this
> > patch since it doesn't get bloat with nonexistent entries. It
> > uses dynahash and we could introduce a similar feature to it if
> > we are willing to cap relcache size.
> I think if the case of concern is an application with 200,000 tables,
> it's just nonsense to claim that relcache size isn't an issue.
> In short, it's not really apparent to me that negative syscache entries
> are the major problem of this kind. I'm afraid that you're drawing very
> large conclusions from a specific workload. Maybe we could fix that
> workload some other way.
The current patch doesn't consider whether an entry is negative
or positive(?). It just cleans up all entries based on time.
If relcache has to have the same characteristics as syscaches, it
might be better to base it on the catcache mechanism, instead of
adding the same pruning mechanism to dynahash.
NTT Open Source Software Center