From: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com, alvherre(at)alvh(dot)no-ip(dot)org, andres(at)anarazel(dot)de, robertmhaas(at)gmail(dot)com, michael(dot)paquier(at)gmail(dot)com, david(at)pgmasters(dot)net, Jim(dot)Nasby(at)bluetreble(dot)com, craig(at)2ndquadrant(dot)com, tgl(at)sss(dot)pgh(dot)pa(dot)us
Subject: Re: Protect syscache from bloating with negative cache entries
Hello. I rebased this patchset.
At Thu, 15 Mar 2018 14:12:46 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp> wrote in <20180315(dot)141246(dot)130742928(dot)horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
> At Mon, 12 Mar 2018 17:34:08 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp> wrote in <20180312(dot)173408(dot)162882093(dot)horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
> > > > In short, it's not really apparent to me that negative syscache entries
> > > > are the major problem of this kind. I'm afraid that you're drawing very
> > > > large conclusions from a specific workload. Maybe we could fix that
> > > > workload some other way.
> > >
> > > The current patch doesn't consider whether an entry is negative
> > > or positive(?). It just cleans up all entries based on time.
> > >
> > > If relations have to have the same characteristics as syscaches, it
> > > might be better to build this on the catcache mechanism, instead of
> > > adding the same pruning mechanism to dynahash..
This means unifying catcache and dynahash. It doesn't seem like a
win-win consolidation. In addition, relcache entries link palloc'ed
memory, which needs additional treatment.
Alternatively, we could abstract the pruning mechanism so that it is
applicable to both machineries, specifically by unifying
CatCacheCleanupOldEntries in 0001 and prune_entries in 0002. Or we
could refactor dynahash and rebuild catcache on top of dynahash.
> > For the moment, I added such a feature to dynahash and let only
> > relcache use it in this patch. Hash elements have a different shape
> > in a "prunable" hash, and pruning is performed in a similar way,
> > sharing the settings with syscache. This seems to be working fine.
> I gave some consideration to plancache. Its most distinctive
> characteristic compared with catcache and relcache is that an entry is
> not voluntarily removable, since CachedPlanSource, the root struct
> of a plan cache, holds some indispensable information. As for
> prepared queries, even if we store that information in
> another location, for example in the "Prepared Queries" hash, we are
> merely moving a big chunk of data to another place.
> Looking into CachedPlanSource, the generic plan is a part that is
> safely removable, since it is rebuilt as necessary. Keeping "old"
> plancache entries without their generic plans can reduce memory usage.
> For testing purposes, I made 50000 prepared statements like
> "select sum(c) from p where e < $" on 100 partitions.
> With the feature disabled (0004 patch), the VSZ of the backend
> exceeds 3GB (and is still increasing at the moment), while it
> stops increasing at about 997MB with min_cached_plans = 1000 and
> plancache_prune_min_age = '10s'.
> # 10s is apparently too short for actual use, of course.
> The saving is expected to be a significant amount if the plans are
> large enough, but I'm still not sure it is worth doing, or is the
> right approach.
> The attached is the patch set including this plancache stuff.
> 0001- catcache time-based expiration (The origin of this thread)
> 0002- introduces dynahash pruning feature
> 0003- implement relcache pruning using 0002
> 0004- (perhaps) independent of the three above. PoC of
> plancache pruning. Details are shown above.
I found versions up to v3 in this thread, so I named this version 4.
NTT Open Source Software Center