Re: Protect syscache from bloating with negative cache entries

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: Andres Freund <andres(at)anarazel(dot)de>, "Tsunakawa, Takayuki" <tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Bruce Momjian <bruce(at)momjian(dot)us>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Ideriha, Takeshi" <ideriha(dot)takeshi(at)jp(dot)fujitsu(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, David Steele <david(at)pgmasters(dot)net>, Craig Ringer <craig(at)2ndquadrant(dot)com>
Subject: Re: Protect syscache from bloating with negative cache entries
Date: 2019-02-20 18:09:08
Message-ID: CA+TgmoZXw+SwK_9Tp=wLqZDstW_X+Ant=rd7K+q4zmYONPuL=w@mail.gmail.com
Lists: pgsql-hackers

On Tue, Feb 19, 2019 at 11:15 PM Kyotaro HORIGUCHI
<horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp> wrote:
> Difference from v15:
>
> Removed AllocSet accounting stuff. We use approximate memory
> size for catcache.
>
> Removed prune-by-number(or size) stuff.
>
> Addressing comments from Tsunakawa-san and Ideriha-san.
>
> Separated catcache monitoring feature. (Removed from this set)
> (But it is crucial to check this feature...)
>
> Is this small enough?

The commit message in 0002 says 'This also can put a hard limit on the
number of catcache entries.' but neither of the GUCs that you've
documented have that effect. Is that a leftover from a previous
version?

I'd like to see some evidence that catalog_cache_memory_target has any
value, vs. just always setting it to zero. I came up with the
following somewhat artificial example that shows that it might have
value.

rhaas=# create table foo (a int primary key, b text) partition by hash (a);
[rhaas pgsql]$ perl -e 'for (0..9999) { print "CREATE TABLE foo$_ PARTITION OF foo FOR VALUES WITH (MODULUS 10000, REMAINDER $_);\n"; }' | psql

First execution of 'select * from foo' in a brand new session takes
about 1.9 seconds; subsequent executions take about 0.7 seconds. So,
if catalog_cache_memory_target were set to a high enough value to
allow all of that stuff to remain in cache, we could possibly save
about 1.2 seconds coming off the blocks after a long idle period.
That might be enough to justify having the parameter. But I'm not
quite sure how high the value would need to be set to actually get the
benefit in a case like that, or what happens if you set it to a value
that's not quite high enough. I think it might be good to play around
some more with cases like this, just to get a feeling for how much
time you can save in exchange for how much memory.
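For anyone trying to build intuition for the interaction of the two GUCs, here is a simplified sketch in Python of the policy as I understand it from the patch (this is an illustrative model, not the actual C implementation in catcache.c): entries idle longer than catalog_cache_prune_min_age become prune candidates, but pruning stops once the cache's total size drops to catalog_cache_memory_target.

```python
# Toy model of age-based catcache pruning with a memory floor.
# Names mirror the proposed GUCs; sizes/ages are arbitrary units.
from collections import OrderedDict

class CatCacheModel:
    def __init__(self, memory_target, prune_min_age):
        self.memory_target = memory_target  # keep at least this much cached
        self.prune_min_age = prune_min_age  # idle time before prunable
        self.entries = OrderedDict()        # key -> (size, last_access)
        self.total_size = 0

    def access(self, key, size, now):
        # Move-to-end on access so the OrderedDict stays in LRU order.
        if key in self.entries:
            old_size, _ = self.entries.pop(key)
            self.total_size -= old_size
        self.entries[key] = (size, now)
        self.total_size += size
        self._prune(now)

    def _prune(self, now):
        for key in list(self.entries):      # oldest entries first
            if self.total_size <= self.memory_target:
                break  # under the target: keep even stale entries
            size, last = self.entries[key]
            if now - last < self.prune_min_age:
                break  # everything newer is also too young to prune
            del self.entries[key]
            self.total_size -= size

cache = CatCacheModel(memory_target=100, prune_min_age=60)
for i in range(5):
    cache.access(f"rel{i}", 50, now=0)  # 250 units cached, all fresh
cache.access("hot", 50, now=120)        # old entries now exceed min age
# Pruning evicts rel0..rel3 but stops at the memory target,
# so rel4 survives despite being stale.
print(len(cache.entries), cache.total_size)  # → 2 100
```

With memory_target=0 this degenerates to pure age-based eviction, which is the "just always setting it to zero" behavior; a nonzero target is what lets the idle-but-warm partition metadata in the example above survive a long pause.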

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

