From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "'Kyotaro HORIGUCHI'" <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: "Ideriha, Takeshi" <ideriha(dot)takeshi(at)jp(dot)fujitsu(dot)com>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, "Tsunakawa, Takayuki" <tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com>, "alvherre(at)alvh(dot)no-ip(dot)org" <alvherre(at)alvh(dot)no-ip(dot)org>, "andres(at)anarazel(dot)de" <andres(at)anarazel(dot)de>, "robertmhaas(at)gmail(dot)com" <robertmhaas(at)gmail(dot)com>, "michael(dot)paquier(at)gmail(dot)com" <michael(dot)paquier(at)gmail(dot)com>, "david(at)pgmasters(dot)net" <david(at)pgmasters(dot)net>, "Jim(dot)Nasby(at)bluetreble(dot)com" <Jim(dot)Nasby(at)bluetreble(dot)com>, "craig(at)2ndquadrant(dot)com" <craig(at)2ndquadrant(dot)com>
Subject: Re: Protect syscache from bloating with negative cache entries
I'm really disappointed by the direction this thread is going in.
The latest patches add an enormous amount of mechanism, and user-visible
complexity, to do something that we learned was a bad idea decades ago.
Putting a limit on the size of the syscaches doesn't accomplish anything
except to add cycles if your cache working set is below the limit, or
make performance fall off a cliff if it's above the limit. I don't think
there's any reason to believe that making it more complicated will avoid
those problems.

What does seem promising is something similar to Horiguchi-san's
original patches all the way back at
That is, identify usage patterns in which we tend to fill the caches with
provably no-longer-useful entries, and improve those particular cases.
Horiguchi-san identified one such case in that message: negative entries
in the STATRELATTINH cache, caused by the planner probing for stats that
aren't there, and then not cleared when the relevant table gets dropped
(since, by definition, they don't match any pg_statistic entry that gets
deleted). We saw another recent report of the same problem at
so I'd been thinking about ways to fix that case in particular. I came
up with a fix that I think is simpler and a bit more efficient than
what Horiguchi-san proposed originally: rather than trying to reverse-
engineer what to do in low-level cache callbacks, let's have the catalog
manipulation code explicitly send out invalidation commands when the
relevant situations arise. In the attached, heap.c's RemoveStatistics
sends out an sinval message commanding deletion of negative STATRELATTINH
entries that match the OID of the table being deleted. We could use the
same infrastructure to clean out dead RELNAMENSP entries after a schema
deletion, as per Horiguchi-san's second original suggestion; although
I haven't done so here because I'm not really convinced that that's got
an attractive cost-benefit ratio. (In both my patch and Horiguchi-san's,
we have to traverse all entries in the affected cache, so sending out one
of these messages is potentially not real cheap.)
To do this we need to adjust the representation of sinval messages so
that we can have two different kinds of messages that include a cache ID.
Fortunately, because there's padding space available, that's not costly.
0001 below is a simple refactoring patch that converts the message type
ID into a plain enum field that's separate from the cache ID if any.
(I'm inclined to apply this whether or not people like 0002: it makes
the code clearer, more maintainable, and probably a shade faster thanks
to replacing an if-then-else chain with a switch.) Then 0002 adds the
feature of an sinval message type saying "delete negative entries in
cache X that have OID Y in key column Z", and teaches RemoveStatistics
to use that.
regards, tom lane