RE: Protect syscache from bloating with negative cache entries

From: "Tsunakawa, Takayuki" <tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com>
To: 'Kyotaro HORIGUCHI' <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: "alvherre(at)2ndquadrant(dot)com" <alvherre(at)2ndquadrant(dot)com>, "tomas(dot)vondra(at)2ndquadrant(dot)com" <tomas(dot)vondra(at)2ndquadrant(dot)com>, "bruce(at)momjian(dot)us" <bruce(at)momjian(dot)us>, "andres(at)anarazel(dot)de" <andres(at)anarazel(dot)de>, "robertmhaas(at)gmail(dot)com" <robertmhaas(at)gmail(dot)com>, "tgl(at)sss(dot)pgh(dot)pa(dot)us" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Ideriha, Takeshi" <ideriha(dot)takeshi(at)jp(dot)fujitsu(dot)com>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, "michael(dot)paquier(at)gmail(dot)com" <michael(dot)paquier(at)gmail(dot)com>, "david(at)pgmasters(dot)net" <david(at)pgmasters(dot)net>, "craig(at)2ndquadrant(dot)com" <craig(at)2ndquadrant(dot)com>
Subject: RE: Protect syscache from bloating with negative cache entries
Date: 2019-02-19 05:57:58
Message-ID: 0A3221C70F24FB45833433255569204D1FB9D796@G01JPEXMBYT05
Lists: pgsql-hackers

Hi Horiguchi-san,

Here are the rest of my review comments.

(5) patch 0003
CatcacheClockTimeoutPending = 0;
+
+ /* Update timetamp then set up the next timeout */
+

false is better than 0, to match the other *Pending variables.

timetamp -> timestamp
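
That is:

    CatcacheClockTimeoutPending = false;

    /* Update timestamp, then set up the next timeout */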

(6) patch 0003
GetCatCacheClock() is not used yet. Why don't we add it when the need arises?

(7) patch 0003
Why don't we remove the catcache timer (Setup/UpdateCatCacheClockTimer) unless we absolutely need it? That would simplify the code.

Long-running queries can be thought of as follows:

* A single lengthy SQL statement, e.g. SELECT for reporting/analytics, COPY for data loading, or UPDATE/DELETE for batch processing, should only require a small number of catalog entries during its query analysis/planning, and won't suffer from cache eviction during query execution.

* We do not have to evict cache entries while a long-running stored procedure executes, because its constituent SQL statements may access the same tables. If the stored procedure accesses so many tables that catcache memory overuse becomes a concern, catalog_cache_max_size can be used. Another natural idea would be to update the cache clock when SPI executes each SQL statement.
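
A rough sketch of the latter (SetCatCacheClock() is my guess at the patch's clock-setting function; the actual name may differ):

    /* e.g. in _SPI_execute_plan(), before running each statement */
    SetCatCacheClock(GetCurrentStatementStartTimestamp());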

(8) patch 0003
+ uint64 base_size;
+ uint64 base_size = MemoryContextGetConsumption(CacheMemoryContext);

This may as well be Size rather than uint64, too.
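
That is:

    Size        base_size = MemoryContextGetConsumption(CacheMemoryContext);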

(9) patch 0003
@@ -1940,7 +2208,7 @@ CatCacheFreeKeys(TupleDesc tupdesc, int nkeys, int *attnos, Datum *keys)
/*
* Helper routine that copies the keys in the srckeys array into the dstkeys
* one, guaranteeing that the datums are fully allocated in the current memory
- * context.
+ * context. Returns allocated memory size.
*/
static void
CatCacheCopyKeys(TupleDesc tupdesc, int nkeys, int *attnos,
@@ -1976,7 +2244,6 @@ CatCacheCopyKeys(TupleDesc tupdesc, int nkeys, int *attnos,
att->attbyval,
att->attlen);
}
-
}

These changes seem to be no longer necessary thanks to the memory accounting.

(10) patch 0004
How about separating this into another thread, so that the rest of the patch set becomes easier to review and commit?

Regarding the design, I'm inclined to avoid having each backend write the file. To simplify the code, I think we can take advantage of a fortunate situation -- the numbers of backends and catcaches are fixed at server startup. My rough sketch is as follows (a code sketch comes after the list):

* Allocate an array of statistics entries in shared memory, whose elements are (pid or backend id, catcache id or name, hits, misses, ...). The number of array elements is MaxBackends * the number of catcaches (a few dozen).

* Each backend updates its own entry in the shared memory during query execution.

* The stats collector periodically scans the array and writes it to the stats file.
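
Here is a concrete sketch of the array (all names are my invention, not from the patch; SysCacheSize is the syscache count as in syscache.c):

    typedef struct CatCacheStatsEntry
    {
        pid_t       pid;        /* owning backend's PID, 0 if slot unused */
        int         cacheid;    /* syscache id */
        uint64      hits;
        uint64      misses;
    } CatCacheStatsEntry;

    /* MaxBackends * SysCacheSize entries, allocated in shared memory */
    static CatCacheStatsEntry *CatCacheStatsArray;

    /*
     * Each backend writes only its own slots, so updates need no lock;
     * the stats collector reads the array unlocked and tolerates
     * slightly stale values.  BackendId is 1-based.
     */
    #define CatCacheStatsSlot(backendId, cacheid) \
        (&CatCacheStatsArray[((backendId) - 1) * SysCacheSize + (cacheid)])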

(11) patch 0005
+dlist_head cc_lru_list = {0};
+Size global_size = 0;

It would be better to put these in CatCacheHeader. That way, backends that do not access the catcache (archiver, stats collector, etc.) do not have to waste memory on these global variables.
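
For example (the first two fields are the existing ones in catcache.h):

    typedef struct catcacheheader
    {
        slist_head  ch_caches;      /* head of list of CatCache structs */
        int         ch_ntup;        /* # of tuples in all caches */
        dlist_head  cc_lru_list;    /* moved from the file-level global */
        Size        global_size;    /* moved from the file-level global */
    } CatCacheHeader;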

(12) patch 0005
+ else if (catalog_cache_max_size > 0 &&
+ global_size > catalog_cache_max_size * 1024)
CatCacheCleanupOldEntries(cache);

On the second line, catalog_cache_max_size should be cast to Size to avoid overflow.
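
That is:

    else if (catalog_cache_max_size > 0 &&
             global_size > (Size) catalog_cache_max_size * 1024)
        CatCacheCleanupOldEntries(cache);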

(13) patch 0005
+ gettext_noop("Sets the maximum size of catcache in kilobytes."),

catcache -> catalog cache

(14) patch 0005
+ CatCache *owner; /* owner catcache */

CatCTup already has the my_cache member, so owner seems redundant.

(15) patch 0005
if (nremoved > 0)
elog(DEBUG1, "pruning catalog cache id=%d for %s: removed %d / %d",
cp->id, cp->cc_relname, nremoved, nelems_before);

In the prune-by-size case, this elog doesn't output very meaningful data. How about dividing this function into two, one for prune-by-age and another for prune-by-size? I suppose that would make the functions easier to understand.
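
A rough sketch of the split (the function names are my invention):

    /*
     * Remove entries not accessed for longer than the age threshold;
     * the current "removed %d / %d" message stays meaningful here.
     */
    static bool CatCacheCleanupByAge(CatCache *cp);

    /*
     * Evict entries from the global LRU list until global_size fits
     * under catalog_cache_max_size; this one can report the number of
     * entries and bytes freed instead.
     */
    static bool CatCacheCleanupBySize(void);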

Regards
Takayuki Tsunakawa
