From: | Arthur Zakirov <a(dot)zakirov(at)postgrespro(dot)ru> |
---|---|
To: | Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de> |
Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: [PROPOSAL] Shared Ispell dictionaries |
Date: | 2019-01-21 16:42:45 |
Message-ID: | 5113daa6-b6e7-59f6-c4a8-96b5b81474fb@postgrespro.ru |
Lists: | pgsql-hackers |
On 21.01.2019 17:56, Tomas Vondra wrote:
> On 1/21/19 12:51 PM, Arthur Zakirov wrote:
>> I'll try to implement the syntax, you suggested earlier:
>>
>> ALTER TEXT SEARCH DICTIONARY x UNLOAD/RELOAD
>>
>> The main point here is that UNLOAD/RELOAD can't release the memory
>> immediately, because some other backend may pin a DSM.
>>
>> The second point we should consider (I think) - how do we know which
>> dictionary should be unloaded. There was such a function earlier, which
>> was removed. But what about adding the information to the "\dFd" psql
>> command's output? It could be a column which shows whether a dictionary
>> is loaded.
>>
> ...The only thing we have is "unload" capability by closing the
> connection...
BTW, even if the connection was closed and there are no other
connections, a dictionary still remains "loaded". This is because
dsm_pin_segment() is called when the dictionary is loaded into DSM, so
the segment survives backend exit.
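To make the lifetime issue concrete, here is a minimal model of the pin
semantics (this is an illustrative Python sketch, not the real dsm.c
API; the class and method names are hypothetical): pinning adds an extra
reference, so the segment is not destroyed when the last backend
detaches.

```python
# Hypothetical model of DSM segment lifetime. A segment is destroyed
# only when its reference count drops to zero; a pin holds an extra
# reference, so the segment outlives the backend that created it.

class Segment:
    def __init__(self):
        self.refcnt = 1      # creator's attach counts as one reference
        self.pinned = False

    def pin(self):
        # Analogous to dsm_pin_segment(): take one extra, permanent
        # reference the first time the segment is pinned.
        if not self.pinned:
            self.pinned = True
            self.refcnt += 1

    def detach(self):
        # Analogous to detaching on backend exit. Returns True if the
        # segment was actually destroyed.
        self.refcnt -= 1
        return self.refcnt == 0

seg = Segment()
seg.pin()                    # dictionary loaded and pinned
destroyed = seg.detach()     # the only backend disconnects
# destroyed is False: the pin still holds a reference, so the
# dictionary stays "loaded" with no connections open.
```

This is exactly why closing the connection is not an "unload": some
explicit unpin step would be needed before the memory can ever go away.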
> ...
> I wonder if we could devise some simple cache eviction policy. We don't
> have any memory limit GUC anymore, but maybe we could use unload
> dictionaries that were unused for sufficient amount of time (a couple of
> minutes or so). Of course, the question is when exactly would it happen
> (it seems far too expensive to invoke on each dict access, and it should
> happen even when the dicts are not accessed at all).
Yes, I thought about such a feature too. Agreed, it could be expensive,
since we would need to scan the pg_ts_dict table to get the list of
dictionaries (we can't scan a dshash_table).
I don't have a good solution yet. One thought is to bring back
max_shared_dictionaries_size. Then, once the size limit is reached, we
could scan the pg_ts_dict table and unload the dictionaries that were
accessed longest ago.
We can't enforce an exact size limit, since we can't release the memory
immediately. So max_shared_dictionaries_size could be renamed to
shared_dictionaries_threshold. If it is set to "0", PostgreSQL places no
limit on the space used for dictionaries.
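The threshold-plus-LRU idea above can be sketched roughly as follows
(a hypothetical Python model, not patch code; names like
SharedDictCache and the per-entry size/timestamp fields are assumptions
for illustration):

```python
# Sketch of a threshold-based eviction policy for shared dictionaries.
# Assumes each dictionary's size and last-access time are tracked; in
# the real patch the candidate list would come from scanning
# pg_ts_dict, and "unload" would only unpin a DSM segment (the memory
# is reclaimed later, once the last backend detaches).

import time

class DictEntry:
    def __init__(self, name, size):
        self.name = name
        self.size = size
        self.last_access = time.monotonic()

class SharedDictCache:
    def __init__(self, threshold_bytes):
        # 0 means "unlimited", mirroring the proposed GUC semantics.
        self.threshold = threshold_bytes
        self.entries = {}

    def total_size(self):
        return sum(e.size for e in self.entries.values())

    def access(self, name, size):
        # Touch (or load) a dictionary, then check the threshold.
        entry = self.entries.setdefault(name, DictEntry(name, size))
        entry.last_access = time.monotonic()
        if self.threshold and self.total_size() > self.threshold:
            self.evict()
        return entry

    def evict(self):
        # Unload least-recently-used dictionaries until we are back
        # under the threshold.
        for e in sorted(self.entries.values(),
                        key=lambda e: e.last_access):
            if self.total_size() <= self.threshold:
                break
            del self.entries[e.name]
```

Note that eviction only runs on access here, which matches the concern
upthread: something (a background process, perhaps) would be needed to
evict dictionaries that are not being accessed at all.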
--
Arthur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company