From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Arthur Zakirov <a(dot)zakirov(at)postgrespro(dot)ru>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [PROPOSAL] Shared Ispell dictionaries
Date: 2019-02-21 12:45:32
Message-ID: CA+TgmoYiWVvgUrDBJXVG9Crpg=s=Y1BLhMwVd1RZQOq5aF4Ctw@mail.gmail.com
Lists: pgsql-hackers
On Wed, Feb 20, 2019 at 9:33 AM Arthur Zakirov <a(dot)zakirov(at)postgrespro(dot)ru> wrote:
> I'm working on the (b) approach. I thought about a priority queue
> structure. There is no such ready-made structure in the PostgreSQL
> sources except binaryheap.c, and it isn't designed for concurrent use.
I don't see why you need a priority queue or, really, any other fancy
data structure. It seems like all you need to do is somehow set it up
so that a backend which doesn't use a dictionary for a while will
dsm_detach() the segment. Eventually an unused dictionary will have
no remaining references and will go away.
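To illustrate the idea, here is a minimal self-contained sketch of that scheme. The names (dict_segment, dict_ref, maybe_detach, IDLE_TIMEOUT) are hypothetical, not PostgreSQL API; the destroyed flag stands in for what dsm_detach() would do when the last backend mapping of a DSM segment goes away.

```c
#include <stdbool.h>
#include <time.h>

/* Hypothetical model: each backend holds a reference to the dictionary's
 * shared segment; a backend that hasn't used the dictionary for a while
 * detaches, and the segment is destroyed once the last reference drops. */

#define IDLE_TIMEOUT 300        /* seconds without use before detaching */

typedef struct
{
    int         refcount;       /* number of attached backends */
    bool        destroyed;      /* set when the last reference goes away */
} dict_segment;

typedef struct
{
    dict_segment *seg;          /* NULL when not attached */
    time_t      last_used;      /* last time this backend used the dict */
} dict_ref;

static void
dict_attach(dict_ref *ref, dict_segment *seg, time_t now)
{
    seg->refcount++;
    ref->seg = seg;
    ref->last_used = now;
}

static void
dict_detach(dict_ref *ref)
{
    /* In PostgreSQL this would be dsm_detach(); the segment itself
     * disappears once no mappings remain. */
    if (ref->seg && --ref->seg->refcount == 0)
        ref->seg->destroyed = true;
    ref->seg = NULL;
}

/* Called periodically, or on each dictionary lookup: drop the reference
 * if this backend has been idle on the dictionary too long. */
static void
maybe_detach(dict_ref *ref, time_t now)
{
    if (ref->seg && now - ref->last_used > IDLE_TIMEOUT)
        dict_detach(ref);
}
```

With this shape, no priority queue is needed: each backend only tracks its own last-use time, and the "garbage collection" falls out of the reference count reaching zero.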
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company