RE: Copy data to DSA area

From: "ideriha(dot)takeshi(at)fujitsu(dot)com" <ideriha(dot)takeshi(at)fujitsu(dot)com>
To: 'Thomas Munro' <thomas(dot)munro(at)gmail(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, "robertmhaas(at)gmail(dot)com" <robertmhaas(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: RE: Copy data to DSA area
Date: 2019-10-16 03:22:06
Message-ID: OSAPR01MB198577358981A55879414DDFEA920@OSAPR01MB1985.jpnprd01.prod.outlook.com
Lists: pgsql-hackers

Hi,

Sorry for keeping you waiting.
>Thomas Munro <thomas(dot)munro(at)gmail(dot)com> wrote:
>>What do you think about the following? Even though I know you want to
>>start with much simpler kinds of cache, I'm looking ahead to the lofty
>>end-goal of having a shared plan cache. No doubt, that involves
>>solving many other problems that don't belong in this thread, but please indulge me:
>
>My initial motivation came from shared catcache and relcache but I also think shared
>plan cache is one of the big topics and I'd be very excited if it's come true. Sometimes
>making plan at each backend becomes enormous overhead for speed.
>
>>On Wed, Jul 10, 2019 at 6:03 PM Thomas Munro <thomas(dot)munro(at)gmail(dot)com> wrote:
>>> Hmm. I wonder if we should just make ShmContextFree() do nothing!
>>> And make ShmContextAlloc() allocate (say) 8KB chunks (or larger if
>>> needed for larger allocation) and then hand out small pieces from the
>>> 'current' chunk as needed. Then the only way to free memory is to
>>> destroy contexts, but for the use case being discussed, that might
>>> actually be OK. I suppose you'd want to call this implementation
>>> something different, like ShmRegionContext, ShmZoneContext or
>>> ShmArenaContext[1].
>>
>><after sleeping on this>
>>
>>I guess what I said above is only really appropriate for complex things
>>like plans that have their own contexts so that we can delete them
>>easily "in bulk". I guess it's not true for caches of simpler objects
>>like catcache, that don't want a context for each cached thing and want
>>to free objects "retail" (one by one). So I guess you might want
>>something more like your current patch for (say) SharedCatCache, and something
>>like the above-quoted idea for (say) SharedPlanCache or SharedRelCache.

I updated the shared memory context for SharedCatCache, which I call
ShmRetailContext. It is a refactored version of my previous PoC, in
which palloc() calls dsa_allocate() every time, in a retail fashion.
I also implemented MemoryContextClone(template_context,
short_lived_parent_context).
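
To show the intended call pattern, here is a rough usage sketch;
shared_template_context and the switch-to dance are just my assumptions
for illustration, not code lifted from the patch:

/*
 * Hypothetical usage sketch: build a catcache entry in a context cloned
 * from a shared template, with a short-lived local context as parent so
 * an aborted build gets cleaned up for free.
 */
MemoryContext entry_ctx =
    MemoryContextClone(shared_template_context,   /* template */
                       CurrentMemoryContext);     /* short-lived parent */
MemoryContext oldctx = MemoryContextSwitchTo(entry_ctx);

/* ... palloc() the entry and its members here ... */

MemoryContextSwitchTo(oldctx);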

>>For an implementation that supports retail free, perhaps you could
>>store the address of the clean-up list element in some extra bytes
>>before the returned pointer, so you don't have to find it by linear
>>search.
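
For the record, I understand the suggestion as something like the
following standalone sketch (the names are mine, only for illustration):

#include <stdlib.h>

/*
 * Each chunk is preceded by a small header holding the address of its
 * clean-up list element, so a retail free can find that element in O(1)
 * instead of by linear search.
 */
typedef struct ChunkHeader
{
    void       *cleanup_elem;   /* back-pointer into the clean-up list */
} ChunkHeader;

static void *
alloc_with_backptr(size_t size, void *cleanup_elem)
{
    ChunkHeader *hdr = malloc(sizeof(ChunkHeader) + size);

    if (hdr == NULL)
        return NULL;
    hdr->cleanup_elem = cleanup_elem;
    return (void *) (hdr + 1);      /* caller sees only the payload */
}

static void *
cleanup_elem_of(void *chunk)
{
    return ((ChunkHeader *) chunk - 1)->cleanup_elem;
}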

ShmRetailContext is intended to be used for SharedCatCache.
Here are some characteristics of the current CatCache entries:
1. The number of cache entries is generally larger than for the relcache
and plan cache. The relcache size is proportional to the number of
tables and indexes, while the catcache has many more kinds of entries,
and some kinds, such as pg_statistic, are proportional to the number of
attributes of each table.

2. A cache entry (CatCTup) is built with only one or two palloc() calls.

3. When cache entries are evicted from the hash table, they are deleted
one by one via pfree().

Because of point 1, I'd rather not have the extra pointer to the
clean-up list element. This pointer would be allocated per catcache
entry and would take up space. Also, because of point 2 (only one or
two chunks per entry), the clean-up list does not get very long, so a
linear search should be fine.

Also because of point 1, I didn't create a MemoryContext header for
each catalog cache: these headers would be located in shared memory and
take up space. So instead of MemoryContextSetParent(), I use
ShmRetailContextMoveChunk() (which I previously called
ChangeToPermShmContext()). This moves only the chunks from the locally
allocated parent to the shared parent memory context.
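
Stripped down to a standalone illustration, the idea of
ShmRetailContextMoveChunk() is the list splice below; the real code of
course deals with DSA pointers and the actual context internals:

#include <stddef.h>

/*
 * Each context tracks its chunks in a singly-linked list; "moving" them
 * just splices the source list onto the destination, without copying
 * chunk data or creating a context header per cache.
 */
typedef struct Chunk
{
    struct Chunk *next;
    /* payload follows */
} Chunk;

typedef struct Context
{
    Chunk      *chunks;         /* head of the chunk list */
} Context;

static void
move_chunks(Context *from, Context *to)
{
    Chunk      *tail;

    if (from->chunks == NULL)
        return;
    for (tail = from->chunks; tail->next != NULL; tail = tail->next)
        ;                       /* find the tail of the source list */
    tail->next = to->chunks;    /* splice the whole list over */
    to->chunks = from->chunks;
    from->chunks = NULL;
}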

Due to point 3, I think it is also OK for ShmRetailContext not to keep
a clean-up list in shared memory, since there is no situation in which
all chunks are freed at once.

What do you think about the above?

>>Next, I suppose you don't want to leave holes in the middle of
>>the array, so perhaps instead of writing NULL there, you could transfer
>>the last item in the array to this location (with associated concurrency
>>problems).
Done.
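
Concretely, the deletion now works like the sketch below (locking
omitted; that is where the concurrency problems you mention come in):

#include <stddef.h>

/*
 * Instead of leaving a NULL hole at slot i, relocate the last element
 * into the vacated slot and shrink the array. Any code remembering the
 * moved element's old index must be updated, hence the locking.
 */
static void
array_remove(void **items, size_t *nitems, size_t i)
{
    items[i] = items[*nitems - 1];
    (*nitems)--;
}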

ShmZoneContext for SharedPlanCache and SharedRelCache is not implemented
yet, but I'm going to implement it along the lines of your comments
above, as in the sketch below.
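
My current understanding of it, as a standalone sketch (malloc()
standing in for DSA allocation, and oversized requests left out for
brevity):

#include <stdlib.h>

#define ZONE_BLOCK_SIZE 8192

typedef struct ZoneBlock
{
    struct ZoneBlock *next;
    size_t      used;
    char        data[ZONE_BLOCK_SIZE];
} ZoneBlock;

typedef struct Zone
{
    ZoneBlock  *blocks;
} Zone;

/* Hand out pieces of the current 8KB block; pfree() would be a no-op. */
static void *
zone_alloc(Zone *zone, size_t size)
{
    ZoneBlock  *blk = zone->blocks;

    size = (size + 7) & ~(size_t) 7;    /* 8-byte alignment */
    if (size > ZONE_BLOCK_SIZE)
        return NULL;        /* dedicated oversized blocks omitted here */
    if (blk == NULL || blk->used + size > ZONE_BLOCK_SIZE)
    {
        blk = malloc(sizeof(ZoneBlock));
        if (blk == NULL)
            return NULL;
        blk->next = zone->blocks;
        blk->used = 0;
        zone->blocks = blk;
    }
    blk->used += size;
    return blk->data + blk->used - size;
}

/* The only way to reclaim memory: destroy the whole zone at once. */
static void
zone_destroy(Zone *zone)
{
    while (zone->blocks != NULL)
    {
        ZoneBlock  *next = zone->blocks->next;

        free(zone->blocks);
        zone->blocks = next;
    }
}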

Regards,
Takeshi Ideriha

Attachment Content-Type Size
shm_retail_context-v1.patch application/octet-stream 31.4 KB
