Re: [HACKERS] Clock with Adaptive Replacement

From: Andrey Borodin <x4mmm(at)yandex-team(dot)ru>
To: Yura Sokolov <funny(dot)falcon(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Vladimir Sitnikov <sitnikov(dot)vladimir(at)gmail(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>, Stephen Frost <sfrost(at)snowman(dot)net>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] Clock with Adaptive Replacement
Date: 2018-05-06 08:20:00
Message-ID: AF116EDB-68D7-49FC-B32A-82C623BFF85B@yandex-team.ru
Lists: pgsql-hackers

> On 5 May 2018, at 13:25, Yura Sokolov <funny(dot)falcon(at)gmail(dot)com> wrote:
>
> On 05.05.2018 09:16, Andrey Borodin wrote:
>> Hi!
>>
>>> On 4 May 2018, at 16:05, Юрий Соколов <funny(dot)falcon(at)gmail(dot)com>
>>> wrote:
>>>
>>> I didn't suggest a log scale of usages, but rather a
>>> "replacement-period based" increment: the usage count could be
>>> incremented at most once per NBlocks/32 replaced items. Once it is
>>> incremented, its "replacement time" is remembered, and for the next
>>> NBlocks/32 replacements the usage count of this buffer is not
>>> incremented. This way, increments are synchronized with replacement
>>> activity.
>>
>> But you lose the difference between "touched once" and "actively
>> used". A log scale of usage solves this: the usage count grows
>> logarithmically, but drains linearly.
> No, I don't lose the difference. But instead of an absolute value (log
> scale or linear) I count how often in time a block is used:
> - if a buffer was touched 1000 times just after being placed into
> shared_buffers, should it live 500 times longer than a neighbor that
> was touched only 2 times? Or 10 times longer? Or 5 times longer?
> - but what if that "1000 times" buffer is not touched in the next
> period of time, while the neighbor is touched 2 times again?
> All effective algorithms answer: the "1000 times" buffer should be
> evicted first, and its neighbor is a really hot buffer that should be
> kept for a longer period.
It is actually hard to tell what the right decision is here. It is better to evict the buffer that will not be needed for the longest time, and it is not obvious that this holds for the buffer you called hot.

Assume we have buffer A, which is touched 1024 times (and then forgotten forever), and buffer B, which is touched 2 times every clock cycle.
                      A      B      Evicted
Usage count           0x400  0x2
1  Eviction time!     0x100  0x0    B
Usage count           0x100  0x2
2  Eviction time!     0x080  0x0    B
Usage count           0x080  0x2
3  Eviction time!     0x020  0x0    B
Usage count           0x020  0x2
4  Eviction time!     0x00A  0x0    B
Usage count           0x00A  0x2
5  Eviction time!     0x004  0x0    B
Usage count           0x004  0x2
6  Eviction time!     0x001  0x0    B
Usage count           0x001  0x2
7  Eviction time!     0x000  0x2    A
So a buffer that was used 512 times more survived only 7 stale cycles. Looks fair.
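
The drain rule behind the exact numbers above is not spelled out here, so purely for illustration here is a tiny standalone simulation of the same scenario under an assumed rule (A's count is simply halved on each eviction pass). It is only meant to show the shape of the behaviour, not to reproduce the table:

#include <stdio.h>

int
main(void)
{
	unsigned	a = 1024;		/* A: touched 1024 times up front, then cold */
	unsigned	b = 2;			/* B: re-touched twice before every pass */
	int			cycle = 0;

	while (a > 0)
	{
		cycle++;
		a /= 2;					/* assumed drain: halve A's count per pass */
		b = 0;					/* B drains to zero and is evicted... */
		printf("cycle %2d: A=%4u  B=%u (B evicted, then re-read)\n",
			   cycle, a, b);
		b = 2;					/* ...then it is re-read and touched twice */
	}
	printf("A is finally evicted after %d stale cycles\n", cycle);
	return 0;
}

With the halving assumption A survives eleven passes instead of seven, but the qualitative outcome is the same: the once-hot buffer dies after a bounded number of stale cycles while B keeps cycling through the cache.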

>
> Log scale doesn't solve this. But the "once per period" increment does,
> especially if a block is initially placed with a zero count (instead of
> 1, as it is currently).
>
>>> Digging further, I suggest the following improvements to the GClock
>>> algorithm:
>>> - place a new buffer with usage count = 0 (and do not increase its
>>>   usage count during the next NBlocks/32 replacements);
>>> - increment not by 1 but by 8 (this simulates the "hot queue" of
>>>   popular algorithms), with a limit of 32;
>>> - scan at most 25 buffers for eviction; if no buffer with a zero
>>>   usage count is found, the least used buffer among the 25 scanned is
>>>   evicted (new buffers are not evicted during their first NBlocks/32
>>>   replacements).
>>>
>>
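
To make sure I read that proposal right, here is a rough sketch of the variant as I understand it. Everything below is mine, not code from any posted patch: the names, the standalone clock loop, the reuse of a single last_bump field both as insertion time and last-increment time, and the assumption that the sweeping hand still decrements counts by 1 as in plain GClock.

#include <stdint.h>
#include <stddef.h>

#define NBUFFERS	1024	/* stand-in for NBuffers */
#define INC_STEP	8		/* proposed increment per period */
#define USAGE_LIMIT	32		/* proposed cap on the usage count */
#define SCAN_LIMIT	25		/* scan at most this many buffers */

typedef struct ToyBuffer
{
	uint32_t	usage_count;	/* 0 .. USAGE_LIMIT */
	uint64_t	last_bump;		/* replacement-clock value at insertion
								 * or at the last increment */
} ToyBuffer;

static ToyBuffer buffers[NBUFFERS];
static uint64_t replacements;	/* global count of evictions so far */
static uint64_t clock_hand;

/* A newly loaded page starts at zero and stays protected for
 * NBUFFERS/32 replacements. */
void
toy_insert(ToyBuffer *buf)
{
	buf->usage_count = 0;
	buf->last_bump = replacements;
}

/* Bump at most once per NBUFFERS/32 replacements, by 8, capped at 32. */
void
toy_touch(ToyBuffer *buf)
{
	if (replacements - buf->last_bump >= NBUFFERS / 32)
	{
		buf->usage_count += INC_STEP;
		if (buf->usage_count > USAGE_LIMIT)
			buf->usage_count = USAGE_LIMIT;
		buf->last_bump = replacements;
	}
}

/*
 * Scan at most 25 buffers.  Zero-count buffers still inside their first
 * NBUFFERS/32 replacements are treated as new and skipped.  Evict the
 * first zero-count buffer found, otherwise the least used one among
 * those scanned.
 */
ToyBuffer *
toy_evict(void)
{
	ToyBuffer  *victim = NULL;

	for (int scanned = 0; scanned < SCAN_LIMIT; scanned++)
	{
		ToyBuffer  *buf = &buffers[clock_hand++ % NBUFFERS];

		if (buf->usage_count == 0 &&
			replacements - buf->last_bump < NBUFFERS / 32)
			continue;			/* new buffer: not evictable yet */

		if (buf->usage_count == 0)
		{
			victim = buf;
			break;
		}
		if (victim == NULL || buf->usage_count < victim->usage_count)
			victim = buf;
		buf->usage_count--;		/* assumed GClock-style decay on sweep */
	}

	if (victim != NULL)
	{
		replacements++;
		toy_insert(victim);		/* reuse the frame for the new page */
	}
	return victim;
}
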
>> I do not understand where these numbers come from...
>
> I found these numbers by testing with several artificial traces found
> on the web. I don't claim these numbers are the best. Even on those
> traces the best values may vary with cache size: for a small cache the
> increment and limit tend to be higher, for a huge cache smaller. But
> these were the most balanced.
>
> And I don't claim those traces are representative of PostgreSQL; that
> is why I'm pushing this discussion to collect more real-world
> PostgreSQL traces and make them public.
>
> And I believe my algorithm is not the best. Clock-Pro and ARC show
> better results on those traces. Tiny-LFU, a cache admission algorithm,
> may be even more efficient (in terms of evictions). But the results
> should be rechecked with PostgreSQL traces.
>
> My algorithm would just be the least invasive for the current code, imho.

Here's the demo patch with the logarithmic scale. Only 2 lines are changed.
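
The patch itself is attached rather than reproduced here. Purely to illustrate what "grows logarithmically, drains linearly" can look like in code, below is a small sketch of one standard way to do it (a Morris-style probabilistic increment). The names and the cap are mine, and this is not claimed to be the attached two-line change:

#include <stdint.h>
#include <stdlib.h>

#define LOG_USAGE_MAX	16		/* arbitrary cap, just for the sketch */

/*
 * Make the usage count grow logarithmically: bump it only with
 * probability 2^-count (a Morris-style probabilistic counter), so that
 * reaching count n takes on the order of 2^n touches.
 */
static inline void
log_scale_touch(uint16_t *usage_count)
{
	if (*usage_count >= LOG_USAGE_MAX)
		return;
	/* keep the low `count` bits of a random value; all-zero means "bump" */
	if ((random() & ((1UL << *usage_count) - 1)) == 0)
		(*usage_count)++;
}

/* The clock sweep keeps draining linearly, one step per pass, as before. */
static inline void
clock_sweep_drain(uint16_t *usage_count)
{
	if (*usage_count > 0)
		(*usage_count)--;
}
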

Best regards, Andrey Borodin.
