Re: Patch: fix lock contention for HASHHDR.mutex

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Aleksander Alekseev <a(dot)alekseev(at)postgrespro(dot)ru>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Anastasia Lubennikova <a(dot)lubennikova(at)postgrespro(dot)ru>
Subject: Re: Patch: fix lock contention for HASHHDR.mutex
Date: 2016-03-20 23:42:04
Message-ID: CA+TgmoZ=fSf4TD=_pgHx+S1+Va2sExqdz=+YKKsba0ZjdLqHBg@mail.gmail.com
Lists: pgsql-hackers

On Sun, Mar 20, 2016 at 3:01 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> On Sat, Mar 19, 2016 at 7:02 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> On Sat, Mar 19, 2016 at 12:28 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
>> wrote:
>> > In theory, couldn't nentries overflow even without the patch, after
>> > running for a very long time? I think with the patch it is more prone
>> > to overflow because we start borrowing from other free lists as well.
>>
>> Uh, I don't think so. Without the patch, there is just one entries
>> counter and it goes up and down. How would it ever overflow?
>
> I thought it could overflow because there is no upper limit on
> incrementing it short of running out of memory (of course that is just a
> theoretical concern, since the decrements keep the number under control).
> So are you thinking about the risk of overflow with the patch because we
> have to sum the nentries values from all the arrays to get the total, or
> is there something else that makes you think changing nentries into an
> array of nentries makes it more prone to overflow?

Well, I mean, perhaps nentries could overflow if you had more than
2^32 elements, but I'm not even positive we support that. If you
assume a fixed table with a million entries, the nentries value can
vary only between 0 and a million. But now split that into a bunch of
separate counters. The increment when you allocate an entry and the
decrement when you put one back don't have to hit the same bucket, so
I'm not sure there's anything that prevents the counter for one bucket
from getting arbitrarily large and the counter for another bucket
getting arbitrarily small while still summing to a value between 0 and
a million.
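
To make that concrete, here is a minimal standalone sketch. This is not
the actual dynahash code; NUM_FREELISTS, alloc_entry, and free_entry are
made-up stand-ins for the per-freelist counters the patch introduces:

/* Hypothetical sketch, not the real dynahash or patch code. */
#include <stdio.h>

#define NUM_FREELISTS 32

/* one counter per freelist, instead of a single shared nentries */
static long nentries[NUM_FREELISTS];

/* allocating an entry charges whichever freelist the caller picked... */
static void
alloc_entry(int freelist)
{
    nentries[freelist]++;
}

/* ...but the matching free can land on a different freelist */
static void
free_entry(int freelist)
{
    nentries[freelist]--;
}

int
main(void)
{
    long    sum = 0;
    int     i;

    /* the table never holds more than one entry, yet the counters diverge */
    for (i = 0; i < 1000000; i++)
    {
        alloc_entry(0);
        free_entry(1);
    }

    for (i = 0; i < NUM_FREELISTS; i++)
        sum += nentries[i];

    printf("nentries[0] = %ld, nentries[1] = %ld, sum = %ld\n",
           nentries[0], nentries[1], sum);
    return 0;
}

After the loop, nentries[0] is +1000000 and nentries[1] is -1000000, even
though the sum (the real entry count) never left the range 0..1. Run it
longer and the individual counters drift without bound.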

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
