Re: Patch: fix lock contention for HASHHDR.mutex

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Aleksander Alekseev <a(dot)alekseev(at)postgrespro(dot)ru>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Anastasia Lubennikova <a(dot)lubennikova(at)postgrespro(dot)ru>
Subject: Re: Patch: fix lock contention for HASHHDR.mutex
Date: 2016-03-21 03:29:49
Message-ID: CAA4eK1JqCM=squb1c1DiNL9rA=htv5Xb5rbgRRwUHXniWoLEHA@mail.gmail.com
Lists: pgsql-hackers

On Mon, Mar 21, 2016 at 5:12 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Sun, Mar 20, 2016 at 3:01 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > On Sat, Mar 19, 2016 at 7:02 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> >> On Sat, Mar 19, 2016 at 12:28 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> >> > Won't nentries, in theory, overflow even without the patch after
> >> > running for a very long time? I think with the patch it is more
> >> > prone to overflow because we start borrowing from other free lists
> >> > as well.
> >>
> >> Uh, I don't think so. Without the patch, there is just one entries
> >> counter and it goes up and down. How would it ever overflow?
> >
> > I thought it could overflow because there is no upper limit on
> > incrementing it other than running out of memory (of course that is
> > just a theoretical concern, as the decrements keep the number in
> > check). So are you seeing a risk of overflow with the patch because we
> > have to sum the nentries values from all the arrays to get the total,
> > or is there something else that makes you think splitting nentries
> > into an array of nentries makes it more prone to overflow?
>
> Well, I mean, perhaps nentries could overflow if you had more than
> 2^32 elements, but I'm not even positive we support that. If you
> assume a fixed table with a million entries, the nentries value can
> vary only between 0 and a million. But now split that into a bunch of
> separate counters. The increment when you allocate an entry and the
> decrement when you put one back don't have to hit the same bucket,
>

This is the point where I think I am missing something about the patch. As
far as I can understand, it uses the same freelist index (freelist_idx) both
for allocating an entry and for putting it back, so an increment in one list
and a decrement in another could only happen if freelist_idx were calculated
differently for the same input. Is that so, or is there something else in
the patch that I am missing?
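
To make sure we are reading the patch the same way, here is a minimal
sketch of my understanding (the names, the structure layout, and the modulo
choice are illustrative assumptions, not the exact patch code):

#include <stdint.h>

/*
 * Sketch of my understanding (illustrative, not the exact patch code):
 * each freelist carries its own nentries counter, and the freelist index
 * is derived from the entry's hash value, so the HASH_ENTER and
 * HASH_REMOVE paths for the same key should touch the same counter.
 */
#define NUM_FREELISTS 32

typedef struct
{
	long		nentries;		/* live entries accounted to this freelist */
	void	   *freeList;		/* chain of free elements */
} FreeListData;

static FreeListData freelists[NUM_FREELISTS];

static inline int
freelist_idx(uint32_t hashcode)
{
	return hashcode % NUM_FREELISTS;
}

/* HASH_ENTER path: take an element from the list chosen by the hash value */
static void
entry_alloc(uint32_t hashcode)
{
	freelists[freelist_idx(hashcode)].nentries++;
}

/* HASH_REMOVE path: return the element to the list chosen by the same hash */
static void
entry_free(uint32_t hashcode)
{
	freelists[freelist_idx(hashcode)].nentries--;
}

If that reading is right, the per-list counters should stay balanced for a
given key, and the drift you describe would only appear if the index can
come out differently on the two paths.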

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
