Re: Resource Owner reassign Locks

From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Resource Owner reassign Locks
Date: 2012-06-16 02:07:00
Message-ID: CAMkU=1yWzuwsPhAdhKSZa2ShbS2xkOfi9yi_LH=4NtymPqnGdg@mail.gmail.com
Lists: pgsql-hackers

On Fri, Jun 15, 2012 at 3:29 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Jeff Janes <jeff(dot)janes(at)gmail(dot)com> writes:
>> On Mon, Jun 11, 2012 at 9:30 PM, Amit Kapila <amit(dot)kapila(at)huawei(dot)com> wrote:
>>> MAX_RESOWNER_LOCKS - How did you arrive at number 10 for it. Is there any
>>> specific reason for 10.
>
>> I instrumented the code to record the maximum number of locks held by
>> a resource owner, and report the max when it was destroyed.  (That
>> code is not in this patch).  During a large pg_dump, the vast majority
>> of the resource owners had maximum locks of 2, with some more at 4
>> and 6.  Then there was one resource owner, for the top-level
>> transaction, at tens or hundreds of thousands (basically one for every
>> lockable object).  There was little between 6 and this top-level
>> number, so I thought 10 was a good compromise, safely above 6 but not
>> so large that searching through the list itself was likely to bog
>> down.
>
>> Also, Tom independently suggested the same number.
>
> FYI, I had likewise suggested 10 on the basis of examining pg_dump's
> behavior.  It might be a good idea to examine a few other use-cases
> before settling on a value.
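
For the archives, the mechanism under discussion looks roughly like
this (a simplified sketch, not the exact patch text; LOCALLOCK and
ResourceOwner are the existing types from lock.h and resowner.h):

    #define MAX_RESOWNER_LOCKS 10       /* the constant in question */

    typedef struct ResourceOwnerData
    {
        /* ... existing fields ... */
        int         nlocks;         /* number of owned locks tracked */
        LOCALLOCK  *locks[MAX_RESOWNER_LOCKS];  /* list of owned locks */
    } ResourceOwnerData;

    /*
     * Remember that locallock is owned by owner.  Once the array
     * overflows, we stop remembering individual locks and only count.
     */
    void
    ResourceOwnerRememberLock(ResourceOwner owner, LOCALLOCK *locallock)
    {
        if (owner->nlocks > MAX_RESOWNER_LOCKS)
            return;                 /* already overflowed; give up */
        if (owner->nlocks < MAX_RESOWNER_LOCKS)
            owner->locks[owner->nlocks] = locallock;
        owner->nlocks++;
    }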

Looking at the logging output of a "make check" run, there are many
cases where the list would have overflowed (max locks was >10), but in
all of them the number of locks held at the time of destruction was
equal to, or only slightly less than, the size of the local lock hash
table.  So iterating over a large remembered list would not save much
work over iterating over the entire hash table (although the constant
factor for walking pointers in an array might be smaller than the
constant factor for using a hash iterator).
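
Spelled out, the choice at ResourceOwner release time is roughly this
(sketch only; ReleaseLockIfHeld stands in for the existing per-lock
release logic in lock.c, and LockMethodLocalHash is the backend-local
lock hash table):

    int             i;
    HASH_SEQ_STATUS status;
    LOCALLOCK      *locallock;

    if (owner->nlocks <= MAX_RESOWNER_LOCKS)
    {
        /* Common case: walk the short remembered array. */
        for (i = owner->nlocks - 1; i >= 0; i--)
            ReleaseLockIfHeld(owner->locks[i], false);
    }
    else
    {
        /* Overflowed: fall back to scanning the whole lock table. */
        hash_seq_init(&status, LockMethodLocalHash);
        while ((locallock = (LOCALLOCK *) hash_seq_search(&status)) != NULL)
            ReleaseLockIfHeld(locallock, false);
    }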

Running pg_dump against more complex structures (a table with multiple
toasted columns and multiple unique indexes, plus inherited tables)
does reach a higher maximum lock count, but that maximum doesn't seem
to depend on how many toasted columns and indexes exist.  A maximum of
9 locks occurs quite frequently while the lock table is large, so that
is uncomfortably close to overflowing.  Adding sequences (or at least,
columns of type serial) doesn't seem to increase the maximum.

I don't know if there is a more principled way of approaching this.

There are probably cases where maintaining the list of locks is a loss
rather than a gain, but since I don't know how to create them, I can't
evaluate what the trade-off of increasing the max might be.

I'm inclined to increase the max from 10 to 15 to reclaim a margin of
safety, and leave it at that, unless someone can recommend a better
test case.

Cheers,

Jeff
