From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Heikki Linnakangas" <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Cc: <simon(at)2ndQuadrant(dot)com>, <markus(at)bluegap(dot)ch>, <drkp(at)csail(dot)mit(dot)edu>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: SSI patch version 14
Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com> wrote:
>> (2) The predicate lock and lock target initialization code was
>> initially copied and modified from the code for heavyweight
>> locks. The heavyweight lock code adds 10% to the calculated
>> maximum size. So I wound up doing that for
>> PredicateLockTargetHash and PredicateLockHash, but didn't do it
>> for SerializableXidHash. Should I eliminate this from the first
>> two, add it to the third, or leave it alone?
> I'm inclined to eliminate it from the first two. Even in
> LockShmemSize(), it seems a bit weird to add a safety margin, the
> sizes of the lock and proclock hashes are just rough estimates
I'm fine with that. Trivial patch attached.
> * You missed that RWConflictPool is sized five times as large as
> SerializableXidHash, and
> * The allocation for RWConflictPool elements was wrong, while the
> estimate was correct.
> With these changes, the estimated and actual sizes match closely,
> so that actual hash table sizes are 50% of the estimated size.
> I fixed those bugs,
Thanks. Sorry for missing them.
> but this doesn't help with the buildfarm members with limited
> shared memory yet.
Well, if dropping the 10% fudge factor on those two HTABs doesn't
bring it down far enough (which seems unlikely), what do we do? We
could, as I said earlier, bring down the multiplier for the number
of transactions we track in SSI based on the maximum allowed
connections, but I would really want a GUC on it if we do that. We
could bring down the default number of predicate locks per
transaction. We could make the default configuration more stingy
about max_connections when memory is this tight. Other ideas are
welcome.
I do think that anyone using SSI with a heavy workload will need
something like the current values to see decent performance, so it
would be good if there was some way to do this which would tend to
scale up as they increased something. Wild idea: make the
multiplier equivalent to the bytes of shared memory divided by 100MB
clamped to a minimum of 2 and a maximum of 10?