On Tue, May 31, 2011 at 9:22 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> The basis for this is that weak locks do not conflict with each other,
> whereas strong locks conflict with both strong and weak locks.
> (There's a couple of special cases which I ignore for now).
> (Using Robert's description of strong/weak locks)
> Since most actions in normal running only require weak locks then we
> see that 99% of the time we don't need to share lock information at all.
> So the idea is that we have 2 modes of operation: mode (1) when nobody
> is requesting a strong lock we don't share lock information. We switch
> into mode (2) when somebody requests a strong lock and in this mode we
> must share all lock information just as we do now.
> The basic analysis is that we have a way of removing 99% of the
> overhead of lock information sharing. We still have the local lock
> table and we still perform locking, we just don't share that with
> other backends unless we need to. So there is a slight reduction in
> path length and a total avoidance of contention.
> Ideally, we would want to be in mode 2 for a short period of time.
> The difficulty is how to move from mode 1 (non-shared locking) to mode
> 2 (shared locking) and back again.
> A strong lock request causes the mode flip automatically via one of
> these mechanisms:
> 1. signal to each backend causes them to update shared lock
> information (at that point non-conflicting)
> 2. local lock table in shared memory
> 3. files
> 4. other
> The requirement is that the mode be flipped in all backends before we
> process the request for a strong lock.
> The idea is to make the local lock table accessible for occasional use
> in mode switching. Reading the local lock table by its owning backend
> would always be lock free. Locks are only required when modifying the
> local lock table by the owning backend, or when another backend reads
> it. So making the local lock table accessible is not a problem.
You can't actually make the local lock table lock-free to the owning
backend, if other backends are going to be modifying it, or even
reading it.
However, as discussed upthread, what does seem possible is to allow
each backend to maintain a queue of "weak" locks that are protected by
an LWLock which is normally taken only by the owning backend, except
on those rare occasions when a "strong" lock enters the picture. This
doesn't completely eliminate LWLocks from the picture, but preliminary
tests with my hacked-up, work-in-progress patch show that it results
in a very large decrease in LWLock *contention*.  I'm going to post
the patch once I get it debugged and tested a bit more.
The Enterprise PostgreSQL Company