2010/12/8 Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>>> Yeah, that was my concern, too, though Tom seems skeptical (perhaps
>>> rightly). And I'm not really sure why the PROCLOCKs need to be in a
>>> hash table anyway - if we know the PROC and LOCK we can surely look up
>>> the PROCLOCK pretty expensively by following the PROC SHM_QUEUE.
>> Err, pretty INexpensively.
> There are plenty of scenarios in which a proc might hold hundreds or
> even thousands of locks. pg_dump, for example. You do not want to be
> doing seq search there.
> Now, it's possible that you could avoid *ever* needing to search for a
> specific PROCLOCK, in which case eliminating the hash calculation
> overhead might be worth it.
That seems like it might be feasible. The backend that holds the lock
ought to be able to find out whether there's a PROCLOCK by looking at
the LOCALLOCK table, and the LOCALLOCK has a pointer to the PROCLOCK.
It's not clear to me whether there's any other use case for doing a
lookup for a particular combination of PROC A + LOCK B, but I'll have
to look at the code more closely.
> Of course, you'd still have to replicate
> all the space-management functionality of a shared hash table.
Maybe we ought to revisit Markus Wanner's wamalloc. Although given
our recent discussions, I'm thinking that you might want to try to
design any allocation system so as to minimize cache line contention.
For example, you could hard-allocate each backend 512 bytes of
dedicated shared memory in which to record the locks it holds. If it
needs more, it allocates additional 512 byte chunks.