From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_sequence catalog
Andres Freund <andres(at)anarazel(dot)de> writes:
> On 2016-08-31 12:56:45 -0300, Alvaro Herrera wrote:
>> I was thinking that nextval could grab a shared buffer lock and release
>> it immediately, just to ensure no one holds an exclusive buffer lock
>> concurrently (which would be used for things like dropping one seq tuple
>> from the page when a sequence is dropped); then control access to each
>> sequence tuple using LockDatabaseObject. This is a HW lock, heavier
>> than a buffer's LWLock, but it seems better than wasting a full 8kB for
>> each sequence.
> That's going to be a *lot* slower; I don't think that's ok. I've a
> hard time worrying about the space waste here, especially considering
> where we're coming from.
Improving on the space wastage is exactly the point IMO. If it's still
going to be 8k per sequence on disk (*and* in shared buffers, remember),
I'm not sure it's worth all the work to change things at all.
We're already dealing with taking a heavyweight lock for each sequence
(the relation AccessShareLock). I wonder whether it'd be possible to
repurpose that effort somehow.
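For context, what nextval pays today per sequence is roughly this
(a simplified sketch of the idea, not the exact sequence.c code):

	/* heavyweight lock on the sequence relation, taken once per
	 * transaction and held until commit */
	LockRelationOid(seqoid, AccessShareLock);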
Another idea would be to have nominally per-sequence LWLocks (or
spinlocks?) controlling nextval's nontransactional accesses to the catalog
rows, but to map those down to some fixed number of locks in a way similar
to the current fallback implementation for spinlocks, which maps them onto
a fixed number of semaphores. You'd trade off shared memory against
contention while choosing the underlying number of locks.
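To make that concrete, here's a rough sketch of the mapping idea; all
the names (SeqLockFor, NUM_SEQ_LOCKS, SeqLockArray) are illustrative,
not existing code:

	#include "postgres.h"
	#include "storage/lwlock.h"

	#define NUM_SEQ_LOCKS 128	/* tunable: shared memory vs. contention */

	/* fixed pool of LWLocks, allocated in shared memory at startup
	 * (e.g. via ShmemInitStruct), analogous to the semaphore array
	 * used by the fallback spinlock implementation */
	static LWLockPadded *SeqLockArray;

	static inline LWLock *
	SeqLockFor(Oid seqoid)
	{
		/* trivial modulo hash; unrelated sequences may collide,
		 * which is the contention we accept to keep the pool small */
		return &SeqLockArray[seqoid % NUM_SEQ_LOCKS].lock;
	}

	/* nextval would then serialize its nontransactional access: */
	static void
	update_sequence_row(Oid seqoid)
	{
		LWLockAcquire(SeqLockFor(seqoid), LW_EXCLUSIVE);
		/* ... fetch and rewrite the sequence's catalog row ... */
		LWLockRelease(SeqLockFor(seqoid));
	}

With a pool like that, shared memory stays fixed no matter how many
sequences exist, and a collision just means two hot sequences
occasionally wait on each other.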
regards, tom lane