
Built-in connection pool

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: <pgsql-hackers(at)postgresql(dot)org>
Subject: Built-in connection pool
Date: 2010-06-28 15:17:17
Message-ID:
Lists: pgsql-hackers
The thread on Admission Control caused me to realize that one policy
I'd been considering as a possibly useful second tier (execution)
admission control policy was actually more appropriate as a first
tier (simple) admission control policy.  The difference (as defined
in the Hellerstein, Stonebraker & Hamilton paper) is that second
tier policies are enforced after initial query planning, so that
they can make use of information acquired during the planning phase;
first tier policies are enforced before that.
Now, some may not realize it, but PostgreSQL already has a first
tier policy, consisting of the max_connections and
superuser_reserved_connections GUCs, and maybe others.  What I'm
looking at could be considered an extension of that first tier or,
if you tilt your head right, could be considered a built-in
connection pool.  The mechanism would be not much more complex than
what we already have.
My experience with PostgreSQL benchmarks and production use has
indicated that performance falls off when you have more active
connections than required to saturate available resources.  It has
also indicated that we get best results by keeping a relatively
small number of database transactions busy until they complete,
queuing up requests for new transactions so that at saturation, a
new transaction begins as soon as a previous one ends.  "Relatively
small", in my experience, is typically somewhere near (2 *
core_count) + effective_spindle_count.
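As a hedged illustration of that rule of thumb (the function name and
the example hardware figures are my own, not part of the proposal):

```python
def suggested_max_active(core_count, effective_spindle_count):
    """Rule-of-thumb limit on concurrently active transactions:
    (2 * core_count) + effective_spindle_count."""
    return 2 * core_count + effective_spindle_count

# e.g. an 8-core server whose storage behaves like 4 spindles
print(suggested_max_active(8, 4))  # 20
```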
I think this could be achieved with the following:
When preparing to deal with a new statement:
 - acquire lockX
 - if not a superuser
 - and not in an active transaction
 - and countX >= max_active_transactions
 -   place current process at tail of waitX queue, and block
 -   (lockX would be released while blocked)
 - increment countX
 - release lockX
When preparing to complete (commit or rollback) a transaction:
 - acquire lockX
 - decrement countX
 - if countX < max_active_transactions
 - and waitX queue is not empty
 -   remove process at head of queue and unblock it
 - release lockX
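The two procedures above amount to a counting gate around transaction
start and completion.  Here is a toy model in Python, purely as a
sketch of the intended semantics (the class and method names are
invented for illustration; a `threading.Condition` stands in for lockX
plus the waitX queue, though unlike a real queue it does not guarantee
FIFO wakeup order):

```python
import threading

class AdmissionGate:
    """Toy model of the lockX/countX/waitX scheme sketched above."""

    def __init__(self, max_active_transactions):
        self.max_active = max_active_transactions
        self.countX = 0
        # Condition bundles the mutex (lockX) with the wait queue (waitX).
        self.lockX = threading.Condition()

    def begin_transaction(self, is_superuser=False):
        """Call when a new transaction starts; superusers never block."""
        with self.lockX:                       # acquire lockX
            while not is_superuser and self.countX >= self.max_active:
                self.lockX.wait()              # lockX released while blocked
            self.countX += 1                   # increment countX
        # lockX released on exit from the with-block

    def complete_transaction(self):
        """Call on commit or rollback."""
        with self.lockX:                       # acquire lockX
            self.countX -= 1                   # decrement countX
            if self.countX < self.max_active:
                self.lockX.notify()            # unblock one waiter
```

In use, each backend would call `begin_transaction()` when its
transaction starts and `complete_transaction()` at commit or rollback;
at saturation a waiter is released as soon as a running transaction
completes, which is exactly the queuing behavior described above.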
It's possible this code could be placed inside some block of code
which is already locked around transaction creation and completion;
otherwise, I suspect that a spinlock would suffice.  There's not a
lot going on inside a block of code protected by lockX.
It seems theoretically possible that user locks which span
transactions, if used in a certain way, could cause this scheme to
deadlock.  I'm skeptical that anyone is using them in such a way. 
If we're worried about that, however, there would need to be some
sort of deadlock detection.
Another issue would be how to show this to users.  If we could show
the transaction in pg_locks as having its virtualxid exclusive lock
not granted, with waiting showing as true in pg_stat_activity, that
would make sense to me.
Does this sound at all sane?

