From: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
Subject: Re: Built-in connection pooling
I have obtained more results with the YCSB benchmark and built-in connection pooling.
An explanation of the benchmark and all results for vanilla Postgres and
Mongo are available in Oleg Bartunov's presentation about JSON (at the
end of the presentation):
As you can see, Postgres shows a significant slowdown as the number of
connections increases in the case of conflicting updates.
Built-in connection pooling can mitigate this problem:
Workload-B (5% of updates) ops/sec:
Session pool size/clients
Here the maximum is reached near 70 backends, which corresponds to the
number of physical cores on the target system.
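The effect of capping the number of concurrently active backends can be illustrated with a minimal client-side sketch using a semaphore (purely illustrative: the patch discussed in this thread implements pooling inside the server, and the class name `SessionPool` here is hypothetical):

```python
import threading

class SessionPool:
    """Sketch of the capping idea behind a session pool: at most
    pool_size queries execute concurrently; the rest wait for a slot.
    This is NOT the server-side implementation from the patch."""

    def __init__(self, pool_size):
        self._slots = threading.Semaphore(pool_size)

    def run(self, query_fn):
        # Block until one of the pool_size slots is free,
        # then execute the query and release the slot.
        with self._slots:
            return query_fn()
```

With a pool sized near the number of physical cores, many more client connections can be accepted than backends actually run, which is what keeps throughput from collapsing at high connection counts.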
But for workload A (50% of updates), the optimum is achieved at a much
smaller number of backends, after which performance degrades very quickly:
Session pool size
Here the maximum is reached at 32 backends, and with 70 backends
performance is 6 times worse.
This means it is difficult to find the optimal session pool size if the
workload varies.
If we set it too large, we get high contention from conflicting update
queries; if it is too small, we do not utilize all system resources on
read-only or non-conflicting queries.
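One way to think about picking a pool size under this tradeoff is a simple hill-climbing search: grow the pool while measured throughput keeps improving and stop at the first decline. The sketch below is hypothetical (no such tuner exists in the patch), and `measure_tps` is an assumed callback that runs the workload with a given pool size and returns observed transactions per second:

```python
def tune_pool_size(measure_tps, lo=8, hi=128, step=8):
    """Hypothetical hill-climbing tuner for the session pool size.

    measure_tps(n) is assumed to benchmark the workload with a pool
    of n backends and return throughput (transactions per second).
    We stop growing the pool as soon as throughput stops improving,
    on the assumption that contention now dominates."""
    best_n = lo
    best_tps = measure_tps(lo)
    n = lo + step
    while n <= hi:
        tps = measure_tps(n)
        if tps <= best_tps:
            break  # throughput declined: contention dominates beyond this point
        best_n, best_tps = n, tps
        n += step
    return best_n
```

For a read-mostly workload such a search would climb toward the core count; for an update-heavy workload it would stop much earlier, matching the benchmark observations above. A single static setting cannot do both, which is the problem described here.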
It looks like we have to do something with the Postgres locking mechanism
and maybe implement some contention-aware scheduler, as described here:
But this is a different story, not related to built-in connection pooling.
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company