Re: Built-in connection pooling

From: Vladimir Sitnikov <sitnikov(dot)vladimir(at)gmail(dot)com>
To: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Built-in connection pooling
Date: 2018-02-02 20:20:44
Message-ID: CAB=Je-GXkScQfwPbJ+3oJroBXsNJbQ3+EgnCvnX1mtdmnAaxYQ@mail.gmail.com
Lists: pgsql-hackers

Konstantin> I do not have explanation of performance degradation in case of this particular workload.

A) The Mongo Java client uses a connection pool of 100 connections by default.
That is, it does not follow "connection per client" (in YCSB terms) but is
capped at 100 connections. I think this can be adjusted by adding
?maxPoolSize=100500 or ?maxpoolsize=100500 to the Mongo URL.

I wonder if you could try varying that parameter and see whether it changes
the Mongo results.

B) There's a bug in the JDBC client of YCSB (it might affect the PostgreSQL
results, though I'm not sure the impact would be noticeable). The default
configuration is readallfields=true, yet the JDBC client just discards the
results instead of accessing the columns. I've filed
https://github.com/brianfrankcooper/YCSB/issues/1087 for that.

C) I might be missing something, however my local (MacBook) benchmarks show
that PostgreSQL 9.6 somehow uses Limit -> Sort -> Bitmap Heap Scan kinds of
plans.
I have picked a "bad" userid value via auto_explain.
The JDBC client uses prepared statements, so a single bind might spoil the
whole thing, causing bad plans for all the values afterwards.
Does it make sense to disable bitmap scan somehow? (A quick session-level
test is sketched after the two plans below.)
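
To check the plan-cache angle, here is a minimal psql sketch (the statement
name is made up; table and column names are taken from the plans below).
PostgreSQL plans the first five executions of a prepared statement with
custom plans and may switch to a cached generic plan afterwards, which is
what a long-lived server-side prepared statement in the JDBC client would
keep reusing:

PREPARE ycsb_read(text) AS
  SELECT * FROM usertable WHERE YCSB_KEY >= $1 ORDER BY YCSB_KEY LIMIT 100;

-- First execution: planned with a custom plan for this particular value.
EXPLAIN (ANALYZE, BUFFERS) EXECUTE ycsb_read('user184845140610037639');

-- A few more executions to get past the custom-plan threshold.
EXECUTE ycsb_read('user184845140610037639');
EXECUTE ycsb_read('user184845140610037639');
EXECUTE ycsb_read('user184845140610037639');
EXECUTE ycsb_read('user184845140610037639');

-- Sixth execution: compare with the first plan; a switch to a generic plan
-- here would explain bad plans for all subsequent bind values.
EXPLAIN (ANALYZE, BUFFERS) EXECUTE ycsb_read('user184845140610037639');

DEALLOCATE ycsb_read;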

For instance, the same query gets two very different plans depending on the
bind value:

explain (analyze, buffers) select * From usertable where YCSB_KEY>='user884845140610037639' order by YCSB_KEY limit 100;
                                                               QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=320.99..321.24 rows=100 width=1033) (actual time=1.408..1.429 rows=100 loops=1)
   Buffers: shared hit=140
   ->  Sort  (cost=320.99..321.33 rows=135 width=1033) (actual time=1.407..1.419 rows=100 loops=1)
         Sort Key: ycsb_key
         Sort Method: quicksort  Memory: 361kB
         Buffers: shared hit=140
         ->  Bitmap Heap Scan on usertable  (cost=9.33..316.22 rows=135 width=1033) (actual time=0.186..0.285 rows=167 loops=1)
               Recheck Cond: ((ycsb_key)::text >= 'user884845140610037639'::text)
               Heap Blocks: exact=137
               Buffers: shared hit=140
               ->  Bitmap Index Scan on usertable_pkey  (cost=0.00..9.29 rows=135 width=0) (actual time=0.172..0.172 rows=167 loops=1)
                     Index Cond: ((ycsb_key)::text >= 'user884845140610037639'::text)
                     Buffers: shared hit=3
 Planning time: 0.099 ms
 Execution time: 1.460 ms

vs

explain (analyze, buffers) select * From usertable where YCSB_KEY>='user184845140610037639' order by YCSB_KEY limit 100;
                                                                 QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.28..89.12 rows=100 width=1033) (actual time=0.174..0.257 rows=100 loops=1)
   Buffers: shared hit=102
   ->  Index Scan using usertable_pkey on usertable  (cost=0.28..2154.59 rows=2425 width=1033) (actual time=0.173..0.246 rows=100 loops=1)
         Index Cond: ((ycsb_key)::text >= 'user184845140610037639'::text)
         Buffers: shared hit=102
 Planning time: 0.105 ms
 Execution time: 0.277 ms
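
As for disabling bitmap scans: a quick, session-level experiment (only to
confirm that the plan shape is the problem, not something to run in
production) could be:

SET enable_bitmapscan = off;

-- Re-run the "bad" bind value from the first example with bitmap scans
-- disabled for this session only.
EXPLAIN (ANALYZE, BUFFERS)
  SELECT * FROM usertable
  WHERE YCSB_KEY >= 'user884845140610037639'
  ORDER BY YCSB_KEY
  LIMIT 100;

RESET enable_bitmapscan;

If that turns the first example into the Limit -> Index Scan shape with
timings close to the second one, the bitmap plan choice itself is the likely
culprit.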

Vladimir
