Re: Scaling shared buffer eviction

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Andres Freund <andres(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Scaling shared buffer eviction
Date: 2014-10-10 06:58:13
Message-ID: CAA4eK1KGipU-ufOBMs7mj3a_JcMk4PvuQfL+wqnZpYmgn1hm4A@mail.gmail.com
Lists: pgsql-hackers

On Fri, Oct 10, 2014 at 1:08 AM, Andres Freund <andres(at)2ndquadrant(dot)com>
wrote:
> On 2014-10-09 16:01:55 +0200, Andres Freund wrote:
> >
> > I don't think OLTP really is the best test case for this. Especially not
> > pgbench with relatively small rows *and* a uniform distribution of
> > access.
> >
> > Try parallel COPY TO. Batch write loads are where I've seen this hurt
> > badly.
>
>
> just by switching shared_buffers from 1 to 8GB. I haven't tried, but I
> hope that with an approach like yours this might become better.
>
> psql -f /tmp/prepare.sql
> pgbench -P5 -n -f /tmp/copy.sql -c 8 -j 8 -T 100

Thanks for providing the scripts. You haven't specified how much data
is present in the 'large' file used in the tests. I have tried with
different sets of rows, but I could not see the dip that is present in
your data when you increased shared buffers from 1GB to 8GB; I also don't
see any difference with the patch. BTW, why do you think this patch can
be helpful for such workloads? According to my understanding, it is mainly
helpful for read-mostly workloads where all the data doesn't fit in
shared buffers.

Performance Data
-----------------------------------
IBM POWER-8 24 cores, 192 hardware threads
RAM = 492GB

For 500000 rows
----------------------------
Data populated using the below statement:
insert into largedata_64
values ('aaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbb',generate_series(1,500000));
copy largedata_64 to '/tmp/large' binary;
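For anyone trying to reproduce this, the pieces can be tied together as
below. The table definition and the contents of copy.sql are assumptions
on my part (only the populate and dump statements appear in this mail):

```shell
# Sketch of the benchmark setup; table definition and copy.sql contents
# are assumptions, only the populate/dump statements come from the mail.
cat > /tmp/prepare.sql <<'EOF'
drop table if exists largedata_64;
create table largedata_64 (val text, id int);
insert into largedata_64
values ('aaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbb', generate_series(1,500000));
copy largedata_64 to '/tmp/large' binary;
truncate largedata_64;
EOF

# Each pgbench transaction re-loads the dumped file, so the run is a
# pure batch-write (COPY FROM) workload:
cat > /tmp/copy.sql <<'EOF'
copy largedata_64 from '/tmp/large' binary;
EOF

# Then, against a running server:
#   psql -f /tmp/prepare.sql
#   pgbench -P5 -n -f /tmp/copy.sql -c 8 -j 8 -T 100
```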

pgbench -P5 -n -f /tmp/copy.sql -c 8 -j 8 -T 100

shared_buffers - 1GB
---------------------------------
progress: 7.0 s, 2.7 tps, lat 2326.645 ms stddev 173.506
progress: 11.5 s, 3.5 tps, lat 2295.577 ms stddev 78.949
progress: 15.8 s, 3.7 tps, lat 2298.217 ms stddev 223.346
progress: 20.4 s, 3.5 tps, lat 2350.187 ms stddev 192.312
progress: 25.1 s, 3.4 tps, lat 2280.206 ms stddev 54.580
progress: 31.9 s, 3.4 tps, lat 2408.593 ms stddev 243.230
progress: 45.2 s, 1.1 tps, lat 5120.151 ms stddev 3913.561
progress: 50.5 s, 1.3 tps, lat 8967.954 ms stddev 3384.229
progress: 52.7 s, 2.7 tps, lat 3883.788 ms stddev 1733.293
progress: 55.6 s, 3.2 tps, lat 2684.282 ms stddev 348.615
progress: 58.2 s, 3.4 tps, lat 2602.355 ms stddev 268.718
progress: 60.8 s, 3.1 tps, lat 2361.937 ms stddev 302.643
progress: 65.3 s, 3.5 tps, lat 2341.903 ms stddev 162.338
progress: 74.1 s, 2.6 tps, lat 2720.182 ms stddev 716.425
progress: 76.4 s, 3.5 tps, lat 3023.234 ms stddev 670.473
progress: 80.4 s, 2.0 tps, lat 2795.323 ms stddev 820.429
progress: 85.6 s, 1.9 tps, lat 4756.217 ms stddev 844.284
progress: 91.9 s, 2.2 tps, lat 3996.001 ms stddev 1301.143
progress: 96.6 s, 3.5 tps, lat 2284.419 ms stddev 85.013
progress: 101.1 s, 3.5 tps, lat 2282.848 ms stddev 71.388
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 8
number of threads: 8
duration: 100 s
number of transactions actually processed: 275
latency average: 2939.784 ms
latency stddev: 1739.974 ms
tps = 2.710138 (including connections establishing)
tps = 2.710208 (excluding connections establishing)

shared_buffers - 8GB
------------------------------------
progress: 6.7 s, 2.7 tps, lat 2349.816 ms stddev 212.889
progress: 11.0 s, 3.5 tps, lat 2257.364 ms stddev 141.148
progress: 15.2 s, 3.8 tps, lat 2209.669 ms stddev 127.101
progress: 21.7 s, 3.7 tps, lat 2159.838 ms stddev 92.205
progress: 25.8 s, 3.9 tps, lat 2221.072 ms stddev 283.362
progress: 30.1 s, 3.5 tps, lat 2179.611 ms stddev 152.741
progress: 39.3 s, 2.1 tps, lat 2768.609 ms stddev 1265.508
progress: 50.9 s, 1.1 tps, lat 9361.388 ms stddev 2657.885
progress: 52.9 s, 1.0 tps, lat 2036.098 ms stddev 3.599
progress: 55.2 s, 4.3 tps, lat 2167.688 ms stddev 91.183
progress: 57.6 s, 3.0 tps, lat 2399.219 ms stddev 173.535
progress: 60.2 s, 4.1 tps, lat 2427.273 ms stddev 198.698
progress: 65.2 s, 3.4 tps, lat 2441.630 ms stddev 123.773
progress: 72.4 s, 2.9 tps, lat 2534.631 ms stddev 254.162
progress: 75.0 s, 3.9 tps, lat 2468.266 ms stddev 221.969
progress: 82.3 s, 3.0 tps, lat 2548.690 ms stddev 404.852
progress: 86.7 s, 1.4 tps, lat 3980.576 ms stddev 1205.743
progress: 92.5 s, 1.4 tps, lat 5174.340 ms stddev 643.415
progress: 97.1 s, 3.7 tps, lat 3252.847 ms stddev 1689.268
progress: 101.8 s, 3.4 tps, lat 2346.690 ms stddev 138.251
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 8
number of threads: 8
duration: 100 s
number of transactions actually processed: 284
latency average: 2856.195 ms
latency stddev: 1740.699 ms
tps = 2.781603 (including connections establishing)
tps = 2.781682 (excluding connections establishing)

For 5000 rows
------------------------
shared_buffers - 1GB
-----------------------------------
progress: 5.0 s, 357.7 tps, lat 22.295 ms stddev 3.511
progress: 10.0 s, 339.0 tps, lat 23.606 ms stddev 4.388
progress: 15.0 s, 323.4 tps, lat 24.733 ms stddev 5.001
progress: 20.0 s, 329.6 tps, lat 24.258 ms stddev 4.407
progress: 25.0 s, 334.3 tps, lat 23.963 ms stddev 4.126
progress: 30.0 s, 337.5 tps, lat 23.699 ms stddev 3.492
progress: 35.2 s, 158.6 tps, lat 37.182 ms stddev 189.946
progress: 40.3 s, 3.9 tps, lat 2587.129 ms stddev 762.231
progress: 45.4 s, 2.4 tps, lat 2525.946 ms stddev 942.428
progress: 50.0 s, 303.7 tps, lat 33.719 ms stddev 137.524
progress: 55.0 s, 331.5 tps, lat 24.122 ms stddev 3.806
progress: 60.0 s, 333.2 tps, lat 24.028 ms stddev 3.340
progress: 65.0 s, 336.1 tps, lat 23.802 ms stddev 3.601
progress: 70.0 s, 209.0 tps, lat 38.263 ms stddev 120.198
progress: 75.2 s, 141.2 tps, lat 54.350 ms stddev 168.274
progress: 80.0 s, 331.0 tps, lat 25.262 ms stddev 31.637
progress: 86.0 s, 10.9 tps, lat 721.991 ms stddev 750.484
progress: 90.0 s, 85.3 tps, lat 95.531 ms stddev 411.560
progress: 95.0 s, 318.0 tps, lat 25.152 ms stddev 3.985
progress: 100.5 s, 241.1 tps, lat 33.061 ms stddev 83.705
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 8
number of threads: 8
duration: 100 s
number of transactions actually processed: 24068
latency average: 33.400 ms
latency stddev: 138.159 ms
tps = 239.444193 (including connections establishing)
tps = 239.450511 (excluding connections establishing)

shared_buffers - 8GB
---------------------------------
progress: 5.0 s, 339.3 tps, lat 23.514 ms stddev 3.853
progress: 10.0 s, 332.7 tps, lat 24.033 ms stddev 3.850
progress: 15.0 s, 329.7 tps, lat 24.290 ms stddev 3.236
progress: 20.0 s, 323.7 tps, lat 24.718 ms stddev 3.639
progress: 25.0 s, 338.0 tps, lat 23.650 ms stddev 2.916
progress: 30.0 s, 324.0 tps, lat 24.721 ms stddev 3.365
progress: 36.1 s, 56.7 tps, lat 127.433 ms stddev 530.344
progress: 41.0 s, 3.2 tps, lat 2393.639 ms stddev 469.533
progress: 45.0 s, 91.8 tps, lat 104.049 ms stddev 418.744
progress: 50.0 s, 331.4 tps, lat 24.143 ms stddev 2.398
progress: 55.0 s, 332.7 tps, lat 24.067 ms stddev 2.810
progress: 60.0 s, 331.1 tps, lat 24.136 ms stddev 3.449
progress: 65.0 s, 304.0 tps, lat 26.332 ms stddev 33.693
progress: 70.9 s, 227.6 tps, lat 34.153 ms stddev 121.504
progress: 75.0 s, 295.1 tps, lat 28.236 ms stddev 52.897
progress: 82.2 s, 44.0 tps, lat 160.993 ms stddev 632.587
progress: 85.0 s, 85.3 tps, lat 121.065 ms stddev 432.011
progress: 90.0 s, 325.9 tps, lat 24.581 ms stddev 2.545
progress: 95.0 s, 333.4 tps, lat 23.989 ms stddev 1.709
progress: 100.0 s, 292.2 tps, lat 27.330 ms stddev 41.678
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 8
number of threads: 8
duration: 100 s
number of transactions actually processed: 25039
latency average: 31.955 ms
latency stddev: 136.304 ms
tps = 250.328882 (including connections establishing)
tps = 250.335432 (excluding connections establishing)

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
