Re: Speed up Clog Access by increasing CLOG buffers

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Speed up Clog Access by increasing CLOG buffers
Date: 2016-10-25 04:10:09
Lists: pgsql-hackers

On Mon, Oct 24, 2016 at 2:48 PM, Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> On Fri, Oct 21, 2016 at 7:57 AM, Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
>> On Thu, Oct 20, 2016 at 9:03 PM, Tomas Vondra
>> <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
>>> In the results you've posted on 10/12, you've mentioned a regression with 32
>>> clients, where you got 52k tps on master but only 48k tps with the patch (so
>>> ~10% difference). I have no idea what scale was used for those tests,
>> That test was with scale factor 300 on POWER 4 socket machine. I think
>> I need to repeat this test with multiple reading to confirm it was
>> regression or run to run variation. I will do that soon and post the
>> results.
> As promised, I have rerun my test (3 times), and I did not see any regression.

Thanks Tomas and Dilip for doing detailed performance tests for this
patch. I would like to summarise the performance testing results.

1. With an update-intensive workload, we are seeing gains from 23% to
192% at client counts >= 64 with the group_update patch [1].
2. With the tpc-b pgbench workload (at 1000 scale factor), we are
seeing gains from 12% to ~70% at client counts >= 64 [2]. Tests were
done on an 8-socket Intel machine.
3. With pgbench workloads (both simple-update and tpc-b at 300 scale
factor), we are seeing gains from 10% to > 50% at client counts >= 64
[3]. Tests were done on an 8-socket Intel machine.
4. To see why the patch only helps at higher client counts, we have
done wait event testing for various workloads [4], [5], and the
results indicate that at lower client counts the waits are mostly due
to transactionid or clientread. At client counts where contention on
CLOGControlLock is significant, this patch helps a lot to reduce that
contention. These tests were done on an 8-socket Intel machine and a
4-socket POWER machine.
5. With a pgbench workload (unlogged tables), we are seeing gains
from 15% to > 300% at client counts >= 72 [6].

Many more tests have been done for the proposed patches, where the
gains are either on similar lines as above or are neutral. We do see
regressions in some cases:

1. When the data doesn't fit in shared buffers, there is a regression
at some client counts [7], but on analysis it has been found that it
is mainly due to the shift in contention from CLOGControlLock to
WALWriteLock and/or other locks.
2. We do see in some cases that the granular_locking and
no_content_lock patches show a significant increase in contention on
CLOGControlLock. I have already shared my analysis of the same
upthread.

Attached is the latest group update clog patch.
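
For anyone who hasn't followed the earlier thread, the core idea of
the group update approach can be sketched roughly as below. This is a
simplified, hypothetical illustration, not the actual patch code; the
names (Proc, group_set_status, the stub lock functions) are invented
for the sketch, and the real patch makes followers sleep until the
leader has marked them done rather than returning immediately:

```c
#include <stdatomic.h>
#include <stddef.h>

/* Simplified sketch of the "group update" idea: instead of every
 * backend acquiring CLOGControlLock exclusively to set its own
 * transaction status, backends push themselves onto a lock-free
 * pending list.  The backend that finds the list empty becomes the
 * group leader, acquires the lock once, and applies the status
 * update for every queued member in a single pass. */

#define NXIDS 64

typedef struct Proc
{
    int          xid;    /* transaction whose status we want to set */
    struct Proc *next;   /* next member of the pending group */
} Proc;

static _Atomic(Proc *) group_head;
static int clog[NXIDS];          /* simulated CLOG: 1 = committed */

/* stand-ins for LWLockAcquire/LWLockRelease on CLOGControlLock */
static void clog_lock_acquire(void) { }
static void clog_lock_release(void) { }

static void
group_set_status(Proc *me)
{
    Proc *head = atomic_load(&group_head);

    /* add ourselves to the pending list with a CAS loop */
    do
    {
        me->next = head;
    } while (!atomic_compare_exchange_weak(&group_head, &head, me));

    if (head != NULL)
        return;             /* a leader already exists; the real
                             * patch sleeps here until the leader
                             * marks our update done */

    /* we are the leader: take the lock once for the whole group */
    clog_lock_acquire();
    for (Proc *p = atomic_exchange(&group_head, NULL); p; p = p->next)
        clog[p->xid] = 1;   /* set commit status for every member */
    clog_lock_release();
}
```

The effect is that N backends contending for the lock collapse into a
single lock acquisition covering N status updates, which is consistent
with the observation above that the benefit only shows up once
CLOGControlLock contention becomes significant.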

In the last commitfest, the patch was returned with feedback to
evaluate the cases where it can show a win, and I think the above
results indicate that the patch has significant benefit on various
workloads. What I think is pending at this stage is that either one
of the committers or the reviewers of this patch needs to provide
feedback on my analysis [8] for the cases where the patches are not
showing a win.


[1] -
[2] -
[3] -
[4] -
[5] -
[6] -
[7] -
[8] -

With Regards,
Amit Kapila.

Attachment Content-Type Size
group_update_clog_v9.patch application/octet-stream 15.6 KB
