Re: Speed up Clog Access by increasing CLOG buffers

From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Speed up Clog Access by increasing CLOG buffers
Date: 2016-09-27 10:39:56
Message-ID: CAFiTN-tr_=25EQUFezKNRk=4N-V+D6WMxo7HWs9BMaNx7S3y6w@mail.gmail.com
Lists: pgsql-hackers

On Wed, Sep 21, 2016 at 8:47 AM, Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> Summary:
> --------------
> At 32 clients there is no gain; I think at this workload the CLOG lock is not a problem.
> At 64 clients we can see ~10% gain with simple update and ~5% with TPC-B.
> At 128 clients we can see > 50% gain.
>
> So far I have tested with synchronous_commit=off; later I can try
> with it on. I can also test at 80 clients; I think we will see some
> significant gain at this client count as well, but I haven't tested
> it yet.
>
> With the above results, what do we think? Should we continue our testing?

I have done further testing on the TPC-B workload to see how the
performance gain changes as the scale factor increases.

Again, at 32 clients there is no gain, but at 64 clients the gain is
12% and at 128 clients it is 75%. This shows that the improvement with
the group lock is larger at a higher scale factor (at scale factor 300
the gain was 5% at 64 clients and 50% at 128 clients).
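
To recap the approach for anyone joining the thread: the "group lock"
patch follows the same leader/follower pattern as ProcArrayGroupClearXid.
Instead of every backend queueing for CLogControlLock exclusively, a
backend pushes itself onto an atomic pending list; whichever backend
finds the list empty becomes the leader, takes the lock once, and writes
the status for the whole group. Below is only a minimal sketch of that
idea -- field and helper names (WaitForLeader, WakeMember,
SetXidStatusInClogPage) are illustrative, not the actual patch:

/* Sketch of group clog update; names illustrative, not the patch. */
void
GroupUpdateXidStatus(PGPROC *proc, TransactionId xid, XidStatus status)
{
    uint32      nextidx;

    /* Publish what this backend wants written into the clog. */
    proc->clogGroupMemberXid = xid;
    proc->clogGroupMemberXidStatus = status;

    /* Push ourselves onto the lock-free pending list with a CAS loop. */
    nextidx = pg_atomic_read_u32(&ProcGlobal->clogGroupFirst);
    do
    {
        pg_atomic_write_u32(&proc->clogGroupNext, nextidx);
    } while (!pg_atomic_compare_exchange_u32(&ProcGlobal->clogGroupFirst,
                                             &nextidx,
                                             (uint32) proc->pgprocno));

    if (nextidx != INVALID_PGPROCNO)
    {
        /* Not first in: another backend will act as leader for us. */
        WaitForLeader(proc);    /* sleep until the leader sets our status */
        return;
    }

    /* We are the leader: one lock acquisition covers the whole group. */
    LWLockAcquire(CLogControlLock, LW_EXCLUSIVE);
    nextidx = pg_atomic_exchange_u32(&ProcGlobal->clogGroupFirst,
                                     INVALID_PGPROCNO);
    while (nextidx != INVALID_PGPROCNO)
    {
        PGPROC     *member = &ProcGlobal->allProcs[nextidx];

        SetXidStatusInClogPage(member->clogGroupMemberXid,
                               member->clogGroupMemberXidStatus);
        /* Read the next link before waking; the member may reuse its
         * PGPROC. A real version would also skip waking the leader
         * itself, which is on this list too. */
        nextidx = pg_atomic_read_u32(&member->clogGroupNext);
        WakeMember(member);
    }
    LWLockRelease(CLogControlLock);
}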

8-socket machine (kernel 3.10)
10-minute runs (median of 3 runs)
synchronous_commit = off
scale factor = 1000
shared_buffers = 40GB

Test results (TPS):
-------------------

client     head      group lock
32         27496     27178
64         31275     35205
128        20656     34490

LWLOCK_STATS approx. block count on ClogControlLock ("lwlock main 11")
-----------------------------------------------------------------------
client     head      group lock
32         80000     60000
64         150000    100000
128        140000    70000

Note: These are approximate block counts; I have the detailed
LWLOCK_STATS results in case someone wants to look into them.
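
(For anyone who wants to reproduce these numbers: LWLOCK_STATS is a
compile-time option -- uncomment #define LWLOCK_STATS in
pg_config_manual.h, or add -DLWLOCK_STATS to CFLAGS -- and each backend
then prints per-lock counters to the server log when it exits. A line
for ClogControlLock looks roughly like this, with made-up counts for
illustration:

  PID 98765 lwlock main 11: shacq 410000 exacq 520000 blk 140000

The "blk" field counts how many times a backend had to sleep waiting
for the lock; that is the number summarized in the table above.)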

LWLOCK_STATS shows that the ClogControlLock block count is reduced by
25% at 32 clients, 33% at 64 clients, and 50% at 128 clients.

Conclusion:
1. I think both the LWLOCK_STATS data and the performance data show
that we get a significant contention reduction on ClogControlLock with
the patch.
2. They also show that although we are not seeing any performance gain
at 32 clients, there is still a contention reduction with the patch.

I am planning to do some more tests with a higher scale factor (3000 or more).

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
