
Re: Speed up Clog Access by increasing CLOG buffers

From: Andres Freund <andres(at)anarazel(dot)de>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Speed up Clog Access by increasing CLOG buffers
Date: 2016-03-30 23:09:14
Message-ID:
Lists: pgsql-hackers
On 2016-03-28 22:50:49 +0530, Amit Kapila wrote:
> On Fri, Sep 11, 2015 at 8:01 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
> wrote:
> >
> > On Thu, Sep 3, 2015 at 5:11 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> > >
> >
> > Updated comments and the patch (increate_clog_bufs_v2.patch)
> > containing the same is attached.
> >
> Andres mentioned to me in off-list discussion, that he thinks we should
> first try to fix the clog buffers problem as he sees in his tests that clog
> buffer replacement is one of the bottlenecks. He also suggested me a test
> to see if the increase in buffers could lead to regression.  The basic idea
> of the test was to ensure that every CLOG access is a disk access.  Based
> on his suggestion, I have written a SQL statement which forces every CLOG
> access to go to disk; the query used for the same is as below:
> WITH ins AS (INSERT INTO test_clog_access VALUES (default) RETURNING c1)
> SELECT * FROM test_clog_access
>   WHERE c1 = (SELECT c1 FROM ins) - 32768 * :client_id;
> Test Results
> ---------------------
> HEAD    - commit d12e5bb7, CLOG buffers = 32
> Patch-1 - CLOG buffers = 64
> Patch-2 - CLOG buffers = 128
>
> Patch_Ver/Client_Count      1      64
> HEAD                    12677   57470
> Patch-1                 12305   58079
> Patch-2                 12761   58637
> Each of the above data points is a median of 3 10-min runs.  The data
> indicates that there is no substantial dip from increasing CLOG buffers.
> Test scripts used in testing are attached with this mail.  In
> , you need to change the data_directory path as per your
> machine; also, you might want to change the binary name if you want to
> create postgres binaries with different names.
> Andres, is this test in line with what you have in mind?

Yes, that looks good. My testing shows that increasing the number of
buffers can both increase throughput and reduce latency variance. The
former is a smaller effect with one of the discussed patches applied;
the latter effect seems to actually increase in scale (with increased ...

I've attached patches to:
0001: Increase the max number of clog buffers
0002: Implement 64bit atomics fallback and optimize read/write
0003: Edited version of Simon's clog scalability patch

WRT 0003 - still clearly WIP - I've:
- made group_lsn pg_atomic_u64*, to allow for tear-free reads
- split content from IO lock
- made SimpleLruReadPage_optShared always return with only share lock
- implemented a different, experimental concurrency model for
  SetStatusBit using cmpxchg; a define USE_CONTENT_LOCK controls which
  of the two approaches is used.

I've tested this and saw it outperform Amit's approach, especially
when using a read/write mix rather than only reads. I saw an over 30%
increase on a large EC2 instance with -btpcb-like(at)1 -bselect-only(at)3(dot) But
that's in a virtualized environment, which is not very good for reproducibility.

Amit, could you run benchmarks on your bigger hardware? Both with
USE_CONTENT_LOCK commented out and in?

I think we should go for 1) and 2) unconditionally, and then evaluate
whether to go with your approach, or 3) from above. If the latter, we
have some cleanup to do :)


Andres Freund

Attachment: 0001-Improve-64bit-atomics-support.patch
Description: text/x-patch (10.4 KB)
Attachment: 0002-Increase-max-number-of-buffers-in-clog-SLRU-to-128.patch
Description: text/x-patch (826 bytes)
Attachment: 0003-Use-a-much-more-granular-locking-model-for-the-clog-.patch
Description: text/x-patch (17.7 KB)

