From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Speed up Clog Access by increasing CLOG buffers
Date: 2016-09-14 16:04:55
Message-ID: CAFiTN-t-VKZTXUdOX_L_X4Nw6bXOX=Fbmm2Oq=PmD4KqCufHBQ@mail.gmail.com
Lists: pgsql-hackers
On Wed, Sep 14, 2016 at 8:59 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> Sure, but you're testing at *really* high client counts here. Almost
> nobody is going to benefit from a 5% improvement at 256 clients.
I agree with your point, but there is one more thing to consider here:
compared to head, we are gaining ~30% with both approaches.
So for comparing these two patches we can consider:
A. Other workloads (one possibility is below)
-> Load on CLogControlLock at commit (exclusive mode) + load on
CLogControlLock at transaction-status lookup (shared mode).
I think we can mix savepoints and updates (a sketch is below).
B. Simplicity of the patch (if both perform almost equally in all
practical scenarios).
C. Based on the algorithm, whichever seems to be the winner.
I will try to test these patches with other workloads...
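For A, a custom pgbench script along these lines might do; this is just
an untested sketch against the standard pgbench tables, and the number
of savepoints is arbitrary. Each savepoint assigns a subtransaction XID,
so commit has to set several CLOG status bits while holding
CLogControlLock in exclusive mode, while concurrent visibility checks
read the status with it in shared mode.

    \set aid random(1, 100000 * :scale)
    BEGIN;
    UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :aid;
    SAVEPOINT s1;
    UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :aid;
    SAVEPOINT s2;
    UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :aid;
    SAVEPOINT s3;
    UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :aid;
    END;

It could be run with something like
pgbench -M prepared -n -f savepoint_update.sql -c <clients> -j <threads> -T 300
(file name and duration are placeholders).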
> You
> need to test 64 clients and 32 clients and 16 clients and 8 clients
> and see what happens there. Those cases are a lot more likely than
> these stratospheric client counts.
I tested with 64 clients as well:
1. Compared to head, we are gaining ~15% with both patches.
2. But group lock vs. granular lock is almost the same.
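For the remaining client counts you mention, a sweep along these lines
should cover it (client counts as you suggested; duration and thread
counts below are only illustrative, not what I actually ran):

    for c in 8 16 32 64; do
        pgbench -M prepared -n -c $c -j $c -T 1800 postgres
    done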
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com