Re: CLOG contention, part 2

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: CLOG contention, part 2
Date: 2012-02-26 22:53:53
Message-ID: CA+TgmoacOFRSOqb5u=g_Zcs-_OwUY0Vrj4nCiTfkdStT8Ai2zw@mail.gmail.com
Lists: pgsql-hackers

On Sat, Feb 25, 2012 at 2:16 PM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> On Wed, Feb 8, 2012 at 11:26 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> Given that, I obviously cannot test this at this point,
>
> Patch with minor corrections attached here for further review.

All right, I will set up some benchmarks with this version, and also
review the code.

As a preliminary comment, Tom recently felt that it was useful to
reduce the minimum number of CLOG buffers from 8 to 4, to benefit very
small installations. So I'm guessing he'll object to an
across-the-board doubling of the amount of memory being used, since
that would effectively undo that change. It also makes it a bit hard
to compare apples to apples, since of course we expect that by using
more memory we can reduce the amount of CLOG contention. I think it's
really only meaningful to compare contention between implementations
that use approximately the same total amount of memory. It's true
that doubling the maximum number of buffers from 32 to 64 straight up
does degrade performance, but I believe that's because the buffer
lookup algorithm is just straight linear search, not because we can't
in general benefit from more buffers.
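
To be concrete about what I mean by "straight linear search": the
lookup is essentially shaped like the following. This is a simplified
sketch with invented names, not the actual slru.c code.

#include <stdbool.h>

typedef struct SlruSharedData
{
    int     num_slots;      /* number of buffer slots */
    int    *page_number;    /* page held by each slot */
    bool   *page_valid;     /* whether each slot is in use */
} SlruSharedData;

/*
 * Every lookup scans all the slots, so under this algorithm each
 * added buffer makes every lookup a little more expensive.
 */
static int
slru_find_buffer(SlruSharedData *shared, int pageno)
{
    int     slot;

    for (slot = 0; slot < shared->num_slots; slot++)
    {
        if (shared->page_valid[slot] &&
            shared->page_number[slot] == pageno)
            return slot;    /* hit */
    }
    return -1;              /* miss: caller must evict and read */
}

That's why I say the degradation at 64 buffers indicts the lookup
algorithm rather than the buffer count itself.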

> pgbench loads all the data in one go, then pretends the data got there
> one transaction at a time. So pgbench with no mods is actually the most
> theoretically unrealistic case imaginable. You have to run pgbench for 1
> million transactions before you even theoretically show any gain from
> this patch, and it would need to be a long test indeed before the
> averaged effect of the patch outweighed the zero contribution from the
> first million transactions.

Depends on the scale factor. At scale factor 100, pgbench_accounts
holds 10 million rows, so the first million transactions figure to
have replaced a sizeable percentage of them already. But I can use
your other patch to set up the run. Maybe scale factor 300 would be
good?
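
For what it's worth, a quick back-of-the-envelope check (my
arithmetic, nothing measured) says a million uniformly random
single-row updates should touch roughly 10% of 10 million rows:

/* compile with: cc est.c -lm */
#include <math.h>
#include <stdio.h>

int
main(void)
{
    double rows = 100 * 100000.0;   /* scale factor 100: 10M accounts rows */
    double xacts = 1000000.0;       /* the first million transactions */

    /* each row survives one update with probability 1 - 1/rows */
    double frac = 1.0 - pow(1.0 - 1.0 / rows, xacts);

    printf("%.1f%% of rows updated at least once\n", frac * 100.0);
    return 0;
}

That prints about 9.5%: sizeable, but far from a full replacement.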

>> However, there is a potential fly in the ointment: in other cases in
>> which we've reduced contention at the LWLock layer, we've ended up
>> with very nasty contention at the spinlock layer that can sometimes
>> eat more CPU time than the LWLock contention did.  In that light, it
>> strikes me that it would be nice to be able to partition the
>> contention N ways rather than just 2 ways.  I think we could do that
>> as follows.  Instead of having one control lock per SLRU, have N
>> locks, where N is probably a power of 2.  Divide the buffer pool for
>> the SLRU N ways, and decree that each slice of the buffer pool is
>> controlled by one of the N locks.  Route all requests for a page P to
>> slice P mod N.  Unlike this approach, that wouldn't completely
>> eliminate contention at the LWLock level, but it would reduce it in
>> proportion to the number of partitions, and it would reduce spinlock
>> contention by a similar factor.  A down side
>> is that you'll need more buffers to get the same hit rate, but this
>> proposal has the same problem: it doubles the amount of memory
>> allocated for CLOG.  Of course, this approach is all vaporware right
>> now, so it's anybody's guess whether it would be better than this if
>> we had code for it.  I'm just throwing it out there.
>
> We've already discussed that, and my patch for it has already been
> ruled out by us for this CF.

I'm not aware that anybody's coded up the approach I'm talking about.
You've proposed splitting this up a couple of ways, but AFAICT they
all boil down to splitting up CLOG into multiple SLRUs, whereas what
I'm talking about is to have just a single SLRU, but with multiple
control locks. I feel that approach is a bit more flexible, because
it could be applied to any SLRU, not just CLOG. But I haven't coded
it, let alone tested it, so I might be all wet.
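
Roughly, the shape I'm imagining is something like this. It's just a
sketch to illustrate the routing, not tested or even compiled against
the real tree, and the names and the LWLock stand-in are invented:

#define SLRU_NUM_PARTITIONS 4       /* N, assumed to be a power of 2 */

typedef struct LWLock LWLock;       /* opaque stand-in for this sketch */

typedef struct SlruCtlData
{
    LWLock *control_locks[SLRU_NUM_PARTITIONS]; /* one lock per slice */
} SlruCtlData;

/*
 * Route page P to slice P mod N; with N a power of 2 the modulo is
 * just a mask.  Two backends contend on the same control lock only
 * when their pages fall in the same slice.
 */
static inline LWLock *
slru_partition_lock(SlruCtlData *ctl, int pageno)
{
    return ctl->control_locks[pageno & (SLRU_NUM_PARTITIONS - 1)];
}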

> I agree with you that we should further analyse CLOG contention in
> following releases but that is not an argument against making this
> change now.

No, but the fact that this approach is completely untested, or at
least that no test results have been posted, is an argument against
it. Assuming this version compiles and works, I'll try to see what I
can do about bridging that gap.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
