Re: CLOG contention, part 2

From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: CLOG contention, part 2
Date: 2012-02-27 09:03:14
Message-ID: CA+U5nMJ0hNbQjZ=C+LfL3kp7eGdTYN7WcWh+=EkfY2GVQP1eUA@mail.gmail.com
Lists: pgsql-hackers

On Sun, Feb 26, 2012 at 10:53 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Sat, Feb 25, 2012 at 2:16 PM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>> On Wed, Feb 8, 2012 at 11:26 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>>> Given that, I obviously cannot test this at this point,
>>
>> Patch with minor corrections attached here for further review.
>
> All right, I will set up some benchmarks with this version, and also
> review the code.

Thanks.

> As a preliminary comment, Tom recently felt that it was useful to
> reduce the minimum number of CLOG buffers from 8 to 4, to benefit very
> small installations.  So I'm guessing he'll object to an
> across-the-board doubling of the amount of memory being used, since
> that would effectively undo that change.  It also makes it a bit hard
> to compare apples to apples, since of course we expect that by using
> more memory we can reduce the amount of CLOG contention.  I think it's
> really only meaningful to compare contention between implementations
> that use approximately the same total amount of memory.  It's true
> that doubling the maximum number of buffers from 32 to 64 straight up
> does degrade performance, but I believe that's because the buffer
> lookup algorithm is just straight linear search, not because we can't
> in general benefit from more buffers.

I'm happy if you want to benchmark this against simply increasing the
number of CLOG buffers. We expect downsides to that, but it is worth
testing nonetheless.
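
To make the lookup-cost point concrete: the buffer lookup Robert is
referring to scans the slots one by one, so doubling the slot count
roughly doubles the cost of every lookup, whether or not the extra
buffers reduce misses. A simplified sketch, not the actual slru.c code
(the function and variable names here are made up for illustration):

    /*
     * Illustrative only: scan every buffer slot looking for the page.
     * The cost of this loop is linear in num_slots, which is why simply
     * raising the buffer count can hurt until the lookup is made smarter.
     */
    static int
    lookup_clog_page(const int *slot_page_numbers, int num_slots,
                     int target_page)
    {
        int     slotno;

        for (slotno = 0; slotno < num_slots; slotno++)
        {
            if (slot_page_numbers[slotno] == target_page)
                return slotno;      /* hit: page already in memory */
        }
        return -1;                  /* miss: caller must read the page in */
    }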

>> pgbench loads all the data in one go, then pretends the data got there
>> one transaction at a time. So unmodified pgbench is theoretically the
>> most unrealistic workload imaginable. You have to run pgbench for 1
>> million transactions before you even theoretically show any gain from
>> this patch, and it would need to be a long test indeed before the
>> averaged effect of the patch was large enough to outweigh the zero
>> contribution from the first million transactions.
>
> Depends on the scale factor.  At scale factor 100, the first million
> transactions figure to have replaced a sizeable percentage of the rows
> already.  But I can use your other patch to set up the run.  Maybe
> scale factor 300 would be good?

Clearly, if the test induces too much I/O, the results will be swamped
by it. The patch is aimed at people with bigger databases and lots of
RAM, which is many, many people because RAM is cheap.

So please use a scale factor that the hardware can cope with.
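
For example, something along these lines would be a reasonable starting
point on a machine with plenty of RAM (the numbers are illustrative
only; pick the scale, client count and duration to suit the hardware):

    pgbench -i -s 300 pgbench                        # ~4-5 GB of data at scale 300
    pgbench -c 32 -j 32 -M prepared -T 1800 pgbench  # 30-minute run

A run of that length gets well past the first million transactions, so
the averaged effect of the patch has a chance to show up in the numbers.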

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
