Re: CLOG contention

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: CLOG contention
Date: 2012-01-06 15:48:20
Message-ID: CA+TgmobD76FgkTDo6XDg8D1ysCw887uWFk0LJ8u3850x6LQqEw@mail.gmail.com
Lists: pgsql-hackers

On Thu, Jan 5, 2012 at 5:34 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> On Thu, Jan 5, 2012 at 2:57 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>> I would be in favor of that, or perhaps some other formula (eg, maybe
>>> the minimum should be less than 8 for when you've got very little shmem).
>
>> I have some results that show that, under the right set of
>> circumstances, 8->32 is a win, and I can quantify by how much it wins.
>>  I don't have any data at all to quantify the cost of dropping the
>> minimum from 8->6, or from 8->4, and therefore I'm reluctant to do it.
>>  My guess is that it's a bad idea, anyway.  Even on a system where
>> shared_buffers is just 8MB, we have 1024 regular buffers and 8 CLOG
>> buffers.  If we reduce the number of CLOG buffers from 8 to 4 (i.e. by
>> 50%), we can increase the number of regular buffers from 1024 to 1028
>> (i.e. by <0.5%).  Maybe you can find a case where that comes out to a
>> win, but you might have to look pretty hard.
>
> I think you're rejecting the concept too easily.  A setup with very
> little shmem is only going to be suitable for low-velocity systems that
> are not pushing too many transactions through per second, so it's not
> likely to need so many CLOG buffers.

Well, if you take the same workload and spread it out over a long
period of time, it will still have just as many CLOG misses or
shared_buffers misses as it had when you did it all at top speed.
Admittedly, you're unlikely to run into the situation where you have
more people wanting to do simultaneous CLOG reads than there are buffers,
but you'll still thrash the cache.

> And frankly I'm not that concerned
> about what the performance is like: I'm more concerned about whether
> PG will start up at all without modifying the system shmem limits,
> on systems with legacy values for SHMMAX etc.

After thinking about this a bit, I think the problem is that the
divisor we picked is still too high. Suppose we set num_clog_buffers
= (shared_buffers / 4MB), with a minimum of 4 and maximum of 32. That
way, pretty much anyone who bothers to set shared_buffers to a
non-default value (128MB or more) will get 32 CLOG buffers, which should
be fine, but people in the 32MB-or-less range can ramp down lower than what
we've allowed in the past. That seems like it might give us the best
of both worlds.
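
To make that concrete, here is a rough sketch of the clamp I'm
imagining, written in terms of NBuffers (shared_buffers counted in 8kB
pages, so NBuffers / 512 is the same as shared_buffers / 4MB); the
function name is just for illustration:

    /*
     * Illustrative sketch, not a patch: derive the number of CLOG buffers
     * from shared_buffers.  nbuffers is shared_buffers in 8kB pages, so
     * nbuffers / 512 gives one CLOG buffer per 4MB of shared_buffers.
     */
    static int
    choose_num_clog_buffers(int nbuffers)
    {
        int     n = nbuffers / 512;

        if (n < 4)
            n = 4;          /* never fewer than 4 */
        if (n > 32)
            n = 32;         /* 32 is plenty even for busy systems */
        return n;
    }

Under that formula, shared_buffers = 8MB yields 4 buffers, 32MB yields
8 (the old minimum), and anything from 128MB up gets the full 32.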

> Shaving a few
> single-purpose buffers to make back what we spent on SSI, for example,
> seems like a good idea to me.

I think if we want to buy back that memory, the best way to do it
would be to add a GUC to disable SSI at startup time.
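
Just to sketch the shape of that idea (the GUC name and where the
check would go are hypothetical, not a worked-out patch): a
PGC_POSTMASTER boolean that, when turned off, makes the predicate-lock
code report zero for its shared memory reservation, roughly:

    /*
     * Hypothetical sketch only -- names are illustrative.  A startup-time
     * boolean GUC; when it's off, SSI's shared memory request drops to zero.
     */
    bool    enable_ssi = true;      /* settable only at postmaster start */

    size_t
    ssi_shmem_size(size_t normal_size)
    {
        if (!enable_ssi)
            return 0;               /* reserve nothing when SSI is disabled */
        return normal_size;         /* otherwise, whatever we compute today */
    }

That wouldn't change anything for people who leave it on, but it would
let memory-constrained installs buy back the SSI shared memory
footprint.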

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
