
Re: High CPU Utilization

From: Joe Uhl <joeuhl(at)gmail(dot)com>
To: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Cc: Greg Smith <gsmith(at)gregsmith(dot)com>, Gregory Stark <stark(at)enterprisedb(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: High CPU Utilization
Date: 2009-03-20 21:16:47
Message-ID: 5076278C-6198-4E3B-9402-C34B1EB45A90@gmail.com
Lists: pgsql-performance
On Mar 20, 2009, at 4:58 PM, Scott Marlowe wrote:

> On Fri, Mar 20, 2009 at 2:49 PM, Joe Uhl <joeuhl(at)gmail(dot)com> wrote:
>>
>> On Mar 20, 2009, at 4:29 PM, Scott Marlowe wrote:
>
>>> What does the cs entry on vmstat say at this time?  If your cs is
>>> skyrocketing then you're getting a context switch storm, which is
>>> usually a sign that there are just too many things going on at once,
>>> or that you've got an old kernel; things like that.
>>
>> cs column (plus cpu columns) of vmstat 1 30 reads as follows:
>>
>> cs    us  sy id wa
>> 11172 95  4  1  0
>> 12498 94  5  1  0
>> 14121 91  7  1  1
>> 11310 90  7  1  1
>> 12918 92  6  1  1
>> 10613 93  6  1  1
>> 9382  94  4  1  1
>> 14023 89  8  2  1
>> 10138 92  6  1  1
>> 11932 94  4  1  1
>> 15948 93  5  2  1
>> 12919 92  5  3  1
>> 10879 93  4  2  1
>> 14014 94  5  1  1
>> 9083  92  6  2  0
>> 11178 94  4  2  0
>> 10717 94  5  1  0
>> 9279  97  2  1  0
>> 12673 94  5  1  0
>> 8058  82 17  1  1
>> 8150  94  5  1  1
>> 11334 93  6  0  0
>> 13884 91  8  1  0
>> 10159 92  7  0  0
>> 9382  96  4  0  0
>> 11450 95  4  1  0
>> 11947 96  3  1  0
>> 8616  95  4  1  0
>> 10717 95  3  1  0
>>
>> We are running on the 2.6.28.7-2 kernel.  I am unfamiliar with vmstat
>> output, but reading the man page (which says cs = "context switches per
>> second") makes my numbers seem very high.
>
> No, those aren't really all that high.  If you were hitting cs
> contention, I'd expect it to be in the 25k to 100k range.  <10k
> average under load is pretty reasonable.
>
>> Our JDBC pools currently top out at 400 connections total (and we are
>> doing work on all 400 right now).  I may try dropping those pools down
>> even smaller.  Are there any general rules of thumb for figuring out
>> how many connections you should service at maximum?  I know of the
>> memory constraints, but I'm thinking more along the lines of
>> connections per CPU core.
>
> Well, maximum efficiency is usually somewhere in the range of 1 to 2
> times the number of cores you have, so trying to get the pool down to
> a dozen or two connections would be the direction to generally head.
> May not be reasonable or doable though.

Thanks for the info.  Figure I can tune our pools down and monitor
throughput/CPU/IO and look for a sweet spot with our existing hardware.
Just wanted to see if tuning connections down could potentially help.
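To compare runs while stepping the pools down, one option is to capture vmstat during load and average the cs and cpu columns per capture.  A minimal sketch, assuming Linux procps vmstat output (cs in column 12, us in column 13, two header lines); the function name and log file name are hypothetical:

```shell
# avg_vmstat: average the cs (context switches) and us (user CPU) columns
# of a saved "vmstat 1 N" capture.  vmstat prints two header lines, so
# skip records with NR <= 2.
avg_vmstat() {
    awk 'NR > 2 { cs += $12; us += $13; n++ }
         END { if (n) printf "avg cs=%d avg us=%d%%\n", cs / n, us / n }' "$1"
}

# Example: capture 30 one-second samples under load, then summarize:
#   vmstat 1 30 > pool400.log
#   avg_vmstat pool400.log
```

Re-capturing after each pool-size change gives one comparable cs/us figure per configuration instead of eyeballing 30 rows.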

I feel as though we are going to have to replicate this DB before too
long.  We've got an almost identical server doing nothing but PITR, with
8 CPU cores mostly idle, that could be better spent.  Our pgfouine
reports, though only logging queries that take over 1 second, show 90%
reads.

I have heard much about Slony, but has anyone used the newer version of
Mammoth Replicator (which looks to be called PostgreSQL + Replication
now) on 8.3?  From the documentation it appears to be easier to set up
and less invasive, but I struggle to find usage information/stories
online.

