Re: High CPU Load

From: Jérôme BENOIS <benois(at)argia-engineering(dot)fr>
To: Markus Schaber <schabi(at)logix-tt(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: High CPU Load
Date: 2006-09-18 14:44:05
Message-ID: 1158590646.5665.71.camel@localhost.localdomain
Lists: pgsql-performance

Hi Markus,

On Friday, 15 September 2006 at 11:43 +0200, Markus Schaber wrote:
> Hi, Jérôme,
>
> Jérôme BENOIS wrote:
>
> > max_connections = 512
>
> Do you really have that many concurrent connections? Then you should
> probably think about getting a larger machine.
>
> You will definitely want to play with the commit_delay and commit_siblings
> settings in that case, especially if you have a lot of write activity.
>
> > work_mem = 65536
> > effective_cache_size = 131072
>
> Hmm, 131072*8*1024 + 512*65536*1024 = 35433480192 - that's 33 Gig of
> memory you assume here, not counting OS usage, and the fact that certain
> queries can use up a multiple of work_mem.

Now I have 335 concurrent connections; I decreased the work_mem parameter
to 32768 and disabled Hyper-Threading in the BIOS. But the CPU load is
still very high.
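
Worked out with the new setting, Markus's worst-case arithmetic looks like
this -- a minimal Python sketch, assuming the simplification of one full
work_mem allocation per backend (real queries can use a multiple of it):

    # Memory the settings assume is available, per Markus's formula:
    # effective_cache_size * BLCKSZ + max_connections * work_mem
    BLCKSZ = 8 * 1024                    # default PostgreSQL block size
    effective_cache_size = 131072        # in blocks
    max_connections = 512

    def worst_case_gib(work_mem_kb):
        backends = max_connections * work_mem_kb * 1024
        cache = effective_cache_size * BLCKSZ
        return (backends + cache) / 1024.0 ** 3

    print(worst_case_gib(65536))   # old work_mem: ~33 GiB
    print(worst_case_gib(32768))   # new work_mem: ~17 GiB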

Tomorrow morning I plan to add 2 GB of RAM ... But I don't understand why
my database server performed well with the previous version of Postgres
and the same queries ...

> Even on a machine that big, I'd be inclined to dedicate more memory to
> caching, and less to the backends, unless specific needs dictate it. You
> could try to use sqlrelay or pgpool to cut down the number of backends
> you need.
I already use a database pool in my application, and when I decrease the
number of connections my application gets slower ;-(
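
(Just to illustrate the "cut down the number of backends" idea: a minimal
sketch of an application-side pool using psycopg2's built-in
ThreadedConnectionPool -- the limits and the DSN here are placeholders,
not what my application actually uses.)

    # Hypothetical pooling sketch: at most 50 backends are opened,
    # however many clients the application serves.
    from psycopg2.pool import ThreadedConnectionPool

    # min 5 / max 50 connections, placeholder DSN
    pool = ThreadedConnectionPool(5, 50, "dbname=mydb user=myuser")

    conn = pool.getconn()
    try:
        cur = conn.cursor()
        cur.execute("SELECT 1")
        print(cur.fetchone())
    finally:
        pool.putconn(conn)
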
>
> > My Server is Dual Xeon 3.06GHz
>
> For Xeons, there were rumours about "context switch storms" that kill
> performance.
I have disabled Hyper-Threading.
> > with 2 GB RAM and good SCSI disks.
>
> For 2 GB of RAM, you should cut down the number of concurrent backends.
>
> Does your machine go into swap?
No, there is no swap in use, and I cannot find any pgsql_tmp files in
$PG_DATA/base/...
>
> Markus
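
(For reference, the checks behind that answer, as a small Python sketch --
it assumes Linux /proc and a data directory under /var/lib/postgresql/data;
adjust the path to your own $PG_DATA.)

    import glob

    # Swap in use? (Linux-specific)
    for line in open("/proc/meminfo"):
        if line.startswith(("SwapTotal", "SwapFree")):
            print(line.strip())

    # Any pgsql_tmp spill files under base/?
    tmp_files = glob.glob("/var/lib/postgresql/data/base/*/pgsql_tmp/*")
    print("pgsql_tmp files found:", len(tmp_files))
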
--
Jérôme,

python -c "print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for
p in 'sioneb(at)gnireenigne-aigra(dot)rf'.split('@')])"
