
Re: CPU bound at 99%

From: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
To: Erik Jones <erik(at)myemma(dot)com>
Cc: Bryan Buecking <buecking(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: CPU bound at 99%
Date: 2008-04-22 16:36:35
Message-ID:
Lists: pgsql-performance
Erik Jones wrote:

>> max_connections = 2400
> That is WAY too high.  Get a real pooler, such as pgpool, and drop that 
> down to 1000 and test from there.  I see you mentioned 500 concurrent 
> connections.  Are each of those connections actually doing something?  
> My guess is that once you cut down on the number of actual connections 
> you'll find that each connection can get its work done faster and you'll 
> see that number drop significantly.

It's not an issue for me - I'm expecting *never* to top 100 concurrent 
connections, and many of those will be idle, with the usual load being 
closer to 30 connections. Big stuff ;-)

However, I'm curious about what an idle backend really costs.

On my system each backend has an RSS of about 3.8MB, and a psql process 
tends to be about 3.0MB. However, much of that will be shared library 
bindings etc. The real cost per psql instance and associated backend 
appears to be about 1.5MB (measured via the change in free system RAM 
with 10 connections). If I use a little Python program to generate 50 
connections, free system RAM drops by ~45MB and rises by the same amount 
when the Python process exits and the backends die, so the backends 
presumably use less than 1MB each of real unshared RAM.
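For the curious, the measurement above can be sketched roughly like this (a
sketch only: it assumes Linux's /proc/meminfo, the psycopg2 driver, and a
placeholder DSN, none of which are from the original test):

```python
# Sketch: open N idle connections and watch the change in free system RAM.
# Assumes Linux /proc/meminfo; psycopg2 and the DSN are placeholder choices.
import re
import time

def meminfo_kb(field, text=None):
    """Return a /proc/meminfo field (e.g. 'MemFree') in kB."""
    if text is None:
        with open("/proc/meminfo") as f:
            text = f.read()
    m = re.search(r"^%s:\s+(\d+) kB" % re.escape(field), text, re.M)
    return int(m.group(1))

def measure_idle_backends(n=50, dsn="dbname=postgres"):
    import psycopg2  # assumed driver, not named in the original post
    before = meminfo_kb("MemFree")
    conns = [psycopg2.connect(dsn) for _ in range(n)]
    time.sleep(1)  # let the backends settle
    after = meminfo_kb("MemFree")
    for c in conns:
        c.close()
    print("free RAM delta: %.1f MB for %d backends"
          % ((before - after) / 1024.0, n))
```

Calling measure_idle_backends(50) against a local cluster should reproduce
the ~45MB figure, give or take whatever else the box is doing; free-RAM
deltas are noisy, which is why the numbers above are only approximate.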

Presumably the backends will grow if they perform some significant 
queries and are then left idle. I haven't checked that.

At 1MB of RAM per backend that's not a trivial cost, but it's far from 
earth shattering, especially allowing for the OS swapping out backends 
that're idle for extended periods.
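One way to cross-check the unshared figure without relying on free-RAM
deltas is to sum the private mappings from a backend's /proc/<pid>/smaps,
since plain RSS counts shared library pages. A minimal sketch (Linux-only,
and treating Private_Clean + Private_Dirty as "truly private" is my
assumption):

```python
# Sketch: estimate a backend's unshared memory from /proc/<pid>/smaps.
# Summing Private_Clean + Private_Dirty is an assumed definition of
# "private"; plain RSS would also count pages shared with other processes.
import re

def private_kb(smaps_text):
    """Sum Private_Clean + Private_Dirty (kB) from smaps-format text."""
    total = 0
    for m in re.finditer(r"^Private_(?:Clean|Dirty):\s+(\d+) kB",
                         smaps_text, re.M):
        total += int(m.group(1))
    return total

def backend_private_kb(pid):
    with open("/proc/%d/smaps" % pid) as f:
        return private_kb(f.read())
```

Running backend_private_kb() on an idle backend's PID (from ps or
pg_stat_activity) gives a per-process figure that excludes the shared
library and shared-buffer pages inflating the RSS numbers quoted above.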

So ... what else does an idle backend cost? Is it reducing the amount of 
shared memory available for use on complex queries? Are there some lists 
PostgreSQL must scan for queries that get more expensive to examine as 
the number of backends rises? Are there locking costs?

Craig Ringer

