
Re: performance config help

From: Pierre Frédéric Caillaud <lists(at)peufeu(dot)com>
To: "Bob Dusek" <redusek(at)gmail(dot)com>, "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: performance config help
Date: 2010-01-14 15:59:15
Message-ID: op.u6i5c1oqcke6l8@soyouz
Lists: pgsql-performance
> So, pgBouncer is pretty good.  It doesn't appear to be as good as
> limiting TCON and using pconnect, but since we can't limit TCON in a
> production environment, we may not have a choice.

	Actually, you can: use lighttpd and php/fastcgi.

	Lighttpd handles the network stuff, and funnels/queues any number of  
client connections into a limited number of PHP fastcgi processes. You can  
configure this process pool to your tastes.

	Rather than instantiating 1 PHP interpreter (and 1 postgres backend) per  
client connection, you can set it up for a max of N PHP procs. If PHP waits  
a lot on IO (you use url fopen, that kind of thing) you can set N=5..10 per  
core, but if you don't use that, N=2-3 per core is good. It needs to be  
tuned to your application's needs.
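	As a rough sketch of such a pool (the paths and the pool size here are  
assumptions for illustration, e.g. N=2 per core on a 4-core box, not values  
from your setup), a lighttpd fastcgi section could look like:

```
# lighttpd.conf -- hypothetical fixed-size PHP worker pool
server.modules += ( "mod_fastcgi" )

fastcgi.server = ( ".php" => ((
    "bin-path"  => "/usr/bin/php-cgi",
    "socket"    => "/tmp/php-fastcgi.socket",
    "max-procs" => 1,
    "bin-environment" => (
        "PHP_FCGI_CHILDREN"     => "8",    # the process pool size N
        "PHP_FCGI_MAX_REQUESTS" => "1000"  # recycle workers periodically
    )
)))
```

	Extra client connections beyond the 8 workers simply queue in lighttpd  
instead of spawning more PHP (and postgres) processes.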

	The idea is that once you have enough processes to keep your CPU busy,  
adding more will just fill your RAM, thrash your CPU cache, add more  
context switches, and generally lower your total throughput. The same is  
true for Postgres, too.

	I've switched from apache to lighttpd on a rather busy community site and  
the difference in performance and memory usage was quite noticeable.  
Also, this site used MySQL (argh) so the occasional locking on some MyISAM  
tables would become really itchy unless the number of concurrent processes  
was kept to a manageable level.

	When you bring down your number of postgres processes to some manageable  
level (plot a curve of throughput versus processes and select the  
maximum), if postgres still spends idle time waiting for locks, you'll  
need to do some exploration:

	- use the lock view facility in postgres
	- check your triggers: are you using a trigger that updates a count  
as rows are modified? This can be a point of contention.
	- check your FKs too.
	- try fsync=off (for testing only; it is unsafe for real data)
	- try to put the WAL and tables on a ramdisk.
	If you have even a few % iowait, maybe that hides the fact that 1  
postmaster is fsyncing and perhaps 10 others are waiting on it to finish,  
which doesn't count as iowait...

	- recompile postgres and enable lwlock timing
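
	For the lock view, a starting point (this query is only a sketch; it  
uses the standard pg_locks and pg_stat_activity catalog views, and the  
column names vary by version -- procpid/current_query in 8.x,  
pid/query in 9.2 and later) is to list the sessions stuck waiting on an  
ungranted lock:

```sql
-- Sessions blocked waiting for a lock, and what they were running.
SELECT w.pid       AS waiting_pid,
       w.locktype,
       w.relation::regclass AS relation,
       a.query     AS waiting_query
FROM   pg_locks w
JOIN   pg_stat_activity a ON a.pid = w.pid
WHERE  NOT w.granted;
```

	If the same relation keeps showing up here, that's where your trigger,  
FK, or counter-update contention most likely lives.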

