As Gregory wrote, let Apache do the job.
Apache queues a request if all running workers are busy.
1. Split off static content.
We have an Apache as a frontend which serves all static content and
forwards (reverse-proxy) requests for dynamic content to the "backends".
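As an illustration of step 1, here is a minimal frontend sketch in Apache 2.0 configuration terms. The document root, hostname, and port are hypothetical, and mod_proxy must be loaded:

```apache
# Frontend Apache: serve static files directly from disk,
# hand everything dynamic to a backend Apache via mod_proxy.
DocumentRoot /var/www/static

ProxyPass        /dynamic/ http://backend.internal:8000/dynamic/
ProxyPassReverse /dynamic/ http://backend.internal:8000/dynamic/
```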
2. Split different types of dynamic content.
We have one Apache for all interactive requests, where the user expects
quick responses. We have another Apache for non-interactive content such
as downloads and uploads, whose requests do hardly any CPU work at all
(and so don't fit under 1.).
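The split from step 2 can be sketched in the frontend's configuration like this; the URL prefixes and ports are made up for illustration:

```apache
# Interactive requests -> "interactive" backend Apache
ProxyPass        /app/   http://127.0.0.1:8001/app/
ProxyPassReverse /app/   http://127.0.0.1:8001/app/

# Downloads and uploads -> "non-interactive" backend Apache
ProxyPass        /files/ http://127.0.0.1:8002/files/
ProxyPassReverse /files/ http://127.0.0.1:8002/files/
```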
3. Limit each Apache to achieve a good workload.
It is too late if the operating system has to schedule the simultaneous
workload because you have more processes in the ready state than free CPUs.
Set MaxClients of the Apache for interactive requests so that your
server(s) don't get overloaded.
You can set MaxClients simply to limit the number of parallel
downloads/uploads.
Set MaxClients of the frontend higher. A good setting is one where the
interactive requests are queued at the backend Apache without reaching
the limit of its request queue.
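With the prefork MPM, step 3 boils down to a different MaxClients value per Apache instance. The numbers here are the ones from our example further down, not universal recommendations:

```apache
# frontend httpd.conf: many cheap workers for static files and queueing
<IfModule prefork.c>
    MaxClients 1024
</IfModule>

# interactive backend httpd.conf: bounded by available CPU
<IfModule prefork.c>
    MaxClients 35
</IfModule>

# non-interactive backend httpd.conf: bounds parallel downloads/uploads
<IfModule prefork.c>
    MaxClients 65
</IfModule>
```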
4. Set max_connections so that you never reach this limit:
maximum number of connections from the interactive backend + maximum
number of connections from the non-interactive backend + a reserve for
the database itself.
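In postgresql.conf terms the rule of thumb from step 4 looks like the following (numbers taken from our example below, assuming one connection per backend child):

```
# 35 (interactive backend) + 65 (non-interactive backend) + 20 reserve
max_connections = 120
```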
5. Check all limits so that you never hit a memory limit and your box
starts to swap.
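A rough back-of-the-envelope check for step 5. All per-process sizes below are assumptions for illustration; replace them with values measured on your own box (e.g. from top or ps):

```python
# Rough memory budget: the sum of all resident process sizes must stay
# below physical RAM, otherwise the box starts to swap.
# All MB figures below are hypothetical placeholders.
apache_children = 35 + 65      # interactive + non-interactive MaxClients
apache_rss_mb = 25             # assumed RSS per backend Apache child
pg_backends = 120              # max_connections
pg_rss_mb = 15                 # assumed per-PostgreSQL-backend memory
other_mb = 512                 # OS, frontend Apache, shared buffers, ...

needed_mb = apache_children * apache_rss_mb + pg_backends * pg_rss_mb + other_mb
print(needed_mb)  # compare this against the installed RAM
```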
6. Monitor your application well.
- Count the number of open connections to each Apache.
- Check the load of the server.
- Check context switches on the PostgreSQL box.
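Counting open connections, as in the first bullet, can be done with netstat. The sample lines below stand in for real `netstat -tn` output; in production you would pipe the real output into the same filter, once per Apache instance/port:

```shell
#!/bin/sh
# Count ESTABLISHED connections to a given local port.
count_established() {
    awk -v p=":$1\$" '$4 ~ p && $6 == "ESTABLISHED"' | wc -l
}

# Hypothetical sample of `netstat -tn` output:
sample='tcp 0 0 10.0.0.1:80 10.0.0.9:40001 ESTABLISHED
tcp 0 0 10.0.0.1:80 10.0.0.9:40002 ESTABLISHED
tcp 0 0 10.0.0.1:8080 10.0.0.9:40003 ESTABLISHED'

printf '%s\n' "$sample" | count_established 80   # prints 2 for this sample
```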
Note: I count one Apache process group as "one Apache".
Here is an example from our web application:
Two frontends - each MaxClients = 1024.
Interactive backend - MaxClients = 35.
Non-interactive backend - MaxClients = 65.
max_connections = 120 (assuming each backend child process holds one
connection, plus a reserve).
With these settings, even under load we normally have no more queries
running on the PostgreSQL server than there are cores available.
Please note that this example should only give you a feeling for the scale.
It took us a long time to find these values for our environment
(application and hardware).
BTW: This can also be set up on a single box. We have customers where
the different Apaches run on the same server.
There are a number of papers on the web which describe such setups.
Check out <http://perl.apache.org/docs/1.0/guide/performance.html> for
more details.
Gregory Stark wrote:
> "Honza Novak" <kacerr(at)developers(dot)zlutazimnice(dot)cz> writes:
>> Hi all,
>> i'm looking for correct or at least good enough solution for use of multiple
>> apaches with single postgres database. (apaches are 2.0.x, and postgres is
>> At this moment i'm involved in management of a website where we have large user
>> load on our web servers. Apaches are set up to be able to answer 300 requests
>> at the same time and at the moment we have 4 apaches.
> Do you have 300 processors? Are your requests particularly i/o-bound? Why
> would running 300 processes simultaneously be faster than running a smaller
> number sequentially? It doesn't sound like your systems are capable of
> handling such a large number of requests simultaneously.
> The traditional answer is to separate static content such as images which are
> more i/o-bound onto a separate apache configuration which has a larger number
> of connections, limit the number of connections for the cpu-bound dynamic
> content server, and have a 1-1 ratio between apache dynamic content
> connections and postgres backends. The alternative is to use connection
> pooling. Often a combination of the two is best.
Sven Geisler <sgeisler(at)aeccom(dot)com> Tel +49.30.921017.81 Fax .50
Senior Developer, AEC/communications GmbH & Co. KG Berlin, Germany