
Re: Parallel queries for a web-application |performance testing

From: "Pierre C" <lists(at)peufeu(dot)com>
To: "Matthew Wakeling" <matthew(at)flymine(dot)org>, "Balkrishna Sharma" <b_ki(at)hotmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Parallel queries for a web-application |performance testing
Date: 2010-06-17 11:12:22
Message-ID: op.vefyqwfweorkce@apollo13
Lists: pgsql-performance
> When you set up a server that has high throughput requirements, the last  
> thing you want to do is use it in a manner that cripples its throughput.  
> Don't try and have 1000 parallel Postgres backends - it will process  
> those queries slower than the optimal setup. You should aim to have  
> approximately ((2 * cpu core count) + effective spindle count) number of  
> backends, as that is the point at which throughput is the greatest. You  
> can use pgbouncer to achieve this.

The same is true of a web server: 1000 active PHP interpreters (each
eating several megabytes or more) are not ideal for performance!

For PHP, I like lighttpd with php-fastcgi: the webserver proxies requests
to a small pool of PHP processes, which are only busy while generating the
page. Once the page is generated, the webserver handles all (slow) IO to
the client.
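For reference, a minimal sketch of such a lighttpd + php-fastcgi setup. The binary path, socket path, and pool size of 4 are illustrative assumptions, not values from this thread:

```
# lighttpd.conf fragment -- spawns a small fixed pool of PHP FastCGI
# workers; lighttpd queues and proxies requests to them
server.modules += ( "mod_fastcgi" )

fastcgi.server = ( ".php" => ((
    "bin-path"        => "/usr/bin/php-cgi",   # assumed location of php-cgi
    "socket"          => "/tmp/php.socket",
    "max-procs"       => 4,                    # size of the PHP pool
    "bin-environment" => ( "PHP_FCGI_CHILDREN" => "4" )
)))
```

With `max-procs` kept small, each PHP worker stays busy only for the duration of page generation, while lighttpd itself absorbs the slow client-side IO.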

An interesting side effect is that the number of database connections is
limited to the number of PHP processes in the pool, so you don't even need
a Postgres connection pooler (unless you have lots of PHP boxes)...
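If you do end up with many PHP boxes, a pgbouncer configuration along the lines of Matthew's formula might look like the sketch below. The host, port, database name, and the hypothetical 8 cores plus 4 effective spindles are all illustrative assumptions, not values from this thread:

```ini
; pgbouncer.ini sketch -- clients connect to port 6432 instead of
; Postgres directly; pgbouncer multiplexes them onto a small backend pool
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction
; (2 * 8 cores) + 4 spindles = 20 backends, per the formula quoted above
default_pool_size = 20
; many web frontends can still connect; only 20 hit Postgres at once
max_client_conn = 1000
```

The point of `max_client_conn` being much larger than `default_pool_size` is exactly the throughput argument quoted above: thousands of clients can wait cheaply in pgbouncer while Postgres runs only the optimal number of backends.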


