On Thu, Dec 16, 2010 at 14:33, selvi88 <selvi(dot)dct(at)gmail(dot)com> wrote:
> I have a requirement for running more than 15000 queries per second.
> Can you please tell me which Postgres parameters need to be changed
> to achieve this.
You have not told us anything about what sort of queries they are or
what you are trying to do. PostgreSQL is not the solution to all
database problems. If all you have is a dual-core machine, then other
software may make better use of the available hardware.
First of all, if they're mostly read-only queries, you should use a
caching layer (like memcache) in front of PostgreSQL. And you can use
replication to spread the load across multiple machines (but you will
get some latency until the updates fully propagate to slaves).
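As a minimal sketch of the cache-aside pattern described above: check the cache first and only hit PostgreSQL on a miss. A plain dict stands in for a memcache client here, and fetch_from_db() is a hypothetical placeholder for a real query (e.g. via psycopg2); in production the cache would also have a TTL so entries expire.

```python
cache = {}  # stand-in for a memcache client (real memcache adds TTL/eviction)

def fetch_from_db(user_id):
    # Placeholder for a real query, e.g.:
    #   SELECT id, name FROM users WHERE id = %s
    return {"id": user_id, "name": "user%d" % user_id}

def get_user(user_id):
    key = "user:%d" % user_id
    row = cache.get(key)
    if row is None:                # cache miss: query PostgreSQL once
        row = fetch_from_db(user_id)
        cache[key] = row           # subsequent reads skip the database
    return row
```

With a high cache hit rate, most of those 15000 queries per second never reach PostgreSQL at all.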
If they're write queries, memory databases (like Redis), or disk
databases specifically optimized for writes (like Cassandra), might be
a better fit.
Alternatively, if you can tolerate some latency, use message queuing
middleware like RabbitMQ to queue up a larger batch and send updates
to PostgreSQL in bulk.
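The batching idea can be sketched as follows: buffer individual writes and flush them to PostgreSQL as one multi-row statement. A plain list stands in for the message queue, and the batch size and table name are illustrative; a real consumer would use a parameterized executemany() or COPY rather than string formatting.

```python
pending = []     # stand-in for a message queue (e.g. RabbitMQ)
BATCH_SIZE = 3   # illustrative; real batches would be much larger

def flush():
    # One multi-row INSERT instead of BATCH_SIZE separate round trips.
    # Real code must use parameterized queries, not %r formatting.
    sql = ("INSERT INTO events (name) VALUES "
           + ", ".join("(%r)" % e for e in pending))
    pending.clear()
    return sql

def record_event(event):
    pending.append(event)
    if len(pending) >= BATCH_SIZE:
        return flush()
    return None   # still buffering
```

The trade-off is exactly the latency mentioned above: an event is not visible in PostgreSQL until its batch is flushed.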
As for optimizing PostgreSQL itself, if you have a high connection
churn then you will need connection pooling middleware in front --
such as pgbouncer or pgpool. But avoiding reconnections is a better
idea. Also, use prepared queries to avoid parsing overhead for every
query.
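As a sketch of the pooling option, a minimal pgbouncer configuration might look like this (hostnames, paths, and pool sizes are illustrative):

```
[databases]
; clients connect to pgbouncer on 6432 instead of PostgreSQL directly
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```

Note that transaction-level pooling does not preserve session state such as server-side prepared statements across transactions; if you rely on prepared queries, use pool_mode = session instead.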
Obviously all of these choices involve tradeoffs and caveats, in terms
of safety, consistency, latency and application complexity.