Re: performance config help

From: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
To: Bob Dusek <redusek(at)gmail(dot)com>
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: performance config help
Date: 2010-01-14 05:45:05
Message-ID: 4B4EAF61.1030501@postnewspapers.com.au
Lists: pgsql-performance

Bob Dusek wrote:

> So, pgBouncer is pretty good. It doesn't appear to be as good as
> limiting TCON and using pconnect, but since we can't limit TCON in a
> production environment, we may not have a choice.

It may be worth looking into pgpool as well. If you have a very
cheap-to-connect-to local pool, you can use non-persistent connections
(for quick release) while the local pool takes care of maintaining and
sharing out the expensive-to-establish real connections to Pg itself.
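The "cheap local pool, expensive real connections" split can be sketched as a PgBouncer config (PgBouncer was mentioned earlier in the thread); the database name and pool sizes below are illustrative assumptions, not tuning advice:

```ini
; Illustrative pgbouncer.ini fragment -- appdb and the sizes are assumptions.
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction   ; hand the server connection back at transaction end
max_client_conn = 1000    ; many cheap, short-lived app connections
default_pool_size = 20    ; few expensive real connections to Pg
```

With transaction pooling the app can open and close connections freely, while Pg only ever sees `default_pool_size` backends per database/user pair.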

If you find you still can't get the throughput you need, an alternative
to adding more hardware capacity and/or more server tuning is to look
into using memcached to satisfy many of the read requests for your app
server. Use some of that 16GB of RAM on the app server to populate a
memcached instance with less-frequently-changing data, and prefer to
fetch things from memcached rather than from Pg. With a bit of work on
data access indirection and on invalidating things in memcached when
they're changed in Pg, you can get truly insane boosts to performance
... and get more real work done in Pg by getting rid of repetitive
queries of relatively constant data.
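The cache-aside pattern described above can be sketched as follows. This is a minimal illustration, not anyone's production code: a plain dict stands in for memcached (a real client's get/set/delete map onto the same operations), and the `db_lookup`/`db_write` callables and the `product:` key scheme are assumptions.

```python
# Cache-aside sketch: prefer the cache, fall back to Pg on a miss,
# and invalidate the cached entry whenever the row changes in Pg.
cache = {}  # stands in for a memcached instance

def fetch_product(product_id, db_lookup):
    """Return product data, consulting the cache before the database."""
    key = f"product:{product_id}"
    if key in cache:
        return cache[key]          # hit: no query reaches Pg
    value = db_lookup(product_id)  # miss: query Pg once
    cache[key] = value             # populate for subsequent readers
    return value

def update_product(product_id, new_value, db_write):
    """Write to the database, then drop the now-stale cache entry."""
    db_write(product_id, new_value)
    cache.pop(f"product:{product_id}", None)
```

The invalidate-on-write step is what keeps the "relatively constant data" honest: readers may briefly re-query Pg after an update, but they never serve stale values indefinitely.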

--
Craig Ringer
