Re: FW: Tx forecast improving hardware capabilities.

From: David Hodgkinson <daveh(at)hodgkinson(dot)org>
To: Sebastian Lallana <slallana(at)datatransfer(dot)com(dot)ar>
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: FW: Tx forecast improving hardware capabilities.
Date: 2005-08-18 22:08:29
Message-ID: 701BBF0B-1E25-4BC0-BF7C-0D53EAD2784D@hodgkinson.org
Lists: pgsql-performance


On 18 Aug 2005, at 16:01, Sebastian Lallana wrote:

> Does something like this exist? Does anybody have experience with
> this subject?

I've just been through this with a client who had both a badly tuned Pg
and a less-than-optimal application.

First, find a benchmark. Just something you can hold on to. For us, it
was the generation time of the site's home page. In this case, 7
seconds.
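
If you want a comparable number at the SQL level, psql will give you
one. Something like this (the table and query are made up, substitute
one of your own frequent queries):

    \timing
    EXPLAIN ANALYZE
    SELECT count(*) FROM orders WHERE created > now() - interval '7 days';
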
We looked hard at postgresql.conf, planned the memory usage, sort_mem
and all that. That was a boost. Then we looked at the queries that were
being thrown at the database. Over 200 to build one page! So, a caching
layer was built into the web server. Finally, some frequently occurring
combinations of queries were pushed down into stored procs.
We got the page gen time down to 1.5 seconds AND the server stayed
stable under extreme stress. So, a fair win.
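
To give a flavour of the conf side, something along these lines (the
numbers are illustrative only, not what we set; size them against the
RAM actually in the box and check the names against your version):

    # postgresql.conf (8.0-era settings)
    shared_buffers       = 10000     # 8kB pages, roughly 80MB of buffer cache
    work_mem             = 8192      # per-sort memory in kB (sort_mem before 8.0)
    effective_cache_size = 100000    # 8kB pages, what the OS is likely caching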
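
And the stored proc idea, with made-up table and column names. The
point is just that one function call replaces several round trips
from the page:

    CREATE FUNCTION homepage_counts(int) RETURNS record AS '
        SELECT (SELECT count(*) FROM messages WHERE recipient_id = $1),
               (SELECT count(*) FROM friends  WHERE user_id      = $1)
    ' LANGUAGE sql STABLE;

    -- called with a column definition list:
    SELECT * FROM homepage_counts(42) AS t(unread bigint, friends bigint);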

Thanks to cms for several clues.

So, without understanding your application and where it's taking the
time, you can't begin to estimate hardware usage.
