Re: New server to improve performance on our large and busy DB - advice?

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: <pgsql-performance(at)postgresql(dot)org>, "Carlo Stonebanks" <stonec(dot)register(at)sympatico(dot)ca>
Subject: Re: New server to improve performance on our large and busy DB - advice?
Date: 2010-01-20 20:36:39
Message-ID: 4B5714F7020000250002E896@gw.wicourts.gov
Lists: pgsql-performance

"Carlo Stonebanks" <stonec(dot)register(at)sympatico(dot)ca> wrote:

>> yeah, the values are at the end. Sounds like your vacuum
>> settings are too non-aggressive. Generally this is the vacuum
>> cost delay being too high.
>
> Of course, I have to ask: what's the down side?

If you make it too aggressive, it could impact throughput or
response time. Odds are that the bloat from having it not
aggressive enough is currently having a worse impact.
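
As a rough illustration only (the values below are guesses, not
tuned for your workload), the knobs involved live in postgresql.conf
and take effect on a reload, no restart needed:

  autovacuum = on
  autovacuum_vacuum_cost_delay = 10ms  # 8.3 default is 20ms; lower
                                       # means less throttling
  autovacuum_vacuum_cost_limit = 400   # default -1 falls back to
                                       # vacuum_cost_limit (200)
  autovacuum_naptime = 1min            # how often each database is
                                       # checked for work

Then "SELECT pg_reload_conf();" (or pg_ctl reload) picks them up.
Watch throughput and response time afterward and back off if needed.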

>> Once the fsm gets too blown out of the water, it's quicker
>> to dump and reload the whole DB than to try and fix it.
>
> My client reports this is what they actually do on a monthly
> basis.

They probably won't need to do that with proper configuration and
vacuum policies.

>>> NOTICE: number of page slots needed (4090224) exceeds
>>> max_fsm_pages (204800)
>>> HINT: Consider increasing the configuration parameter
>>> "max_fsm_pages" to a value over 4090224.
>
> Gee, only off by a factor of 20. What happens if I go for this
> number (once again, what's the down side)?

It costs six bytes of shared memory per entry.

http://www.postgresql.org/docs/8.3/interactive/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-FSM
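
For scale: at six bytes per page slot, covering the 4,090,224 slots
the NOTICE asks for works out to about 4,090,224 * 6 = ~24 MB of
shared memory, which is trivial next to shared_buffers on a box like
yours. A sketch of the relevant lines (illustrative round numbers
with some headroom; adjust to what VACUUM VERBOSE reports), keeping
in mind that both require a server restart on 8.3:

  max_fsm_pages = 4500000   # >= 4090224; ~6 bytes each, roughly 26 MB
  max_fsm_relations = 2000  # ~70 bytes each; must be low enough that
                            # max_fsm_pages >= 16 * max_fsm_relations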

-Kevin
