Re: max_fsm_pages Sanity Check

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "HT" <htlevine(at)ebates(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: max_fsm_pages Sanity Check
Date: 2002-12-29 17:18:57
Message-ID: 10105.1041182337@sss.pgh.pa.us
Lists: pgsql-admin

"HT" <htlevine(at)ebates(dot)com> writes:
> We have a quite large production Postgres 7.2 DB which is out of control
> in terms of disk consumption. We made it through the holiday shopping
> season, but it isn't over yet. We have taken the DB down once for a
> vacuum analyze, but only vacuumed 2 large tables, which took FIVE HOURS
> WITH NO RESULTS.

1. You don't need to take down the DB to do vacuuming.
2. What do you mean by "WITH NO RESULTS"?

> Posts to the newsgroup advised that I crank up the max_fsm_pages. Right
> now it is at roughly 65,000.

> select relname, relpages from pg_class where relkind in ('r', 't', 'i')
> users | 408711
> merchant_sessions | 236333
> batch_load_awaiting | 173785
> orders | 92241

If you have not been vacuuming regularly then these relpages figures
cannot be trusted too much, but it looks to me like you might need
max_fsm_pages nearer to 1 million than 64k. If it's not large enough
to cover all (or at least nearly all) pages with free space, then you'll
have space-leakage problems. What is the tuple update/deletion rate in
these tables, anyway?
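
[A rough sizing sketch, not part of the original exchange: one way to get a
ceiling for max_fsm_pages is to total relpages across tables, TOAST tables,
and indexes, the same pg_class query the poster ran, but summed.]

-- Upper bound for max_fsm_pages: total pages in ordinary tables ('r'),
-- TOAST tables ('t'), and indexes ('i'). relpages is only as fresh as
-- the last VACUUM/ANALYZE, so treat the result as an estimate.
SELECT sum(relpages) AS total_pages
FROM pg_class
WHERE relkind IN ('r', 't', 'i');

[In practice max_fsm_pages only needs to cover pages that actually carry
useful free space, so a regularly vacuumed database can get by with a
fraction of this total.]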

Also, you should probably think about updating to 7.3.1 sometime soon.
There's a performance problem in the 7.2.* FSM code that shows up when
a single table has more than ~10000 pages with useful amounts of free
space --- VACUUM takes an unreasonable amount of time to record the free
space.
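
[A hypothetical configuration sketch, not from the original thread: the FSM
settings live in postgresql.conf, and in these releases a change to them
takes effect only at server restart. The values below are placeholders,
sized from the relpages figures quoted earlier, not recommendations.]

# postgresql.conf -- illustrative values only; size from your own totals
max_fsm_pages = 1000000      # pages with free space the FSM can track
max_fsm_relations = 1000     # relations the FSM can track

[Each tracked page costs a small, fixed amount of shared memory, so
overshooting somewhat is cheap compared to the bloat from undershooting.]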

regards, tom lane
