
Re: Enhancement request

From: Steve Crawford <scrawford(at)pinpointresearch(dot)com>
To: "Campbell, Lance" <lance(at)uiuc(dot)edu>
Cc: postgres support <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Enhancement request
Date: 2007-11-30 20:42:22
Message-ID: 475075AE.1080108@pinpointresearch.com
Lists: pgsql-admin
Campbell, Lance wrote:
> Could you please add to your to-do list a schema parameter for vacuum?
>
> Example:
>
> VACUUM SCHEMA xyz;
>
> PostgreSQL would get a list of all of the tables found in the schema.
> It would then loop through, vacuuming each table in the schema.
>
> I found today that I did not have max_fsm_pages set high enough to
> vacuum the entire database. I vacuum daily to avoid problems. After
> realizing that I cannot vacuum the entire database now, I had to do it
> at the table level. I had to go through all 13 of my schemas and list
> each table I found in an SQL vacuum script. This was a real waste of
> time. I will eventually restart PostgreSQL with an increased value for
> max_fsm_pages so this will not be an issue.
>
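
The table-by-table list described above can also be generated from the
system catalogs instead of typed by hand. A minimal sketch, assuming the
schema is named xyz and that the generated statements are saved to a file
and run back through psql (the file and database names are placeholders):

    -- e.g.  psql -At -d mydb -f make_vacuum.sql > vacuum_xyz.sql
    --       psql -d mydb -f vacuum_xyz.sql
    SELECT 'VACUUM ANALYZE '
           || quote_ident(schemaname) || '.' || quote_ident(tablename) || ';'
      FROM pg_tables
     WHERE schemaname = 'xyz';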
If I understand how PG works, I don't believe this is a problem. Just 
run VACUUM VERBOSE ANALYZE to determine the required value, set 
max_fsm_pages to that value, restart, and vacuum. I believe the issue 
with a too-small max_fsm_pages is that as vacuum locates reusable space, 
it simply runs out of room to record that information, so PG bloats the 
table instead of making use of the free space vacuum has found.
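
A minimal sketch of that tuning sequence, with placeholder numbers (the
closing INFO lines of a database-wide VACUUM VERBOSE report how many
free-space-map page slots are actually needed):

    VACUUM VERBOSE ANALYZE;    -- note the FSM summary at the end of the output
    -- in postgresql.conf, raise the limit to at least the reported value,
    -- with some headroom:
    --   max_fsm_pages = 500000    -- placeholder; use the number VACUUM reported
    -- restart the server (max_fsm_pages is only read at startup), then:
    VACUUM ANALYZE;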

Setting max_fsm_pages to a proper value and running vacuum will address 
_future_ bloat - and you may even eventually fill the bloated space back 
in with new data. But the only certain way to remove the bloat that has 
already occurred due to an insufficient max_fsm_pages is to vacuum full 
(or cluster, as appropriate - cluster can be many times faster than 
vacuum full).
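
A sketch of those two options for reclaiming bloat that has already
accumulated (the table and index names are assumptions, and both commands
take an exclusive lock on the table while they run):

    VACUUM FULL VERBOSE xyz.big_table;        -- compacts the existing table
    CLUSTER big_table_pkey ON xyz.big_table;  -- rewrites the table in index order
    -- (CLUSTER xyz.big_table USING big_table_pkey is the newer spelling in 8.3+)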

Having said that, I would also find this feature occasionally useful, 
though I think something akin to pg_dump's -n and -t options, with both 
allowing wildcards, would be more useful still.
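
Purely as an illustration of that suggestion (hypothetical syntax, not
implemented in any release), such options might look something like:

    -- hypothetical, pg_dump-style pattern selection for vacuum:
    --   VACUUM ANALYZE SCHEMA xyz;            -- every table in one schema
    --   VACUUM ANALYZE TABLE 'xyz.audit_*';   -- wildcard table selection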

Cheers,
Steve
