
Re: Postgres performance slowly gets worse over a month

From: Joshua Daniel Franklin <joshuadfranklin(at)yahoo(dot)com>
To: "Michael G(dot) Martin" <michael(at)vpmonline(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Postgres performance slowly gets worse over a month
Date: 2002-07-26 14:45:37
Message-ID: 20020726144537.92735.qmail@web20002.mail.yahoo.com
Lists: pgsql-admin
Perhaps I wasn't clear. There really aren't any (daily) "no-longer needed
tuples", just added ones. I am under the impression that vacuum is just for
marking dead tuples' space as re-usable, so the only time it needs to be run
is after the 6-monthly tuple massacre, at which time I would also need to
set max_fsm_pages to a huge number.
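
Just to make that concrete, here is roughly what I'm picturing for the
after-the-massacre option. This is only a sketch: the table and column
names are made up, and the max_fsm_pages figure is a placeholder I'd still
have to tune from the vacuum verbose output:

    -- postgresql.conf (needs a postmaster restart to take effect):
    --   max_fsm_pages = 1000000

    -- drop the oldest 6 months of data, then reclaim the space;
    -- note that VACUUM FULL takes an exclusive lock on the table:
    DELETE FROM big_table WHERE added_date < '2002-01-01';
    VACUUM FULL ANALYZE big_table;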

--- "Michael G. Martin" <michael(at)vpmonline(dot)com> wrote:
> I believe the more frequently you vacuum, the faster it will go, so that 
> may be the driving factor in deciding.  Personally, each day, I'd add 
> the new tuples then remove the no-longer needed tuples, make sure 
> max_fsm_pages is large enough to handle all the pages removed in the 
> largest table, then run a vacuum analyze on the table or entire 
> database.  Run it each night and it will be nice and fast and you 
> shouldn't ever need to worry about locking the entire table with a 
> vacuum full or spend time to re-create the table and indices.
> 
> That's what I do, which I think is the most automated, maintenance-free 
> solution.  I currently run a lazy vacuum analyze each night after making 
> my large changes.  My tables don't change enough during the day to 
> require mid-day vacuums.
> 
> --Michael
> 
> Joshua Daniel Franklin wrote:
> >
> >>In addition, max_fsm_pages has an impact on how many pages will be
> >>available to be marked as re-usable.  If you have a huge table and
> >>changes are impacting more than the default 10,000 pages this is set to,
> >>you will want to bump this number up.  My problem was I saw my UnUsed
> >>tuples always growing and not being re-used until I bumped this value
> >>up.  As I watched the vacuum verbose output each run, I noticed more
> >>than 10k pages were in fact changing between vacuums.
> >>
> >This has made me think about something we've been doing. We've got one
> >db that is used basically read-only; every day ~15000 records are added,
> >but very rarely are any deleted. What we've been doing is just letting it
> >sit until it gets close to too big for the filesystem, then lopping off
> >the earliest 6 months' worth of records. The question is, is it best
> >to do this, then set max_fsm_pages to a huge number and vacuum full?
> >Or should I change it so scripts remove the oldest day and vacuum before
> >adding the next day's?
> >
> >Or just rebuild the db every time. :)
> >
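
For comparison, the nightly route Michael describes boils down to a single
statement run after the daily load (again, the table name is just a
placeholder), which could be fired from cron via psql or vacuumdb:

    -- lazy vacuum: no exclusive lock, just marks dead space for re-use
    -- and refreshes the planner statistics
    VACUUM VERBOSE ANALYZE big_table;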

