> I played with this tonight writing a small insert/update routine and
> frequent vacuums. Here is what I came up with (PostgreSQL 7.2.1):
This is some great info, thanks.
> In addition, max_fsm_pages has an impact on how many pages will be
> available to be marked as re-usable. If you have a huge table and
> changes are impacting more than the default 10,000 pages this is set to,
> you will want to bump this number up. My problem was I saw my UnUsed
> tuples always growing and not being re-used until I bumped this value
> up. As I watched the vacuum verbose output each run, I noticed more
> than 10k pages were in fact changing between vacuums.
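For anyone else following along, the knob in question lives in postgresql.conf and takes effect at postmaster start. The value below is purely illustrative, not a tuned recommendation:

```
# postgresql.conf -- illustrative value, not a recommendation
max_fsm_pages = 50000    # default is 10000; raise it if VACUUM VERBOSE
                         # shows more than that many pages changing
                         # between vacuum runs
```

After a restart, watching successive VACUUM VERBOSE runs should show whether UnUsed tuples are actually being reclaimed instead of growing.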
This has made me think about something we've been doing. We've got one
db that is used basically read-only; every day ~15000 records are added,
but very rarely are any deleted. What we've been doing is just letting it
sit until it gets close to too big for the filesystem, then lopping off
the earliest 6 months' worth of records. The question is: is it best
to keep doing this, then set max_fsm_pages to a huge number and VACUUM FULL?
Or should I change the scripts to remove the oldest day and vacuum before
adding the next day's records?
Or just rebuild the db every time. :)
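If I went the rolling-window route, the nightly job might look something like this sketch. The table and column names (`log_entries`, `created_at`) are made up for illustration:

```sql
-- Hypothetical nightly prune: drop everything older than ~6 months,
-- then reclaim the space before the day's ~15000 inserts arrive.
DELETE FROM log_entries
 WHERE created_at < now() - interval '180 days';

-- Plain VACUUM (not FULL) marks the freed pages reusable without
-- taking an exclusive lock, so the next inserts can fill them --
-- provided max_fsm_pages is large enough to track the pages touched.
VACUUM VERBOSE log_entries;
```

The appeal over the current approach is that the table stays roughly constant-sized and the free space map only has to track one day's worth of churn, instead of a giant VACUUM FULL every few months.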