Re: Postgres performance slowly gets worse over a month

From: "Michael G(dot) Martin" <michael(at)vpmonline(dot)com>
To: Joshua Daniel Franklin <joshuadfranklin(at)yahoo(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Postgres performance slowly gets worse over a month
Date: 2002-07-26 13:50:28
Message-ID: 3D4153A4.3080604@vpmonline.com
Lists: pgsql-admin

I believe the more frequently you vacuum, the faster each run goes, so
that may be the deciding factor. Personally, each day I'd add the new
tuples, then remove the no-longer-needed tuples, make sure
max_fsm_pages is large enough to cover all the pages freed in the
largest table, and then run a vacuum analyze on the table or the
entire database. Run it each night and it will stay nice and fast, and
you shouldn't ever need to worry about locking the entire table with a
vacuum full or spending time re-creating the table and indexes.
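
For instance, a nightly job kicked off from cron might look something
like the sketch below. The table name, column name, and retention
window are all made up for illustration; substitute your own:

    -- nightly_maint.sql, e.g. run as: psql -f nightly_maint.sql mydb
    -- ("big_table" and its "created" column are hypothetical names)
    DELETE FROM big_table
        WHERE created < now() - interval '6 months';
    -- ...load the day's new tuples here (COPY, INSERT, etc.)...
    -- Lazy vacuum: records freed pages in the free-space map and
    -- updates planner stats, without VACUUM FULL's exclusive lock.
    VACUUM ANALYZE big_table;

One caveat: VACUUM can't run inside a transaction block, so if you
wrap the delete and the load in BEGIN/COMMIT, keep the vacuum outside
of it.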

That's what I do, and I think it's the most automated,
maintenance-free solution. I currently run a lazy vacuum analyze each
night after making my large changes. My tables don't change enough
during the day to require mid-day vacuums.
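
For reference, the max_fsm_pages knob lives in postgresql.conf and (on
7.2 at least) takes a postmaster restart to change. The value below is
only a placeholder; size it from the page counts VACUUM VERBOSE
reports for your largest table:

    # postgresql.conf -- raise this if VACUUM VERBOSE shows more pages
    # with reclaimable space than the default of 10000
    max_fsm_pages = 100000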

--Michael

Joshua Daniel Franklin wrote:

>>I played with this tonight, writing a small insert/update routine and
>>frequent vacuums. Here is what I came up with (PostgreSQL 7.2.1):
>
>This is some great info, thanks.
>
>>In addition, max_fsm_pages has an impact on how many pages will be
>>available to be marked as re-usable. If you have a huge table and
>>changes are touching more than the default of 10,000 pages this is
>>set to, you will want to bump this number up. My problem was that my
>>UnUsed tuples kept growing and never being re-used until I bumped
>>this value up. As I watched the vacuum verbose output each run, I
>>noticed more than 10k pages were in fact changing between vacuums.
>
>This has made me think about something we've been doing. We've got one
>db that is used basically read-only; every day ~15,000 records are
>added, but very rarely are any deleted. What we've been doing is just
>letting it sit until it gets close to too big for the filesystem, then
>lopping off the earliest 6 months' worth of records. The question is:
>is it best to keep doing that, set max_fsm_pages to a huge number, and
>vacuum full? Or should I change the scripts to remove the oldest day
>and vacuum before adding the next day's?
>
>Or just rebuild the db every time. :)
>
