Sean Leach wrote:
> On Feb 24, 2008, at 4:27 PM, Scott Marlowe wrote:
> > Urg. Then I wonder how your indexes are bloating but your table is
> > not... you got autovac running? No weird lock issues? It's a side
> > issue right now since the table is showing as non-bloated (unless
> > you've got a long running transaction and that number is WAY off
> > from your vacuum)
> Autovac is running, but probably not tuned. I am looking at upping my
> max_fsm_pages setting as vacuum suggests, but I'm not sure which value
> to use (all the posts on the web refer to what looks like an old
> vacuum output format). Is this the line to look at?
> INFO: "u_counts": found 0 removable, 6214708 nonremovable row
> versions in 382344 pages
> DETAIL: 2085075 dead row versions cannot be removed yet.
> I.e. I need 382344 max_fsm_pages? No weird lock issues that we have seen.
I think the hint and warning are referring to this line:
> 281727 pages contain useful free space.
But you're likely to have other relations in your database that have useful free space too.
What this warning is saying is that at least some of the useful free space in that table will not be re-used for new rows or row versions, because the free space map is too small to hold references to all of the pages with usable space.
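As an illustration only (the exact numbers must be sized to your own cluster), the relevant settings on a pre-8.4 server, where the free space map lives in shared memory and changing these requires a restart, look something like this:

```sql
-- postgresql.conf fragment (PostgreSQL before 8.4; hypothetical values).
-- max_fsm_pages must cover free-space pages across ALL relations, so it
-- needs to be at least the total VACUUM VERBOSE reports for the whole
-- cluster (u_counts alone accounts for 382344 pages above), plus headroom.
max_fsm_pages = 500000        -- illustrative; round up from VACUUM's totals
max_fsm_relations = 1000      -- one slot per table/index with free space
```

Running `VACUUM VERBOSE` on the whole database prints a summary at the end showing how many pages and relations the FSM would need to track, which is the number to size against rather than any single table's figure.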
> So should I do a vacuum full and then hope this doesn't
> happen again?
> Or should I run a VACUUM FULL after each aggregation run?
If your usage pattern generates all of that unused space in one transaction, with no further inserts or updates to that table until the next time you run the same process, then my guess is that you probably should run a VACUUM FULL on that table after each aggregation run. In that case you wouldn't have to increase max_fsm_pages solely to keep track of the large amount of unused space in that table, since you'd be cleaning it up as soon as you generate it.
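A sketch of what that per-run cleanup might look like, assuming the table name from your VACUUM output:

```sql
-- Run after each aggregation batch. VACUUM FULL takes an exclusive lock
-- on the table, so schedule it for a window when u_counts is otherwise idle.
VACUUM FULL VERBOSE u_counts;

-- On these (pre-9.0) servers VACUUM FULL compacts the heap by moving
-- tuples, which tends to bloat the indexes; a REINDEX afterwards is
-- the usual companion step.
REINDEX TABLE u_counts;
```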
You earlier had 5.5 million row versions, 2 million of them dead but not yet removable, and you said (even earlier) that the table had 3.3 million rows in it.
You now say you've got 6.2 million row versions (with the same 2M dead). So it looks like you're creating new row versions at quite a pace, in which case increasing max_fsm_pages, and not worrying about doing a vacuum full _every_ time is probably a good idea.
Have you checked Scott Marlowe's note:
> > unless you've got a long running transaction
How come those 2 million dead rows are not removable yet? My guess (based on a quick search of the mailing lists) would be that they were generated from your aggregation run, and that a long running transaction started before your aggregation run committed (possibly even before it started), and that transaction is still alive.
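One way to check for such a transaction, assuming PostgreSQL 8.3 or later (the `xact_start` column does not exist in older releases, where you'd have to fall back on `query_start`):

```sql
-- Sessions whose current transaction has been open for over an hour.
-- Column names are the 8.x ones (procpid/current_query were renamed
-- pid/query in 9.2).
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start < now() - interval '1 hour'
ORDER BY xact_start;
```

Any row here that predates your aggregation run is a candidate for what is pinning those 2 million dead row versions.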
Alternatively, it may be a different 2 million dead row versions now than earlier, and may simply be a side effect of your particular usage, and nothing to worry about. (Though it is exactly the same number of rows, which strongly hints at being exactly the same rows.)