
Re: Wrong Stats and Poor Performance

From: John A Meinel <john(at)arbash-meinel(dot)com>
To: Pallav Kalva <pkalva(at)deg(dot)cc>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Wrong Stats and Poor Performance
Date: 2004-12-27 20:33:40
Message-ID: 41D071A4.8050504@arbash-meinel.com
Lists: pgsql-performance
Pallav Kalva wrote:

> John A Meinel wrote:
>
>> Pallav Kalva wrote:
>>
>>> Hi Everybody.
>>>
>>>      I have a table in my production database which gets updated
>>> regularly, and the stats on this table in pg_class are totally
>>> wrong. I used to run vacuumdb on the whole database once daily, and
>>> when I posted this problem of wrong stats in pg_class, most people
>>> on this list, as well as the postgres docs, suggested running
>>> "vacuum analyze" more frequently on this table.
>>>
>>> I set up a cronjob a couple of weeks ago to run vacuum analyze
>>> every 3 hours on this table, and my stats are still totally wrong.
>>> This is affecting the performance of the queries running on this
>>> table very badly. How can I fix this problem? Or is this standard
>>> postgres behaviour?
>>>
>>> Here are the stats from the problem table on my production database
>>>
>>> relpages |  reltuples
>>> ----------+-------------
>>>   168730 | 2.19598e+06
>>>
>>> If I rebuild the same table on the dev db and check the stats, they
>>> are totally different. I expected some difference from the
>>> production db stats, but not to this extent; as you can see below,
>>> the difference is huge.
>>>
>>> relpages | reltuples
>>> ----------+-----------
>>>    25230 |    341155
>>>
>>>
>>> Thanks!
>>> Pallav
>>>
>>
>> What version of the database? As I recall, there are versions which
>> suffer from index bloat if there is a large amount of turnover on the
>> table. I believe VACUUM FULL ANALYZE helps with this, as does
>> increasing max_fsm_pages (after a vacuum full verbose, the last
>> couple of lines can give you an indication of how big max_fsm_pages
>> might need to be).
>>
>> Vacuum full does some locking, which means you don't want to run it
>> all the time, but if you can do it on a weekend, or maybe in the
>> evenings, it might fix the problem.
>>
>> I don't know if you can recover without a vacuum full, but there may
>> also be a way to rebuild the index (REINDEX), or you could drop and
>> re-create the index.
>> John
>> =:->
>
>
> Hi John,
>
>    Thanks for the reply. My postgres version is 7.4.2. Since this is
> a production database and one of the critical tables in our system, I
> can't run vacuum full analyze on this table because of the locks. I
> recently rebuilt this table from scratch and recreated all the
> indexes, and after 2-3 weeks the same problem appeared again. My
> max_fsm_pages is set to the default value; do you think that might be
> the problem? I would like to change it, but that involves restarting
> the postgres database, which I can't do at the moment. What is index
> bloat? Do you think rebuilding the indexes again might help to some
> extent?
>
> Pallav
>

I'm going off of what I remember reading from the mailing lists, so
please search them to find more information. But basically, there are
bugs in older versions of postgres that don't clean up indexes properly.
So if you add and delete a lot of entries, my understanding is that the
index still contains entries for the deleted items, which means that if
you have a lot of turnover, your index keeps growing in size.
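
If index bloat is the problem, rebuilding the index is less drastic
than a vacuum full. A minimal sketch, assuming your table is called
"mytable" with a primary key index "mytable_pkey" (both names are just
placeholders for yours; note that REINDEX takes an exclusive lock on
the table while it runs, so schedule it accordingly):

    REINDEX TABLE mytable;        -- rebuilds every index on the table
    REINDEX INDEX mytable_pkey;   -- or rebuild just the one index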

From what I'm hearing, you do need to increase max_fsm_pages, but
without the vacuum full analyze verbose output, I don't have any feel
for what it needs to be. A search through the mailing lists for
"increase max_fsm_relations max_fsm_pages" (I forgot about the first
one earlier) should help.
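
In the meantime, you can check what the server is currently running
with from psql (just the standard SHOW command; the values in the
comments are the defaults, if I remember right):

    SHOW max_fsm_relations;  -- default is 1000
    SHOW max_fsm_pages;      -- default is 20000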

At the end of a "vacuum full analyze verbose" (vfav) it prints out
something like:

INFO:  free space map: 104 relations, 64 pages stored; 1664 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 20000 pages = 178 kB shared memory.

That can be used to figure out what you need to set max_fsm_relations
and max_fsm_pages to. As I understand it, you should run under normal
load for a while, run "vfav", and look at the "total pages needed"
figure. Move your max_fsm_pages setting closer to that number (you
shouldn't jump the whole way at once). Then run for a while again, and
repeat; a rough sketch of the loop follows below. I believe the idea is
that when you increase the number, a plain vacuum analyze can keep up
with the load, so the vacuum full has less to do, and the requirement
keeps shrinking.
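
The loop looks roughly like this (the new values here are made up for
illustration, and postgres has to be restarted before the FSM settings
take effect):

    VACUUM FULL ANALYZE VERBOSE;
    -- note "total pages needed" from the INFO line, then in postgresql.conf:
    --   max_fsm_relations = 1000    -- at least the relations reported
    --   max_fsm_pages = 40000       -- moved partway toward "pages needed"
    -- restart postgres, run under normal load for a while, and repeat.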

Obviously my example output is from a toy database; your numbers should be much higher.
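
One more way to see how far off the stats are at any given moment is
to compare pg_class against a real count (replace "mytable" with your
table name; note that count(*) scans the whole table):

    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relname = 'mytable';

    SELECT count(*) FROM mytable;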

John
=:->

