From: Jim Nasby <jim(at)nasby(dot)net>
To: Vlad Arkhipov <arhipov(at)dc(dot)baikal(dot)ru>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Sergey Konoplev <gray(dot)ru(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: System catalog vacuum issues
Date: 2013-08-14 18:27:23
Message-ID: 520BCC0B.3080307@nasby.net
Lists: pgsql-hackers

On 8/14/13 12:31 AM, Vlad Arkhipov wrote:
> I used to use VACUUM FULL periodically to resolve the issue, but the problem arises again in 2-3 months.
> Here are the statistics (from pgstattuple). I ran VACUUM FULL on 2013-08-07.

How much non-temporary DDL do you do? It's possible that you end up with a tuple near the end of the table that belongs to a non-temporary object. One of those would stay valid for quite some time, and if you're unlucky you'll then end up with another long-lived row farther down the table, and so on.
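
One rough way to check (just an untested sketch; the ctid::text::point cast is only a trick to pull the block number out of a tuple's ctid) is to compare the highest block that still holds a live tuple against the table's size in blocks:

-- highest block number that still contains a live tuple
SELECT max((ctid::text::point)[0])::bigint AS last_block_with_live_tuple
FROM pg_catalog.pg_attribute;

-- total table size in blocks, as of the last vacuum/analyze
SELECT relpages
FROM pg_catalog.pg_class
WHERE oid = 'pg_catalog.pg_attribute'::regclass;

If those two numbers stay close together, a long-lived row near the end of the file is what keeps vacuum from truncating it.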

Depending on how frequently you're creating temp objects, autovac might not be able to keep up. Assuming that a manual vacuum doesn't take too long, it might be a good idea to cron a manual vacuum (NOT FULL) of that table once a minute.
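
Something along these lines would do it (a sketch only -- the database name comes from your psql prompt, and the exact schedule is up to you):

-- illustrative crontab entry, runs once a minute:
-- * * * * *  psql -d dcdb -qc 'VACUUM pg_catalog.pg_attribute'
-- which just executes:
VACUUM pg_catalog.pg_attribute;

A plain VACUUM doesn't block ordinary reads and writes of the table, so running it that often shouldn't hurt; it just keeps the dead rows left behind by temp objects from piling up between autovacuum runs.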

> dcdb=# select date, relpages, reltuples, table_len, tuple_count, tuple_percent, dead_tuple_count, dead_tuple_len, free_space, free_percent, autovacuum_count from public.table_statistics where relname = 'pg_attribute' order by date;
>     date    | relpages | reltuples | table_len | tuple_count | tuple_percent | dead_tuple_count | dead_tuple_len | free_space | free_percent | autovacuum_count
> ------------+----------+-----------+-----------+-------------+---------------+------------------+----------------+------------+--------------+------------------
>  2013-08-08 |    39029 |    109096 | 319725568 |       37950 |          1.66 |            52540 |        7355600 |  296440048 |        92.72 |             6359
>  2013-08-09 |    12382 |     95848 | 101433344 |       38232 |          5.28 |            57443 |        8042020 |   83862864 |        82.68 |             6711
>  2013-08-10 |    11365 |    105073 |  93102080 |       37789 |          5.68 |            65599 |        9183860 |   74483104 |           80 |             7002
>  2013-08-12 |     9447 |     95289 |  77389824 |       37811 |          6.84 |            57154 |        8001560 |   60479736 |        78.15 |             7161
>  2013-08-13 |    47841 |     82877 | 391913472 |       38536 |          1.38 |            30461 |        4264540 |  369093756 |        94.18 |             7347
>  2013-08-14 |    70265 |    104926 | 575610880 |       38838 |          0.94 |            34649 |        4850860 |  546449480 |        94.93 |             7398
> (6 rows)
>
> Autovacuum is running on this table; however, it keeps growing.
>
> On 08/06/2013 09:35 PM, Tom Lane wrote:
>> Vlad Arkhipov <arhipov(at)dc(dot)baikal(dot)ru> writes:
>>> On 08/06/2013 04:26 PM, Sergey Konoplev wrote:
>>>> What pgstattuple shows on this table?
>>> dcdb=# select * from pgstattuple('pg_catalog.pg_attribute');
>>>  table_len  | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent
>>> ------------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------
>>>  6363938816 |       48786 |   6830040 |          0.11 |          1459439 |      204321460 |               3.21 | 5939017376 |        93.32
>>> (1 row)
>> So the problem isn't so much that you have lots of dead tuples, it's that
>> the file is full of free space. I suspect the key issue is that
>> autovacuum is unable to truncate the file because of too many concurrent
>> accesses. There was a fix in 9.2.3 that was meant to ameliorate that
>> problem, but maybe that's not getting the job done for you. Or maybe the
>> bloat we're looking at is left over from when you were running earlier
>> 9.2.x releases; in which case a one-time VACUUM FULL should fix it.
>>
>> regards, tom lane

--
Jim C. Nasby, Data Architect       jim(at)nasby(dot)net
512.569.9461 (cell)                http://jim.nasby.net
