Re: Large pgstat.stat file causes I/O storm

From: Cristian Gafton <gafton(at)rpath(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Large pgstat.stat file causes I/O storm
Date: 2008-01-29 21:08:25
Message-ID: Pine.LNX.4.64.0801291557510.19796@alienpad.rpath.com
Lists: pgsql-hackers

On Tue, 29 Jan 2008, Tom Lane wrote:

> (Pokes around in the code...) I think the problem here is that the only
> active mechanism for flushing dead stats-table entries is
> pgstat_vacuum_tabstat(), which is invoked by a VACUUM command or an
> autovacuum. Once-a-day VACUUM isn't gonna cut it for you under those
> circumstances. What you might do is just issue a VACUUM on some
> otherwise-uninteresting small table, once an hour or however often you
> need to keep the stats file bloat to a reasonable level.
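Tom's workaround above could be scheduled from cron along these lines (a sketch only; the database name, table name, and hourly schedule are invented for illustration):

```shell
# Hypothetical cron entry: VACUUM a small, otherwise-uninteresting dummy
# table every hour so pgstat_vacuum_tabstat() runs and prunes dead
# stats-table entries. Create "pgstat_kick" once beforehand, e.g.:
#   psql -d mydb -c 'CREATE TABLE pgstat_kick (dummy int);'
0 * * * *  psql -d mydb -c 'VACUUM pgstat_kick;'
```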

I just ran a vacuumdb -a on the box - the pgstat file is still >90MB in
size. If vacuum is supposed to clean up the cruft from pgstat, then I
don't know if we're looking at the right cruft - I kind of expected the
pgstat file to go down in size and the I/O storm to subside, but that is
not happening after vacuum.

I will try to instrument the application to record the oids of the temp
tables it creates and investigate from that angle, but in the meantime is
there any way to reset the stats collector altogether? Could this be a
corrupt stat file that gets read and written right back on every loop
without any sort of validation?
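For anyone else hitting this: two possible ways to reset the collector, hedged as a sketch since paths and behavior can vary by version (the example assumes a default 8.3-style data directory layout with the stats file under global/):

```shell
# 1) From SQL, as superuser, reset collected statistics for the current
#    database:
psql -d mydb -c 'SELECT pg_stat_reset();'

# 2) Or, with the server stopped, remove the stats file so the collector
#    starts from scratch on the next startup:
pg_ctl -D "$PGDATA" stop
rm "$PGDATA"/global/pgstat.stat
pg_ctl -D "$PGDATA" start
```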

Thanks,

Cristian
--
Cristian Gafton
rPath, Inc.
