
Re: too many clog files

From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: alvherre(at)commandprompt(dot)com, duanlg(at)nec-as(dot)nec(dot)com(dot)cn, pgsql-performance(at)postgresql(dot)org, "Matt Smiley" <mss(at)rentrak(dot)com>
Subject: Re: too many clog files
Date: 2008-09-10 17:18:02
Message-ID: dcc563d10809101018u1f77199bpcc134121d7d040ad@mail.gmail.com
Lists: pgsql-performance
On Wed, Sep 10, 2008 at 8:58 AM, Kevin Grittner
<Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
> "Matt Smiley" <mss(at)rentrak(dot)com> wrote:
>> Alvaro Herrera wrote:
>>> Move the old clog files back where they were, and run VACUUM FREEZE in
>>> all your databases.  That should clean up all the old pg_clog files, if
>>> you're really that desperate.
>>
>> Has anyone actually seen a CLOG file get removed under 8.2 or 8.3?
>
> Some of my high-volume databases don't quite go back to 0000, but this
> does seem to be a problem.  I have confirmed that VACUUM FREEZE on all
> but template0 (which doesn't allow connections) does not clean them
> up.  No long running transactions are present.

I have a pretty high-volume server that's been online for one month,
and it had somewhere around 53 clog files, going back in order to
0000, even though it was recently vacuumdb -az'ed.  Running another
one now.  No long-running transactions, etc...
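For anyone wanting to script the VACUUM FREEZE pass Alvaro suggested, a minimal sketch (the database names below are placeholders for illustration; in practice you'd generate the list from pg_database, which naturally skips template0 since its datallowconn is false):

```shell
# Sketch: issue VACUUM FREEZE in every connectable database.
# Placeholder list; in practice generate it with:
#   psql -At -c "SELECT datname FROM pg_database WHERE datallowconn"
for db in postgres template1 mydb; do
  echo psql -d "$db" -c "VACUUM FREEZE"   # drop 'echo' to actually run it
done
```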
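As a back-of-the-envelope check on how many segments to expect: assuming a stock build (8 kB CLOG pages, 32 pages per SLRU segment, 2 status bits per transaction), each pg_clog file covers 1,048,576 xids, so the segment a given xid lives in is just integer division:

```shell
# 8192 bytes/page * 4 xids/byte = 32768 xids/page; 32 pages/segment
# => 1048576 xids per pg_clog segment file.
xid=123456789
printf 'xid %d is in pg_clog segment %04X\n' "$xid" $(( xid / 1048576 ))
# prints: xid 123456789 is in pg_clog segment 0075
```

So 53 segments corresponds to roughly 55 million transactions' worth of status history still on disk.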

