From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | "Michael Goldner" <MGoldner(at)agmednet(dot)com> |
Cc: | pgsql-admin(at)postgresql(dot)org |
Subject: | Re: Is my vacuumdb stuck in a loop? |
Date: | 2008-03-02 16:15:59 |
Message-ID: | 20299.1204474559@sss.pgh.pa.us |
Lists: | pgsql-admin |
"Michael Goldner" <MGoldner(at)agmednet(dot)com> writes:
> Am I stuck in a loop, or is this happening because the size of the relation
> is so large that postgres is operating on smaller chunks?
It's removing as many dead rows at a time as it can handle. Arithmetic
suggests that you've got maintenance_work_mem set to 64MB, which would
be enough room to process 11184810 rows per index scanning cycle.
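The arithmetic alluded to above can be sketched as follows (assuming, as in PostgreSQL of that era, that VACUUM stores one 6-byte tuple identifier per dead row in maintenance_work_mem):

```python
# Sketch of the arithmetic: how many dead-row TIDs fit in
# maintenance_work_mem before VACUUM must pause and scan the indexes.
# Assumption: each dead row costs one 6-byte tuple identifier (TID).
maintenance_work_mem = 64 * 1024 * 1024  # 64MB, in bytes
tid_size = 6                             # bytes per dead-row TID
rows_per_cycle = maintenance_work_mem // tid_size
print(rows_per_cycle)  # 11184810
```

which matches the per-cycle row count reported in the log.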
The fact that there are so many dead large objects is what I'd be
worrying about. Does that square with your sense of what you've
removed, or does it suggest you've got a large object leak? Do you
use contrib/lo and/or contrib/vacuumlo to manage them?
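To check for leaked large objects without deleting anything, contrib/vacuumlo can be run in report-only mode; something along these lines (the database name "mydb" is a placeholder):

```shell
# -n: don't remove anything, just report orphaned large objects
# -v: verbose output
vacuumlo -n -v mydb
```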
The numbers also suggest that you might be removing all or nearly
all of the rows in pg_largeobject. If so, a CLUSTER on it might
be more effective than VACUUM as a one-shot cleanup method.
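As a one-shot cleanup under that assumption, the CLUSTER would look roughly like this (run as superuser; the index name is pg_largeobject's standard primary index):

```sql
-- Rewrites pg_largeobject, discarding all dead rows in one pass
-- instead of repeated VACUUM index-scan cycles.
CLUSTER pg_largeobject USING pg_largeobject_loid_pn_index;
```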
regards, tom lane
| From | Date | Subject |
---|---|---|---|
Next Message | Michael Goldner | 2008-03-02 16:31:58 | Re: Is my vacuumdb stuck in a loop? |
Previous Message | Michael Goldner | 2008-03-02 14:46:27 | Is my vacuumdb stuck in a loop? |