Re: Why will vacuum not end?

From: Manfred Koizar <mkoi-pg(at)aon(dot)at>
To: "Shea,Dan [CIS]" <Dan(dot)Shea(at)ec(dot)gc(dot)ca>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Why will vacuum not end?
Date: 2004-04-25 20:46:54
Message-ID: 2v7o80hmttci6k5m3ht6ov760r5jbckmcc@email.aon.at
Lists: pgsql-performance

On Sun, 25 Apr 2004 09:05:11 -0400, "Shea,Dan [CIS]" <Dan(dot)Shea(at)ec(dot)gc(dot)ca>
wrote:
>It is set at max_fsm_pages = 1500000 .

This might be too low.  Your index has about 5 M pages; you are going to
delete half of its entries, and what you delete is a contiguous range of
values.  So up to 2.5 M index pages might be freed (minus inner nodes
and pages that are not completely emptied).  And there will be lots of
free heap pages, too ...
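
To put rough numbers on that (the 3 M below is only a guess at the
order of magnitude; the real figure is whatever VACUUM VERBOSE ends up
reporting as free pages for the table and its indexes):

    # postgresql.conf -- FSM sizing sketch
    # ~2.5 M freeable index pages plus free heap pages won't fit
    # into 1.5 M slots, so something on the order of 3 M or more:
    max_fsm_pages = 3000000
    # changing this needs a postmaster restart, since the free
    # space map lives in shared memory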

I wrote:
>If you are lucky VACUUM frees half the index pages. And if we assume
>that the most time spent scanning an index goes into random page
>accesses, future VACUUMs will take "only" 30000 seconds per index scan.

After a closer look at the code and after having slept on it, I'm not
so sure any more that the number of tuple ids to be removed has only a
minor influence on the time spent in a bulk delete run.  After the
current VACUUM has finished, would you be so kind as to run another
VACUUM VERBOSE with only a few dead tuples and post the results here?
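
Something like this would do; "yourbigtable" below is just a
placeholder for whatever table the current VACUUM is chewing on:

    -- after deleting (or updating) only a handful of rows:
    VACUUM VERBOSE yourbigtable;
    -- the per-index lines of the output (pages scanned, tuples
    -- removed, elapsed time) are the interesting part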

Servus
Manfred
