Re: Using the 8.2 autovacuum values with 8.1

From: Csaba Nagy <nagy(at)ecircle-ag(dot)com>
To: Mark Stosberg <mark(at)summersault(dot)com>
Cc: postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Using the 8.2 autovacuum values with 8.1
Date: 2007-02-23 09:13:31
Message-ID: 1172222011.3772.13.camel@coppola.muc.ecircle.de
Lists: pgsql-performance

On Thu, 2007-02-22 at 22:53, Mark Stosberg wrote:
> Thanks to everyone for the feedback about vacuuming. It's been very
> useful. The pointers to the pgstattuple and Pgfouine tools were also
> helpful.
>
> I'm now considering the following plan for trying Autovacuuming again
> with 8.1. I'd like any peer review you have to offer of the following:
>
> 1. First, I'll move the settings to match the defaults in 8.2. The ones
> I noticed in particular were:
>
> autovacuum_vacuum_threshold changes: 1000 -> 500
> autovacuum_analyze_threshold changes: 500 -> 250
> autovacuum_vacuum_scale_factor changes: .4 -> .2
> autovacuum_analyze_scale_factor changes: .2 -> .1
>
> 2. Try the vacuum cost delay feature, starting with a 20ms value:
>
> autovacuum_vacuum_cost_delay = 20
>
> 3. Immediately add a row to pg_autovacuum for a huge logging table that
> would be too slow to vacuum usually. We'll still vacuum it once a week
> for good measure by cron.
>
> 4. For good measure, I think I'll still keep the nightly cron entry that
> does a complete vacuum analyze (except for that large table...).
>
> Seem like a reasonable plan?
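For reference, steps 1 and 2 together would look something like this in
postgresql.conf (these are the 8.2 defaults; note that
autovacuum_vacuum_cost_delay is in milliseconds):

    autovacuum = on
    autovacuum_vacuum_threshold = 500
    autovacuum_analyze_threshold = 250
    autovacuum_vacuum_scale_factor = 0.2
    autovacuum_analyze_scale_factor = 0.1
    autovacuum_vacuum_cost_delay = 20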

You likely don't need the nightly full vacuum run... we also run a
nightly vacuum here besides autovacuum, but not a full one: only for
tables which are big enough that we don't want autovacuum to touch them
during peak business hours, but which change enough that we want them
vacuumed frequently. I find them by checking the stats, for example:

SELECT c.relname,
       c.reltuples::bigint AS rowcnt,
       pg_stat_get_tuples_inserted(c.oid) AS inserted,
       pg_stat_get_tuples_updated(c.oid) AS updated,
       pg_stat_get_tuples_deleted(c.oid) AS deleted
FROM pg_class c
WHERE c.relkind = 'r'::"char"
GROUP BY c.oid, c.relname, c.reltuples
HAVING pg_stat_get_tuples_updated(c.oid) +
       pg_stat_get_tuples_deleted(c.oid) > 1000
ORDER BY pg_stat_get_tuples_updated(c.oid) +
         pg_stat_get_tuples_deleted(c.oid) DESC;

The top tables in this list, for which (deleted + updated) / rowcnt is
relatively small but still significant, need your attention for the
nightly vacuum... the rest are handled just fine by autovacuum.
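That ratio can also be computed directly in the query; a sketch (the
NULLIF guards against tables where reltuples is 0, and the stats
collector must be enabled for these functions to return useful numbers):

SELECT c.relname,
       c.reltuples::bigint AS rowcnt,
       (pg_stat_get_tuples_updated(c.oid) +
        pg_stat_get_tuples_deleted(c.oid))::float
           / NULLIF(c.reltuples, 0) AS churn_ratio
FROM pg_class c
WHERE c.relkind = 'r'::"char"
  AND pg_stat_get_tuples_updated(c.oid) +
      pg_stat_get_tuples_deleted(c.oid) > 1000
ORDER BY churn_ratio DESC;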

On the other end of the scale, if you have tables for which the
deletion/update rate is much higher than the row count, that's likely a
hot-spot table which probably needs extra vacuuming during the day.
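In 8.1 the per-table overrides go in the pg_autovacuum system catalog;
a sketch for step 3 of your plan and for a hot-spot table (the table
names big_log and hot_table are hypothetical; negative values mean
"use the default"):

-- Exclude the huge logging table from autovacuum entirely;
-- it still gets the weekly cron vacuum.
INSERT INTO pg_autovacuum
       (vacrelid, enabled,
        vac_base_thresh, vac_scale_factor,
        anl_base_thresh, anl_scale_factor,
        vac_cost_delay, vac_cost_limit)
VALUES ('public.big_log'::regclass, false, -1, -1, -1, -1, -1, -1);

-- Vacuum a hot-spot table much more aggressively during the day.
INSERT INTO pg_autovacuum
       (vacrelid, enabled,
        vac_base_thresh, vac_scale_factor,
        anl_base_thresh, anl_scale_factor,
        vac_cost_delay, vac_cost_limit)
VALUES ('public.hot_table'::regclass, true, 200, 0.05, 100, 0.02, 10, -1);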

Cheers,
Csaba.
