Re: autovacuum suggestions for 500,000,000+ row tables?

From: Jacques Caron <jc(at)directinfos(dot)com>
To: Alex Stapleton <alexs(at)advfn(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: autovacuum suggestions for 500,000,000+ row tables?
Date: 2005-06-20 14:59:29
Message-ID: 6.2.0.14.0.20050620165134.03b61140@pop.interactivemediafactory.net
Lists: pgsql-performance

Hi,

At 16:44 20/06/2005, Alex Stapleton wrote:
>We never delete
>anything (well not often, and not much) from the tables, so I am not
>so worried about the VACUUM status

DELETEs are not the only reason you might need to VACUUM. UPDATEs matter
just as much, if not more: under MVCC, every UPDATE leaves the old row
version behind as a dead tuple, exactly as a DELETE does. Tables that are
constantly updated (statistics, session data, queues...) really need to be
VACUUMed a lot.
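A rough back-of-the-envelope model (my own illustration, not a PostgreSQL formula) shows why the VACUUM interval matters so much for hot tables: dead tuples pile up linearly between VACUUMs, so a modest update rate can dwarf the live data.

```python
def bloat_factor(live_rows, updates_per_sec, vacuum_interval_sec):
    """Rough estimate of table bloat: (live + dead tuples) / live tuples.

    Assumes each UPDATE produces exactly one dead tuple and that VACUUM
    reclaims them all. Purely illustrative, not a PostgreSQL internal.
    """
    dead_rows = updates_per_sec * vacuum_interval_sec
    return (live_rows + dead_rows) / live_rows

# A 1M-row session table updated 100x/sec, vacuumed once a day, ends up
# carrying ~8.6M dead tuples: nearly 10x its live size on disk.
daily = bloat_factor(1_000_000, 100, 86_400)   # ~9.64
hourly = bloat_factor(1_000_000, 100, 3_600)   # ~1.36
```

The numbers (table size, update rate) are made up, but the shape of the result is why frequently updated tables want frequent, cheap VACUUMs rather than one huge one.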

>but I am wary of XID wraparound
>nuking us at some point if we don't sort vacuuming out so we VACUUM
>at least once every year ;)

That would give you a maximum average of about 31 transactions/sec (the
documented limit of one billion transactions between VACUUMs, spread over
the ~31.5 million seconds in a year)... Don't know if that's high or low
for you.
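The arithmetic behind that figure, assuming the rule of thumb from the PostgreSQL documentation of the time that every table must be vacuumed at least once per billion transactions:

```python
# Transaction budget if you only VACUUM once a year, under the
# "vacuum at least once every one billion transactions" rule of thumb
# from the PostgreSQL 7.x/8.0 documentation.
XID_BUDGET = 1_000_000_000          # transactions between required VACUUMs
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

max_avg_tps = XID_BUDGET / SECONDS_PER_YEAR
print(f"{max_avg_tps:.1f} transactions/sec")  # ~31.7
```

Exceed that sustained rate and a yearly VACUUM is no longer enough to stay clear of XID wraparound.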

> However not running ANALYZE for such huge
>periods of time is probably impacting the statistics accuracy
>somewhat, and I have seen some unusually slow queries at times.
>Anyway, does anyone think we might benefit from a more aggressive
>autovacuum configuration?

ANALYZE is not a very expensive operation, however VACUUM can definitely be
a big strain and take a looooong time on big tables, depending on your
setup. I've found that partitioning tables (at the application level) can
be quite helpful if you manage to keep each partition to a reasonable size
(under or close to available memory), especially if the partitioning scheme
is somehow time-related. YMMV.
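A hypothetical sketch of the kind of application-level, time-based scheme Jacques describes: the application routes each row to a monthly child table (the table names and columns here are my invention), so each partition stays small enough to VACUUM quickly, and old partitions stop changing entirely.

```python
from datetime import date

BASE_TABLE = "stats"  # hypothetical base table name


def partition_for(day: date) -> str:
    """Monthly partition the application should write this row to."""
    return f"{BASE_TABLE}_{day.year:04d}_{day.month:02d}"


def insert_sql(day: date) -> str:
    """INSERT statement targeting the right monthly partition."""
    return f"INSERT INTO {partition_for(day)} (ts, value) VALUES (%s, %s)"


# Rows from June 2005 land in stats_2005_06; a VACUUM of that one
# partition touches a month of data instead of the whole history.
print(partition_for(date(2005, 6, 20)))  # stats_2005_06
```

The payoff for VACUUM is that only the current partition accumulates dead tuples; historical partitions can be vacuumed once and then left alone.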

Jacques.

  From Date Subject
Next Message Alex Stapleton 2005-06-20 15:05:56 Re: autovacuum suggestions for 500,000,000+ row tables?
Previous Message Alex Stapleton 2005-06-20 14:44:08 autovacuum suggestions for 500,000,000+ row tables?