
How to avoid vacuuming a huge logging table

From: Mark Stosberg <mark(at)summersault(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: How to avoid vacuuming a huge logging table
Date: 2007-02-21 18:02:16
Message-ID: eri1gd$29bq$
Lists: pgsql-performance
Our application has a table that is only logged to, and infrequently
used for reporting. There are generally no deletes or updates.

Recently, the sheer size (an estimated 36 million rows) caused a serious
problem because it prevented a "vacuum analyze" on the whole database
from finishing in a timely manner.

As I understand it, a table with this usage pattern shouldn't need to be
vacuumed anyway, since vacuum exists to reclaim the dead rows left
behind by updates and deletes.

I'm looking for general advice from people who have faced the same
issue. I'm looking at a number of alternatives:

1. Once a month, we could delete and archive old rows, for possible
re-import later if we need to report on them. It seems the archive would
need to be stored in a form suitable for re-importing, such as proper
insert statements or COPY data. (Maybe table partitioning offers a
solution here?)
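A rough sketch of what I mean, assuming a hypothetical table named
activity_log with a logged_at timestamp column (the COPY (SELECT ...)
form needs 8.2, and writing to a server-side file needs superuser):

```sql
-- Hypothetical monthly archive-and-purge. Table and column names are
-- placeholders; the archive file could be re-imported later with COPY FROM.
BEGIN;

COPY (SELECT * FROM activity_log
      WHERE logged_at < date_trunc('month', now()) - interval '1 month')
  TO '/archive/activity_log_old.copy';

DELETE FROM activity_log
  WHERE logged_at < date_trunc('month', now()) - interval '1 month';

COMMIT;
-- After a mass delete like this, the table would still need one vacuum
-- to reclaim the space, but routine vacuuming could stop afterwards.
```

Re-importing would then just be "COPY activity_log FROM '/archive/...'",
avoiding the need to generate individual insert statements.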

2. We could find a way to exclude the table from vacuuming, and let it
grow even larger. Putting the table in its own database would
accomplish that, but it would be nice to avoid the overhead of a second
database connection.

3. Take a really different approach: log in CSV format to text files
instead, and only import the date ranges we need "on demand" if a report
is requested on the data.
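The on-demand import step might look something like this (again with
hypothetical names, assuming the CSV columns match the table layout):

```sql
-- Hypothetical on-demand load of one day's CSV log into a scratch table.
CREATE TEMP TABLE log_import (LIKE activity_log);

COPY log_import FROM '/logs/activity-2007-02-21.csv' WITH CSV;

-- Run the report against log_import; the temp table disappears at the
-- end of the session, so nothing permanent needs vacuuming.
```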

Thanks for any tips.


