Re: best practice to avoid table bloat?

From: "Anibal David Acosta" <aa(at)devshock(dot)com>
To: "'Kevin Grittner'" <Kevin(dot)Grittner(at)wicourts(dot)gov>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: best practice to avoid table bloat?
Date: 2012-08-16 21:10:31
Message-ID: 00c201cd7bf3$93bc4130$bb34c390$@devshock.com
Lists: pgsql-performance

Thanks Kevin.
Postgres version is 9.1.4 (latest).

Every day the table receives about 7 million new rows.
The table holds the data for 60 days, so the total should be around
420 million rows.
Every night a delete process runs and removes rows older than 60 days.
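
For reference, the nightly job is essentially just a big DELETE; the table
and column names below are simplified placeholders, not the real script:

    -- "events" and "created_at" are placeholder names
    DELETE FROM events
    WHERE created_at < now() - interval '60 days';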

So the space used by Postgres should not grow drastically, because roughly
the same number of rows is deleted every day as arrives, yet my disk runs
out of space every 4 months.
I then have to copy the tables off the server, drop the local table, and
create it again; after that process I have space for another 4 months or so.

Maybe the autovacuum config is wrong, but it is really complicated to
understand which values avoid a performance penalty while still keeping the
table in good shape.
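
For example, as far as I understand, the thresholds can be set per table
with something like this (the numbers are only a guess on my part, not
values I have tested):

    ALTER TABLE events SET (                    -- "events" is a placeholder name
        autovacuum_vacuum_scale_factor = 0.01,  -- vacuum after ~1% dead rows
        autovacuum_vacuum_cost_delay   = 10     -- ms delay, throttles vacuum I/O
    );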

I think the autovacuum configuration should have some kind of "auto-config"
mode that recalculates every day what the best settings are for the current
server conditions.

Thanks!

-----Original Message-----
From: Kevin Grittner [mailto:Kevin(dot)Grittner(at)wicourts(dot)gov]
Sent: Thursday, August 16, 2012 04:52 p.m.
To: Anibal David Acosta; pgsql-performance(at)postgresql(dot)org
Subject: Re: [PERFORM] best practice to avoid table bloat?

"Anibal David Acosta" <aa(at)devshock(dot)com> wrote:

> if I have a table where about 8 million rows (out of maybe 9 million)
> are deleted each night, is it recommended to do a VACUUM ANALYZE after
> the delete completes, or can I leave this job to autovacuum?

Deleting a high percentage of the rows should cause autovacuum to deal with
the table the next time it wakes up, so an explicit VACUUM ANALYZE shouldn't
be needed.
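
You can verify that autovacuum is getting to the table by checking
pg_stat_user_tables after the nightly delete, along these lines (substitute
your table name):

    SELECT relname, n_dead_tup, last_autovacuum, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'your_table';  -- placeholder name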

> For some reason, with the same amount of data arriving every day,
> Postgres consumes a little more space.

How are you measuring the data and how are you measuring the space?
And what version of PostgreSQL is this?
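
For the on-disk side, something like this shows the total size of the table
including its indexes and TOAST data (table name is just an example):

    SELECT pg_size_pretty(pg_total_relation_size('your_table'));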

-Kevin
