From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Peter Meszaros <pme(at)prolan(dot)hu>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: database size growing continously
Date: 2009-10-29 16:59:48
Message-ID: dcc563d10910290959h24e6eecy7f46b210fb251ae3@mail.gmail.com
Lists: pgsql-performance
On Thu, Oct 29, 2009 at 8:44 AM, Peter Meszaros <pme(at)prolan(dot)hu> wrote:
> Hi All,
>
> I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table
> with six columns and two indices, and about 6 million records are
> written into it every day, continuously committed every 10 seconds from
> 8 clients. The table stores approximately 120 million records, because a
> cron job daily deletes those older than 20 days. Autovacuum is
> on and every setting is the factory default except some unrelated ones
> (listen address, authorization). But my database is growing,
> characteristically ~600 MB/day, though sometimes much slower (e.g. 10 MB,
> or even 0!).
Sounds like you're blowing out your free space map. Things to try:
1: Delete your rows in smaller batches. For example, delete everything
over 20 days old every hour, so you don't delete them all at once once
a day.
2: Crank up max_fsm_pages large enough to hold all the dead tuples.
3: Lower the autovacuum cost delay.
4: Get faster hard drives so that vacuum can keep up without causing
your system to slow to a crawl while vacuum is running.
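To make points 1-3 concrete, here's a sketch. The table and column names
("queue", "created") are made up since the original schema isn't shown;
the fsm number is a rough guess you'd want to check against the "pages
needed" line that VACUUM VERBOSE prints:

```sql
-- Run hourly from cron instead of once a day, so each batch of dead
-- tuples is small enough for autovacuum to reclaim between runs.
-- (Hypothetical names: "queue" table, "created" timestamp column.)
DELETE FROM queue WHERE created < now() - interval '20 days';
```

And in postgresql.conf (8.3; these settings require a server restart or
reload respectively):

```
# Track enough dead pages that freed space gets reused instead of the
# table growing. Size this from VACUUM VERBOSE output; 120M rows will
# span millions of pages, so the 8.3 default of 204800 is far too small.
max_fsm_pages = 4000000          # guess -- verify with VACUUM VERBOSE

# Make autovacuum more aggressive so it keeps up with the delete churn.
autovacuum_vacuum_cost_delay = 10ms
```

Note that max_fsm_pages went away in 8.4, where the free space map became
self-managing, so this tuning is specific to 8.3 and earlier.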