From: Shane Ambler <pgsql(at)Sheeky(dot)Biz>
To: Michiel Holtkamp <michiel(dot)holtkamp(at)soundintel(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: How to cope with low disk space
Date: 2008-02-14 18:24:33
Message-ID: 47B48761.6080506@Sheeky.Biz
Lists: pgsql-general
Is PostgreSQL the only thing using that disk space/partition?
Have you considered running a cron job that parses df output and triggers
a delete when disk usage reaches a set threshold? That would also account
for any unexpected non-PostgreSQL disk usage.
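As a rough sketch of what that cron job could look like (the mount point,
threshold, database, table, and retention period below are all placeholders,
not anything from your setup):

```shell
#!/bin/sh
# Sketch: parse `df -P` output and purge old records above a threshold.
# MOUNT, THRESHOLD, and the commented-out DELETE are assumptions.

MOUNT=/            # partition holding the PostgreSQL data directory
THRESHOLD=80       # act when usage reaches this percentage

# df -P (POSIX format) keeps each filesystem on one line;
# field 5 is the capacity column, e.g. "42%".
pct=$(df -P "$MOUNT" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')

if [ "$pct" -ge "$THRESHOLD" ]; then
    echo "usage at ${pct}%, purging oldest records"
    # placeholder purge; adjust database, table, and retention:
    # psql -d yourdb -c "DELETE FROM recordings
    #     WHERE recorded_at < now() - interval '30 days';"
fi
```

Dropped into /etc/cron.hourly (or a crontab entry), that checks the whole
partition rather than just table sizes, so non-PostgreSQL usage counts too.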
You would also want to factor in the size of the old stored data when
deciding how many records to delete.
> To give you an idea of the figures we are talking about: Say we have
> a 250 GB disk. Normally we would use about 4-8 GB of database.
Given that you normally have 4-8GB of data and only run into trouble when a
fault/error generates an excess of 200GB, I would also consider stopping the
recording under those conditions. If it takes 200GB of data to trigger
automatic purging, then purging *all* of the old records will only buy you a
short window of extra space, unless you also start purging the beginning of
the current erroneous recording.
I am thinking that a cron job that emails/pages/SMSes you when disk usage
hits 50% would be a better solution: it simply gives you a heads up to find
and fix the fault causing the excess usage.
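A minimal alerting variant (again just a sketch; the mount point and limit
are assumptions): since cron mails any output a job produces to MAILTO or
the job's owner, printing a warning only when usage crosses the limit is
itself the email alert.

```shell
#!/bin/sh
# Sketch: emit output only above the limit; cron turns that into mail.
# "/" stands in for the partition holding the data directory.

MOUNT=/
LIMIT=50

pct=$(df -P "$MOUNT" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
if [ "$pct" -ge "$LIMIT" ]; then
    echo "WARNING: $MOUNT is ${pct}% full (alert limit ${LIMIT}%)"
fi
```

A crontab entry like `0 * * * * /usr/local/bin/disk-alert.sh` would check
hourly and stay silent (no mail) while usage is under the limit.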
--
Shane Ambler
pgSQL (at) Sheeky (dot) Biz
Get Sheeky @ http://Sheeky.Biz