Re: High Disk write and space taken by PostgreSQL

From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: J Ramesh Kumar <rameshj1977(at)gmail(dot)com>
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, David Barton <dave(at)oneit(dot)com(dot)au>, pgsql-performance(at)postgresql(dot)org
Subject: Re: High Disk write and space taken by PostgreSQL
Date: 2012-08-16 05:50:46
Message-ID: CAGTBQpZ318etmrGEtu8xDJ-RU9RH9PBSvjfGrTi_2WDmfqbDNQ@mail.gmail.com
Lists: pgsql-performance

On Thu, Aug 16, 2012 at 2:40 AM, J Ramesh Kumar <rameshj1977(at)gmail(dot)com> wrote:
>>>> Ahhh but updates are the basically delete / inserts in disguise, so if
>>>> there's enough, then yes, vacuum full would make a difference.
>
> The table which get update has very less data ie, only has 900 rows. Out of
> 10500 tables, only one table is getting update frequently. Is there any way
> to vacuum a specific table instead of whole database ?

Just let autovacuum figure it out. It's smart enough not to touch
insert-only tables, last I checked, and you can set I/O cost limits to
make sure it doesn't interfere with your workload.
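To answer the direct question: plain VACUUM does accept a table name, and autovacuum's I/O throttling can be tuned per table. A minimal sketch (the table name and cost settings here are illustrative, not recommendations):

```sql
-- Vacuum and re-analyze just the one frequently-updated table
VACUUM ANALYZE my_updated_table;

-- Throttle autovacuum I/O for that table only (values are illustrative)
ALTER TABLE my_updated_table
    SET (autovacuum_vacuum_cost_delay = 20,
         autovacuum_vacuum_cost_limit = 200);
```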

If you don't care about possible data corruption when the system
crashes, you can set fsync=off and get many of the performance
benefits. But there aren't many ways to reduce disk usage other than
dropping indices (you may have unused ones; do check their usage
statistics) and making sure autovacuum is running where it's needed.
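Checking for unused indices is straightforward from the statistics views. This query lists indices that have never been scanned since the stats were last reset, largest first:

```sql
-- Candidate indices for dropping: never used since stats reset
SELECT schemaname, relname, indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
```

Bear in mind idx_scan counts only since the last statistics reset, so make sure the counters cover a representative period before dropping anything.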

A backup/restore or a VACUUM FULL plus REINDEX will get rid of all
bloat. If your DB size goes down considerably after that, you had
bloat; if not, you didn't. You can even do that on a single (old)
table to check it out.
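A sketch of that single-table check (table name is hypothetical; note that VACUUM FULL takes an exclusive lock on the table while it runs):

```sql
-- Measure, rewrite, then measure again: the difference was bloat
SELECT pg_size_pretty(pg_total_relation_size('my_updated_table'));

VACUUM FULL my_updated_table;   -- rewrites the table, exclusive lock
REINDEX TABLE my_updated_table; -- rebuilds its indices

SELECT pg_size_pretty(pg_total_relation_size('my_updated_table'));
```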
