
Re: Giant Postgresql Database Cluster Folder

From: raghu ram <raghuchennuru(at)gmail(dot)com>
To: Kiruba suthan <kirubasuthan(at)gmail(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Giant Postgresql Database Cluster Folder
Date: 2012-04-10 06:14:08
Message-ID:
Lists: pgsql-admin
On Tue, Apr 10, 2012 at 11:21 AM, Kiruba suthan <kirubasuthan(at)gmail(dot)com> wrote:

> We are using Postgresql Database server to host DB for an analytical tool
> I am working on. Basically this DB has metrics about files. Row count of
> few tables are more than 18 million. But the content of the tables are
> basic data types like numbers, date & string. No binary data is stored. The
> size of the DB is around 1 GB when taken a full dump.
> We query the DB using complex views to get reports and most of the result
> set of these queries are quite huge (row count in hundreds of thousand or
> in million).
> The size of the DB Cluster Folder varies between 400GB to 600GB which is
> unreasonably huge for the actual data. It is eating up all disk-space in
> the server. When I create a fresh DB from the dump in a new server the size
> of the DB cluster folder is around 2.3 GB which is very reasonable to me.
> Experts,
> Could you help me how to clean up DB Cluster folder and reclaim disk space
> please? And please give me some insight into how data is organized in DB
> Cluster and what should I do to avoid this happening again?
> Size of directories under DB Cluster Folder is mentioned below
> [user(at)server DB_CLUSER_DATA]$ du -ksh *
> 407G    base
> 316K    global
> 49M     pg_clog
> 4.0K    pg_hba.conf
> 4.0K    pg_ident.conf
> 120K    pg_multixact
> 12K     pg_notify
> 32K     pg_stat_tmp
> 88K     pg_subtrans
> 4.0K    pg_tblspc
> 4.0K    pg_twophase
> 4.0K    PG_VERSION
> 129M    pg_xlog
> 20K     postgresql.conf
> 4.0K    postmaster.opts
> 4.0K
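Before cleaning anything up, it helps to confirm which relations are actually consuming the space under base/. A query along these lines (a sketch, not from the original thread) lists the largest relations in the current database, with indexes and TOAST data included:

```sql
-- Ten largest relations in the current database, total size
-- (heap + indexes + TOAST), run via psql in each database.
SELECT c.relname,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 10;
```

If a table's on-disk size is far larger than its restored size, that difference is bloat from dead rows that a plain VACUUM marked reusable but never returned to the operating system.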

Perform a VACUUM FULL on the entire cluster to reclaim the space. This operation takes an exclusive lock on each table while it is rewritten, so please run this activity during non-peak hours.
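One way to run this across all databases is with vacuumdb from the shell (a sketch; note that VACUUM FULL rewrites each table, so you need enough free disk to hold a compacted copy of the largest table while it runs):

```shell
# Rewrite every table in every database in the cluster,
# returning the freed space to the operating system.
vacuumdb --all --full
```

The equivalent inside psql, per database, is simply `VACUUM FULL;`.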

Once you have completed the above maintenance activity, schedule a manual VACUUM ANALYZE once every day, so that the space held by dead rows is reused when new records are inserted into the tables.
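A daily run can be scheduled from cron under the postgres OS user; the entry below is a sketch (the 02:00 window and log path are illustrative, not from the thread):

```shell
# crontab entry for the postgres user: vacuum and analyze all
# databases every day at 02:00, appending output to a log.
0 2 * * * vacuumdb --all --analyze >> /var/log/vacuumdb.log 2>&1
```

On PostgreSQL versions where autovacuum is available, checking that it is enabled and tuned for these large tables is usually preferable to relying on a cron job alone.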


pgsql-admin by date

Next: From: raghu ram, Date: 2012-04-10 06:19:39
Subject: Re: Why would queries fail with 'could not stat file' after CLUSTER?
Previous: From: Kiruba suthan, Date: 2012-04-10 05:51:54
Subject: Giant Postgresql Database Cluster Folder
