large duplicated files

From: "Ryan D(dot) Enos" <renos(at)ucla(dot)edu>
To: pgsql-novice(at)postgresql(dot)org
Subject: large duplicated files
Date: 2007-08-17 06:33:40
Message-ID: 46C54144.3080009@ucla.edu
Lists: pgsql-novice

Hi,
I am very new to PostgreSQL and am not really a programmer of any type.
I use PostgreSQL to manage very large voter databases for political
science research. My problem is that my database is creating large
duplicate files, e.g. 17398.1, 17398.2, 17398.3, etc. Each is about
1 GB in size. I understand that each of these is probably one segment of
a file that PostgreSQL split because of a limit on file size, and that
they may be large indexes. However, I don't know where these files came
from or how to reclaim the disk space.
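To confirm which relation is actually taking the space, the built-in size functions (available since PostgreSQL 8.1) can be queried; this is a sketch, and 'voters' is only a placeholder table name:

```sql
-- Placeholder table name; pg_total_relation_size counts indexes and TOAST too.
SELECT pg_size_pretty(pg_total_relation_size('voters'));

-- Or list the largest relations in the current database:
SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size
FROM pg_class
ORDER BY pg_relation_size(oid) DESC
LIMIT 10;
```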
I have extensively searched the archives and found that I am not the
first to have this problem. I have followed the suggestions given to
previous posters, running VACUUM FULL and REINDEX, but nothing reclaims
the disk space. I have tried to identify the file by running:
select * from pg_class where relfilenode =""
but this returns 0 rows.
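(For reference: relfilenode is a number, not a string, so the lookup needs the numeric filenode taken from the on-disk file names, 17398 in this case. A sketch of the corrected query:

```sql
-- 17398 is the numeric prefix of the segment files on disk.
SELECT relname, relkind FROM pg_class WHERE relfilenode = 17398;
```

If this still returns no rows, the files may no longer belong to any live relation; VACUUM FULL assigns a relation a new relfilenode, so an old number can end up orphaned on disk.)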
How can I reclaim this space and prevent these files from being created
in the future?
Any help would be greatly appreciated.
Thanks.
Ryan
