> So the answer is you've got something that's gone hog-wild on creating
> large objects and not deleting them; or maybe the application *is*
> deleting them but pg_largeobject isn't getting vacuumed.
> regards, tom lane
Hi all, thanks for the advice. I ran the script to find large files: the
largest is 3 GB, followed by one at 1 GB, and then another 18 files
totalling about 3 GB between them. So that's about 7 GB in total on a
100 GB partition that has 99 GB used. All of this is in the
data/base/16450 directory, in those large 1 GB files. If I look in the
Postgres logs I can see a vacuum happening every 20 minutes, in that it
says 'autovacuum: processing database "db name"', but nothing else. How
do I know if the vacuum is actually doing anything?
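A couple of catalog queries can answer both questions: which relation those 1 GB segment files under data/base/16450 belong to, and whether autovacuum has actually processed pg_largeobject. This is a sketch, not from the original thread; the 123456 filename is a placeholder for one of the numeric filenames in that directory, last_autovacuum needs PostgreSQL 8.2 or later, and n_dead_tup needs 8.3 or later.

```sql
-- Which relation owns a given on-disk file? Replace 123456 with one of
-- the numeric filenames seen under data/base/16450/ (drop any .1, .2
-- segment suffix first).
SELECT relname, relfilenode
FROM pg_class
WHERE relfilenode = 123456;

-- Total on-disk size of the large-object catalog, index included.
SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));

-- Has (auto)vacuum ever touched it, and how many dead rows remain?
SELECT relname, last_vacuum, last_autovacuum, n_dead_tup
FROM pg_stat_all_tables
WHERE relname = 'pg_largeobject';
```

If last_autovacuum is NULL while n_dead_tup keeps growing, the "processing database" log line is not translating into an actual vacuum of that catalog.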
What is pg_largeobject, and what can I check with it? (Sorry, as I said,
I'm a real novice.)
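For context (my addition, not part of the original exchange): pg_largeobject is the system catalog that stores the contents of every large object created via lo_create/lo_import, one row per small chunk, keyed by the object's OID (loid). Deleting an application row does not delete the large object it references, so orphans can accumulate there. Two quick checks, assuming sufficient privileges to read the catalog:

```sql
-- How many distinct large objects exist, and how many chunk rows
-- they occupy in total.
SELECT count(DISTINCT loid) AS large_objects,
       count(*)            AS chunk_rows
FROM pg_largeobject;
```

If orphaned large objects turn out to be the culprit, the contrib utility vacuumlo can unlink objects no longer referenced by any OID column, after which a VACUUM of pg_largeobject (or VACUUM FULL, which takes an exclusive lock) reclaims the space.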
Really appreciate your help guys.