From: "KOOPMAN,JON (A-SantaClara,ex1)" <jon_koopman(at)agilent(dot)com>
To: "'pgsql-admin(at)postgresql(dot)org'" <pgsql-admin(at)postgresql(dot)org>
Subject: LargeObject storage doesn't go away
Date: 2000-03-31 22:41:58
Message-ID: 636A5397F77BD311B86B0090278CE58A9274BF@axcs02.cs.itc.hp.com
Lists: pgsql-admin
I recently started playing with the Large Object usage
in PostgreSQL 6.5.3.
Apparently Large Objects in PostgreSQL are garbage collected
between creation and opening unless the creation and
opening are done within the same database transaction. In fact,
any large object OID is lost unless the OID is stored in
a table (class) before that transaction ends.
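To make the restriction concrete, the pattern looks roughly like the
sketch below. The "images" table is made up for illustration, and the
server-side lo_creat() call is an assumption (in 6.5 the object may
instead have to be created through the client library's lo_creat()):

```sql
BEGIN;
-- Create the large object and store its OID in the same transaction;
-- if the transaction ends before the OID is saved somewhere, the
-- object becomes unreachable.
INSERT INTO images (name, data_oid) VALUES ('logo', lo_creat(-1));
-- ... open and write the object here, still inside the transaction ...
COMMIT;
```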
I've provided patches I made to the libpq++ PgLargeObject
class and to the Java/JDBC examples to make them work
properly with this restriction. See pgsql-patches:
"libpq++ PgLargeObject patch"
"Java/JDBC LargeObject examples patch"
However I've just noticed that these "orphaned" large
objects are still consuming space on my disk! I can't
access them by OID using lo_open() (or variants on this),
but the files:
$PGDATA/base/template1/xinv{OID}
$PGDATA/base/template1/xinx{OID}
still exist for each orphaned OID (and I've done a
"vacuum analyze" on the database).
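For what it's worth, candidate orphans can at least be spotted by
comparing the xinv{OID} filenames against the OIDs still referenced
from tables. The sketch below uses a scratch directory standing in for
$PGDATA/base/<db>, and all OIDs and filenames are made up; in practice
the referenced-OID list would be dumped from the database:

```shell
# Hypothetical layout: a scratch directory stands in for the database
# directory, with made-up large object data files.
datadir=$(mktemp -d)
touch "$datadir/xinv18001" "$datadir/xinv18057" "$datadir/xinv18123"

# OIDs still referenced from tables (in practice, selected from the
# columns that store large object OIDs).
printf '18001\n18123\n' > "$datadir/referenced_oids"

# Flag any xinv file whose OID does not appear in the referenced list.
for f in "$datadir"/xinv*; do
    oid=${f##*/xinv}
    grep -qx "$oid" "$datadir/referenced_oids" \
        || echo "orphan candidate: xinv$oid"
done
```

This only identifies candidates; it doesn't answer how to remove them
safely without breaking objects that are still in use.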
Can someone explain what's happening, and possibly suggest a
workaround to get rid of these "orphaned" Large Objects
without deleting others that are still tied into a
table (class) in the database (and thus still accessible
via the LargeObject functions/methods)?
Thanks,
Jon Koopman
jon_koopman(at)agilent(dot)com