I recently started experimenting with Large Objects
in PostgreSQL 6.5.3.
Apparently large objects in PostgreSQL are garbage
collected between creation and opening unless both the
creation and the opening are done within a single database
transaction. In fact, any large object OID is lost unless
the OID is stored in a table(class) before the database
transaction ends.
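To illustrate that restriction, here is a minimal psql-style sketch. The table name `images` and the OID value `12345` are hypothetical placeholders, not from the original post; `\lo_import` prints the OID of the newly created large object, and that value must be stored before COMMIT or the object becomes orphaned:

```sql
-- Hypothetical table to hold large object OIDs
CREATE TABLE images (name text, image oid);

BEGIN;
-- Import a file as a large object; psql prints the new OID
\lo_import '/tmp/photo.jpg'
-- Record the OID before the transaction ends
-- (replace 12345 with the OID printed by \lo_import above)
INSERT INTO images VALUES ('photo', 12345);
COMMIT;
```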
I've submitted patches to the libpq++ PgLargeObject
class and to the Java/JDBC examples so that they work
properly under this restriction. See pgsql-patches:
"libpq++ PgLargeObject patch"
"Java/JDBC LargeObject examples patch"
However, I've just noticed that these "orphaned" large
objects are still consuming space on my disk! I can't
access them by OID using lo_open() (or its variants),
but the underlying files still exist for each orphaned
OID (and I've done a "vacuum analyze" on the database).
Can someone explain what's happening, and possibly suggest
a workaround to get rid of these "orphaned" large objects
without deleting others that are still tied to a
table(class) in the database (and thus still accessible
via the large object functions/methods)?
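For what it's worth, one partial workaround, assuming you can still determine an orphaned object's OID, is to unlink it explicitly. The OID `12345` below is a placeholder; note that the SQL-level lo_unlink() function is exposed in later PostgreSQL releases, and on 6.5 the equivalent is the client-side libpq lo_unlink() call:

```sql
-- Sketch: remove a known orphaned large object by OID.
-- 12345 is a placeholder; substitute the orphan's actual OID.
SELECT lo_unlink(12345);
```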
pgsql-admin by date