BLOBs and rules/triggers/functions

From: "Badger, Bruce" <bbadger(at)visualnetworks(dot)com>
To: "'pgsql-interfaces(at)postgresql(dot)org'" <pgsql-interfaces(at)postgresql(dot)org>
Subject: BLOBs and rules/triggers/functions
Date: 2000-08-15 17:56:44
Message-ID: 299244323E4CD411951500B0D04944FF4232F5@exmailny00.avesta.com
Lists: pgsql-interfaces

I am developing a PostgreSQL interface for VisualWorks Smalltalk. I have
all the basics going, and I'm trying to get BLOB support sorted out. There
are two components to the interface: the first (which I call the driver) is
an implementation of the frontend/backend protocol which maps as closely as
possible to the manual pages, so there are classes for all the different
kinds of messages, variables, etc. The second component is a mapping of
the VisualWorks EXDI (EXternal Database Interface) to PostgreSQL. The EXDI
part sits on top of the driver.

I can manipulate BLOBs from Smalltalk using the driver layer. This involves
calling the lo_* family of functions, and it all seems to work well.

Now, to the problem ... The definition of the EXDI layer demands that
callers be able to supply BLOB data as part of a regular insert or update.
The mapping layer I'm writing is told 'here is a ByteArray - convert it to a
form that's OK for SQL'. The EXDI works this way because it suits the
Oracles and Sybases of this world - they *do* allow the bytes to be embedded
in the SQL (or so my reading of the code suggests).

What I do when given a ByteArray is to grab the database connection (another
story in itself) and use the lo_* functions to create the BLOB and write the
ByteArray to PostgreSQL. Then I return the string form of the BLOB's oid to
be included in the SQL. And it works!! The database is updated as
expected. The problem is that I'm making new BLOBs for both inserts and
updates, but not getting rid of any old ones. When converting a ByteArray I
do not have access to schema information, so I don't know what table is
being updated, what the oid of the row is, or what the primary key is - so
I can't look up the old oid(s).
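Concretely, the SQL the mapping layer ends up sending looks something like
the following (the oid 100572 and the row values are purely illustrative;
the large object itself is created beforehand via the lo_* calls over the
driver's function-call interface):

```sql
-- The BLOB is created first through the lo_* fast-path calls; suppose
-- they returned oid 100572.  The mapping layer then splices that oid
-- into the regular statement as a literal:
INSERT INTO exditest30 (name, blobdata) VALUES ('Shemp', 100572);

-- ... or, for an update, which leaves the previously referenced
-- large object orphaned:
UPDATE exditest30 SET blobdata = 100572 WHERE name = 'Moe';
```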

To get around the problem of old BLOBs, I'm thinking that I can just get
people to set up a rule (or function or trigger) that will delete the old
blob as the new one is being written. It seems that this should be
possible, but I can't get it to work.
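For example, what I have in mind is something like this trigger sketch. It
is untested: it assumes the plpgsql language has been installed in the
database (CREATE LANGUAGE), and it follows the 7.x convention of trigger
functions returning opaque.

```sql
-- Untested sketch: unlink the old large object whenever an update
-- replaces the blobdata column with a different oid.
CREATE FUNCTION unlink_old_blob() RETURNS opaque AS '
BEGIN
    IF OLD.blobdata IS NOT NULL AND OLD.blobdata <> NEW.blobdata THEN
        PERFORM lo_unlink(OLD.blobdata);
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER exditest30_unlink BEFORE UPDATE ON exditest30
    FOR EACH ROW EXECUTE PROCEDURE unlink_old_blob();
```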

Below are the table and rule CREATE statements and the error I get when I
use them. The questions are: am I trying to do something insane? Am I
tackling it in a reasonable way? And what's wrong with what's below?

Many thanks ...
=========================================
[bbadger(at)catbert bbadger]$ pg_dump test1
\connect - bbadger
CREATE TABLE "exditest30" (
    "name" character varying(30),
    "blobdata" oid
);
COPY "exditest30" FROM stdin;
Curly 100524
Moe 100540
Larry 100556
\.
CREATE RULE exditest30_update AS ON UPDATE TO exditest30 DO SELECT
lo_unlink(old.blobdata) AS lo_unlink;
[bbadger(at)catbert bbadger]$
==========================================
test1=# test1=# update exdiTest30 set blobdata = 100524 where name = 'Moe';
ERROR: parser: parse error at or near "test1"
test1=# NOTICE: Caution: DROP TABLE cannot be rolled back, so don't abort
now
test1'# NOTICE: Caution: DROP INDEX cannot be rolled back, so don't abort
now
test1-# NOTICE: mdopen: couldn't open xinv100540: No such file or directory
test1'# NOTICE: RelationIdBuildRelation: smgropen(xinv100540): No such file
or directory
test1'# NOTICE: mdopen: couldn't open xinv100540: No such file or directory
test1-# NOTICE: Caution: DROP TABLE cannot be rolled back, so don't abort
now
test1'# ERROR: RelationClearRelation: relation 100540 deleted while still
in use
test1'# test1=# commit;
test1'# COMMIT
