Apologies if I missed something, but why are you using the rowid in Oracle
instead of your own primary key value (even if you are inserting tens of
millions of records a day)? A NUMBER(38) gives you a range of magnitude
1E-130 .. 1E126.
Using a non-rowid PK value would mean that you were not at the mercy of
the database moving your data record elsewhere if it cannot be accommodated
in the original block following an update.
If you use a NUMBER(38) PK value instead of a rowid, you are *sure* that
your record is accessible regardless of updates.
Of course if you wanted to avoid building/maintaining the number(38) PK
index then that is a different matter...
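For illustration, the sequence-backed surrogate key amounts to something
like this (Oracle syntax; the table, sequence, and column names here are
hypothetical):

```sql
-- Hypothetical schema: a sequence feeds a NUMBER(38) surrogate key,
-- so lookups never depend on a rowid the database may invalidate.
CREATE SEQUENCE rec_seq;

CREATE TABLE records (
    rec_id   NUMBER(38) PRIMARY KEY,   -- stable surrogate key
    info     VARCHAR2(2048),
    md5sum   CHAR(32),
    body     BLOB
);

INSERT INTO records (rec_id, info, md5sum, body)
VALUES (rec_seq.NEXTVAL, 'other fields', NULL, EMPTY_BLOB());

-- Locate the LOB by the stable key rather than by rowid:
SELECT body FROM records WHERE rec_id = :id FOR UPDATE;
```

The cost, as noted above, is building and maintaining the PK index that
the sequence values feed.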
Louis LeBlanc said:
> Hey folks. I'm new to the list, and not quite what you'd call a DB
> Guru, so please be patient with me. I'm afraid the lead up here is a
> bit verbose . . .
> I am working on an application that uses very high volume DB
> transactions - in the order of tens of millions per day . . .
> Anyway, the current database, which will remain nameless but begins
> with O and rhymes with debacle (sorta), has a problem with high volume
> work when it comes to binary large objects and rowid use (or OID use,
> as I understand Postgres does it).
> Here's the problem:
> When a record containing a number of basic types (int, varchar(2048),
> char(32), etc.) as well as a large object is updated with a LOB, we
> used to use rowids returned in the update of the other fields. The
> rowid was then the condition used when selecting the locator for the
> large object.
> Unfortunately, a returned rowid reflects only the record's current
> location; if the updated data won't fit there, the record is moved to
> another rowid, and the application gets no notification of the move.
> Later, when the object is inserted using the stale rowid, it gets put
> in the wrong place. The result is that sometimes a request for one
> object will yield the wrong one.
> Of course, there are ways to simply update the whole record, object
> and all, but there are bugs in the database version we are using that
> cause the temp tablespace to fill up and not be reused when this
> method is used, so we've had to change some of this stuff around to
> use the key field as the condition, and selecting the locator after
> the initial creation of the empty object. The point is that the
> direct updates of large objects have problems, as do the use of rowids
> in high volume conditions. Regardless, we've had to implement some
> verification checks to ensure the objects don't get swapped - this is
> done by prefixing the object with its 128-bit MD5 hash (32 hex
> characters), which is also stored in a separate field of the same
> record.
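A check along these lines could be expressed on the PostgreSQL side with
the built-in md5() function; this is only a sketch, assuming a modern
server with lo_get() available (9.4+), and the table and column names
are hypothetical:

```sql
-- Hypothetical integrity check: the digest stored alongside the record
-- must match a fresh digest of the large object's bytes; a mismatch
-- means the wrong object was fetched for this record.
SELECT rec_id
FROM   records
WHERE  rec_id = :id
AND    md5sum <> md5(lo_get(body_oid));   -- any row returned = mismatch
```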
> I've been looking at the Postgres docs, and I've found the lo_*()
> routines, which appear to use OIDs to write to the object. Is there
> any 'shifting' of records during insert because of space availability?
> And, probably more important, is there a way to do a direct update of
> a large object along with the other fields in a record - without
> having to call the lo_creat() and lo_write() functions? I've done
> some searching in the archives, but can't find anything that appears
> to answer the question.
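For what it's worth, the server-side lo_*() calls compose roughly like
this (a sketch only; the table and column names are hypothetical, and
131072 is the INV_WRITE flag):

```sql
-- A large object is addressed by a stable OID in pg_largeobject;
-- updating the row's other fields does not relocate the object.
BEGIN;

-- Create an empty large object and store its OID with the record.
INSERT INTO records (rec_id, body_oid)
VALUES (42, lo_creat(-1));

-- Open it for writing (INV_WRITE = 131072) and write the bytes,
-- all within the same transaction.
SELECT lowrite(lo_open(body_oid, 131072), 'payload bytes')
FROM   records
WHERE  rec_id = 42;

COMMIT;
```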
> If I've been unclear on any of this, I'm sorry, feel free to prod for
> more info.
> Thanks in advance
> Louis LeBlanc leblanc(at)keyslapper(dot)org
> Fully Funded Hobbyist, KeySlapper Extraordinaire :)
> http://www.keyslapper.org
> Scientists are people who build the Brooklyn Bridge and then buy it.
> -- William Buckley