Christian Niles <christian(at)unit12(dot)net> writes:
> However, since a versioning system will have a higher number of entries
> compared to a normal storage system, I'm curious if there's any chance
> for data corruption in the case that the DB runs out of OIDs. Ideally,
> the database would raise an exception, and leave the existing data
> untouched. From what I've read in the documentation, OIDs aren't
> guaranteed to be unique, and may cycle. In this case, would the first
> large object after the limit overwrite the first object?
No; instead you'd get a failure during lo_create:
/* Check for duplicate (shouldn't happen) */
elog(ERROR, "large object %u already exists", file_oid);
You could deal with this by retrying lo_create until it succeeds.
However, if you are expecting more than a few tens of millions of
objects, you probably don't want to go this route because the
probability of collision will be too high; you could spend a long time
iterating to find a free OID. Something involving a bigint identifier
would work better.
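
For illustration only, the retry from the JDBC side could look roughly like the sketch below. This is a minimal sketch against the pgjdbc LargeObjectManager API; the class name, the retry limit, and the savepoint handling are my own assumptions, not something the driver prescribes.

	import java.sql.Connection;
	import java.sql.SQLException;
	import java.sql.Savepoint;

	import org.postgresql.PGConnection;
	import org.postgresql.largeobject.LargeObjectManager;

	public class LoCreateRetry {
	    /**
	     * Create a new large object, retrying if the server reports that
	     * the OID it picked is already taken.  Assumes auto-commit is off
	     * (large object calls must run inside a transaction); a savepoint
	     * keeps the surrounding transaction usable after a failed attempt.
	     */
	    public static long createWithRetry(Connection conn, int maxAttempts)
	            throws SQLException {
	        LargeObjectManager lom =
	                conn.unwrap(PGConnection.class).getLargeObjectAPI();
	        SQLException last = null;
	        for (int attempt = 0; attempt < maxAttempts; attempt++) {
	            Savepoint sp = conn.setSavepoint();
	            try {
	                // Each call asks the server for a fresh OID, so a
	                // collision on one attempt does not doom the next.
	                long oid = lom.createLO(LargeObjectManager.READWRITE);
	                conn.releaseSavepoint(sp);
	                return oid;
	            } catch (SQLException e) {
	                conn.rollback(sp);  // clear the aborted state, try again
	                last = e;
	            }
	        }
	        throw last;
	    }
	}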
> Also, would
> the number of large objects available be limited by other database
> objects that use OIDs?
No. Although we use just a single OID sequence generator, each
different kind of system object has a separate unique index (or other
enforcement mechanism), so it doesn't really matter if, say, an OID in
use for a large object is also in use for a table.
regards, tom lane