Re: Max OID

From: Steve T <steve(at)retsol(dot)co(dot)uk>
To: PostGreSQL <pgsql-novice(at)postgresql(dot)org>
Subject: Re: Max OID
Date: 2008-09-01 13:54:17
Message-ID: 1220277257.3591.46.camel@localhost.localdomain
Lists: pgsql-novice

See below.
I'm at a loss here.
I've checked the lo_import / lo_export handling and I think I'm doing
all of this correctly.

The original import uses the server-side lo_import in an SQL statement.
The large object appears to have been created OK and the OID value
inserted into the document field below. All good so far. The users could
then get the large object out - this used a call to psql and a
\lo_export of the OID referred to by the document field. This has been
working fine for 2-3 years.
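
For reference, the round trip is essentially this (the paths and the
OID here are placeholders, not the real values):

    -- server side: create the large object; lo_import returns its OID,
    -- which is what gets stored in the document field
    SELECT lo_import('/data/docs/report_1234.pdf');

    -- later, from psql, the object is pulled back out by that same OID
    \lo_export 3159553408 '/tmp/report_1234.pdf'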
On Saturday the live server crashed and was replaced with the backup
server. The live data had been pg_dump'd (from an 8.0.3 database) prior
to the crash and was pg_restore'd (into 8.0.8) onto the backup server.
Again all seemed well - no errors were reported during the restore and
the data looked good.

BUT the document field now contains values of > 2 billion. Is it
possible that the dump or the restore 'altered' not only the document
field, but also the actual OID of the large object? The data all looks
good, in as much as, say, the document field points to large object
3123456789 and large object OID 3123456789 is correct (ie it does tie up
to that record). The problem is that the program that handles the
exporting of the object has never handled OIDs bigger than 2 billion -
so the document field and its related large object would have to have
been <= 2 billion prior to the dump/restore. Does that make sense, and
is it feasible?
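
A sanity check along these lines (assuming the controlling table is
simply called document, as per the quoted schema below) shows where the
values now sit - the ::text hop is just to dodge any signed int4
casting of the oid:

    SELECT min(document), max(document) FROM document;

    -- how many references now exceed the signed 32-bit ceiling
    SELECT count(*)
      FROM document
     WHERE document::text::bigint > 2147483647;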

I have also restored the data into an 8.1.10 database and I see the same
results, ie the document field contains only values > 3 billion - but
they do seem to correlate correctly with the large objects as retrieved
by \lo_export.
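
To verify that correlation in bulk rather than by spot checks, a query
like this should list any rows whose document value no longer matches a
stored large object (pg_largeobject holds one row per page of each
object, hence EXISTS rather than a join):

    SELECT d.recno, d.document
      FROM document d
     WHERE NOT EXISTS (SELECT 1
                         FROM pg_largeobject lo
                        WHERE lo.loid = d.document);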

On Mon, 2008-09-01 at 11:01 +0100, Steve T wrote:

> I have a table that is used to control documents:
> recno                | integer                | not null default nextval('document_recno_seq'::regclass)
> foreigntablename     | character varying(255) | not null
> foreignrecno         | integer                | not null
> docforeigntablename  | character varying(255) | not null
> docforeignrecno      | integer                | not null
> docforeigntext       | character varying(255) | not null
> documenttyperecno    | integer                | not null
> version              | character varying(20)  | not null
> loadeddate           | date                   | not null
> loadedtime           | time without time zone | not null
> usersrecno           | integer                | not null
> title                | character varying(40)  | not null
> description          | character varying(255) |
> doculookupcodesrecno | integer                | not null
> document             | oid                    | not null
> suffix               | character varying(255) |
>
> Each document is loaded using the large object commands, and the OID of
> the load is then inserted as a ref number on the document table.
> This has been working fine, but I had a major crash on the server and
> it had to be rebuilt.
> The database had been exported (pg_dump) - so I reimported
> (pg_restore) the database.
> If I now look at the oid (ie document.document) of the blob, the
> numbers are in the range:
> 3159553408
> 3159553409
> 3159553410
> 3159553411
> 3159553412
> 3159553413
> 3159553414
> 3159553415
> these numbers are above the field I use for the OID in the code (ie an
> Integer - catering for 2 billion) - so the program no longer finds the
> document to export.
> Any ideas as to why these OIDs became so large after reimporting
> (I am assuming here that they must have been under 2 billion before, or
> else the process could not have worked before)? And what limit is there
> then on this OID?
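
For context, the dump and reload were essentially the stock commands
(database and file names made up here):

    pg_dump -Fc -b livedb > livedb.dump    # custom format; -b includes large objects
    pg_restore -d livedb livedb.dump

As I understand it, a pre-8.1 restore recreates each large object from
scratch - so it gets a brand-new OID from the target cluster's counter -
and then fixes up columns of type oid to point at the new objects, which
would explain values that changed wholesale but still tie up correctly.
The OID type itself is an unsigned 32-bit integer, so it runs up to
4294967295; it is only the program's signed integer that stops at 2
billion.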
