Re: Max OID

From: Steve T <steve(at)retsol(dot)co(dot)uk>
To: "Harold A(dot)" Giménez "Ch(dot)" <harold(dot)gimenez(at)gmail(dot)com>
Cc: PostGreSQL <pgsql-novice(at)postgresql(dot)org>
Subject: Re: Max OID
Date: 2008-10-17 06:36:12
Message-ID: 1224225372.3598.482.camel@localhost.localdomain
Lists: pgsql-novice

Harold,
Thanks for the pointer!

On Thu, 2008-10-16 at 15:15 -0400, Harold A. Giménez Ch. wrote:
> Maybe you need to specify "-o" when pg_dump'ing. From the docs for
> pg_dump:
>
> -o
>
> --oids
>
> Dump object identifiers (OIDs) as part of the data for every
> table. Use this option if your application references the OID
> columns in some way (e.g., in a foreign key constraint).
> Otherwise, this option should not be used.
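
That looks like the piece I was missing. For the record, this is roughly
the sequence I intend to try - the database names and file paths below are
placeholders, not my real ones:

    # Dump in custom format, keeping OIDs with the table data (-o)
    # and including the large objects themselves (-b):
    pg_dump -Fc -o -b -f mydb.dump mydb

    # Restore into a freshly created database:
    createdb mydb_restored
    pg_restore -d mydb_restored mydb.dump
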
>
> On Thu, Oct 16, 2008 at 3:04 PM, Steve T <steve(at)retsol(dot)co(dot)uk> wrote:
>
> All,
> Does anyone have an answer to this?
>
> I had thought that creating a new instance (using initdb)
> and then pg_restoring the dump into it would reset the
> OIDs, but the values I end up with seem arbitrary.
>
> If I have a set of large objects and I have a pointer on a
> table to that object, what happens to that pointer when the
> database is dumped and restored? Is there a way I can force
> the restored OIDs to be < 2 billion?
>
> I had expected the OIDs to remain static - ie I load an
> object and it is given OID 1. I then put the value of that OID
> pointer on a table - so that field also now shows 1. I then
> dump that database and restore it elsewhere. I expected the
> OID and the field pointer to have the same value in the
> restored database, but that doesn't appear to be the case.
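
A quick way to see what range the restored large objects actually landed
in is to query the pg_largeobject catalog directly (mydb below is a
placeholder for the real database name):

    # Show the lowest and highest large object OIDs in the database:
    psql -d mydb -c "SELECT min(loid), max(loid) FROM pg_largeobject;"
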
>
> On Mon, 2008-09-01 at 14:54 +0100, Steve T wrote:
>
> > See below.
> > I'm at a loss here.
> > I've checked the lo_import / lo_export and I think that I am
> > doing all this correctly.
> >
> > The original import uses the server-side lo_import in an SQL
> > statement. The large object appears to have been created OK
> > and the OID value inserted into the document field below.
> > All good so far. The users could then get the large object
> > out - this used a call to psql and a \lo_export to get the
> > OID referred to by the document field. This has been working
> > fine for 2-3 years.
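
For anyone following along, the round trip we use is essentially this -
the database name, file paths, and example OID are placeholders:

    # Server-side import: the file is read by the database server,
    # and lo_import() returns the new large object's OID:
    psql -d mydb -c "SELECT lo_import('/tmp/example.pdf');"

    # Client-side export of a given large object OID to a local file:
    psql -d mydb -c "\lo_export 123456 /tmp/example_out.pdf"
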
> > On Saturday the live server crashed and was replaced with
> > the backup server. The live data was pg_dump'd (from an
> > 8.0.3 database) prior to the crash and pg_restored (into
> > 8.0.8) onto the backup server. Again all seemed well - no
> > errors reported during the restore and the data looked good.
> > BUT the document field now contains values of > 2 billion. Is
> > it possible that the dump or the restore 'altered' not only
> > the document field, but also the actual OID of the large
> > object? The data all looks consistent, in as much as, say, the
> > document field points to large object 3123456789 and large
> > object OID 3123456789 is correct (ie it does tie up to that
> > record). The problem is that the program that handles the
> > exporting of the object has never handled OIDs bigger than
> > 2 billion - so the document field and its related large object
> > must have been <= 2 billion prior to the dump/restore. Does
> > that make sense, and is it feasible?
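
A quick way to count how many references now exceed what a signed 32-bit
integer can hold (mydb is a placeholder, and the table name document is my
assumption here - the real schema is quoted further down):

    # Count document references above the signed integer maximum:
    psql -d mydb -c "SELECT count(*) FROM document WHERE document > 2147483647::oid;"
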
> >
> > I have also restored the data into an 8.1.10 database and I
> > see the same results, ie the document field contains only
> > values > 3 billion - but they do seem to correctly correlate
> > to the large objects as retrieved by \lo_export.
> >
> > On Mon, 2008-09-01 at 11:01 +0100, Steve T wrote:
> >
> > > I have a table that is used to control documents:
> > > recno | integer | not null default nextval('document_recno_seq'::regclass)
> > > foreigntablename | character varying(255) | not null
> > > foreignrecno | integer | not null
> > > docforeigntablename | character varying(255) | not null
> > > docforeignrecno | integer | not null
> > > docforeigntext | character varying(255) | not null
> > > documenttyperecno | integer | not null
> > > version | character varying(20) | not null
> > > loadeddate | date | not null
> > > loadedtime | time without time zone | not null
> > > usersrecno | integer | not null
> > > title | character varying(40) | not null
> > > description | character varying(255) |
> > > doculookupcodesrecno | integer | not null
> > > document | oid | not null
> > > suffix | character varying(255) |
> > >
> > > Each document is loaded using the large object commands,
> > > and the OID from the load is then inserted as a ref
> > > number on the document table.
> > > This has been working fine, but I had a major crash on the
> > > server and it had to be rebuilt.
> > > The database had been exported (pg_dump) - so I
> > > reimported (pg_restore) the database.
> > > If I now look at the OID (ie document.document) of the
> > > blobs, the numbers are in the range:
> > > 3159553408
> > > 3159553409
> > > 3159553410
> > > 3159553411
> > > 3159553412
> > > 3159553413
> > > 3159553414
> > > 3159553415
> > > These numbers are above what the field I use for the OID
> > > in the code can hold (ie an integer, catering for up to 2
> > > billion) - so the program no longer finds the document to
> > > export.
> > > Any ideas as to why these OIDs became so large after
> > > reimporting (I am assuming here that they must have been
> > > under 2 billion before, or else the process could not have
> > > worked)? And what limit is there then on this OID?
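
Partly answering my own last question: an OID is an unsigned 32-bit value,
so the hard ceiling is 2^32 - 1 = 4294967295, while a signed 32-bit integer
in application code tops out at 2^31 - 1 = 2147483647 - which is exactly
why values like 3159553408 no longer fit.
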
>
> Steve Tucknott
> ReTSol Ltd
>
> DDI: 01323 488548

Steve Tucknott
ReTSol Ltd

DDI: 01323 488548
