Re: 4 billion record limit?

From: Chris Bitmead <chrisb(at)nimrod(dot)itg(dot)telstra(dot)com(dot)au>
To: Postgres Users <pgsql-general(at)postgresql(dot)org>
Subject: Re: 4 billion record limit?
Date: 2000-07-27 04:12:41
Message-ID: 397FB6B9.267A1A62@nimrod.itg.telecom.com.au
Lists: pgsql-general pgsql-novice


The Versant ODBMS uses 48-bit OIDs, and if you do the math I think
you'll find that should last you forever. (It uses an additional 16 bits
to identify the database, but that's another story.)
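
To make "do the math" concrete, here's a quick back-of-envelope check
in C (the 1,000 inserts/sec rate is just an assumed figure for
illustration):

#include <stdio.h>

int main(void)
{
    unsigned long long oid_space = 1ULL << 48;   /* 2^48 distinct OIDs */
    double inserts_per_sec = 1000.0;             /* assumed sustained rate */
    double seconds = oid_space / inserts_per_sec;
    double years = seconds / (60.0 * 60.0 * 24.0 * 365.25);

    printf("OID space: %llu\n", oid_space);
    printf("Years at %.0f inserts/sec: %.0f\n", inserts_per_sec, years);
    return 0;
}

That prints about 8,900 years, which is "forever" for any practical
purpose.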

Any complex scheme to solve this seems like a waste of time. In a couple
of years, when you are likely to be running out, you'll probably be
upgrading your computer to a 64-bit one with a newer version of Postgres,
and then the problem will disappear.

brad wrote:
>
> The implications of the OIDs not wrapping are immense. We have some extremely
> active databases that will easily reach this limit in two or three years. For
> some applications, such as ecommerce, dumping and then reinserting the rows is
> not an option for large databases, due to the 24-hour nature of their work.
>
> This is a much more complex problem than it would at first seem, as the
> "tripping up" over old records with low OIDs still presents a problem, yet if
> the system is changed to try to manage a list of available OIDs, it will
> then hit performance problems.
>
> Simply waiting for 64-bit numbers is rather inelegant and also presumes usage
> parameters for the database... remember Bill Gates saying that he couldn't
> foresee any use for more than 640KB of RAM? Besides which, PostgreSQL is the
> best DB around... there's a high standard to maintain!
>
> Some initial ideas:
> a) The system keeps working the way it does now until a configurable
> cutoff point (% of OIDs remaining), at which point the DBA can either switch
> on some sort of OID "garbage collection" and take the associated performance
> hit, or dump the data and reinsert it to pack the OIDs.
> GARBAGE COLLECTION:
> b) The system could wrap around, keeping an internal pointer of where it is
> in the OID chain. It could scan the OIDs sequentially, finding the first free
> OID. It could then store that position as the new start point for the next
> time an OID is needed (see the sketch after this list).
> c) An OID compression utility could be written that doesn't require bringing
> the DB down (but will obviously have a performance hit). As running this
> utility would be a known impact, provision could be made for it, and the knock
> budgeted and accounted for, while not bringing the entire DB to a halt.
> d) OIDs themselves could form a list. The OID system as it stands now could
> be the "default mode", but if the OIDs run out, then a new OID list is
> started. This means OID usage would then change to work along the same
> lines as Intel's segmented memory addressing. Personally I hate it, but it
> does work.
> e) OIDs could be scrapped in favour of some other system.
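
A rough sketch in C of what (b) might look like. The oid_in_use()
predicate is a hypothetical stand-in for a real catalog lookup, not
anything in the PostgreSQL source, and the linear probe is exactly
where the performance hit would come from once the OID space gets
dense:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical predicate: does any tuple already carry this OID? */
extern bool oid_in_use(uint32_t oid);

/* Scan position; a real system would persist this across restarts. */
static uint32_t next_oid = 0;

uint32_t allocate_oid(void)
{
    do {
        next_oid++;            /* unsigned: wraps to 0 past 2^32 - 1 */
        if (next_oid == 0)
            next_oid = 1;      /* keep 0 reserved as the invalid OID */
    } while (oid_in_use(next_oid));
    return next_oid;
}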
>
> Well, that's my two pence worth.
>
> Brad
>
> Paul Caskey wrote:
>
> > Tom Lane wrote:
> > >
> > > Chris Bitmead <chrisb(at)nimrod(dot)itg(dot)telstra(dot)com(dot)au> writes:
> > > > Paul Caskey wrote:
> > > >> 1. This implies a hard limit of 4 billion records on a server, right?
> > >
> > > > Basically, yes.
> > >
> > > It's only a hard limit if your application assumes OIDs are unique.
> > > If you don't assume that, then I think it's not a big problem.
> > >
> > > It's possible (though obviously not especially likely) that you might
> > > get OID collisions in the system tables after an OID-counter wraparound.
> >
> > This implies they do wrap around. So they are reused? Chris said no, but
> > you're saying yes.
> >
> > (Maybe they wrap around "by accident", by adding one to MAXINT, which will
> > give zero on an unsigned int, I believe. Will the system choke on zero?
> > Has anyone tested this wraparound?)
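
On the C-level mechanics, at least, there's no mystery: unsigned
arithmetic is defined to wrap modulo 2^32, so the counter comes back
around to zero rather than trapping. A minimal demonstration (not
PostgreSQL source):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int oid = UINT_MAX;   /* 4294967295 with 32-bit ints */
    oid = oid + 1;                 /* well-defined wrap, not undefined overflow */
    printf("%u\n", oid);           /* prints 0 */
    return 0;
}

Whether the system then copes with OID zero is the real question,
since zero is, as far as I know, reserved as the invalid OID.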
> >
> > I will not have 4 billion records in one table or even one database. But
> > on a large server with many databases, it is conceivable to have 4 billion
> > records on one machine. With a lot of insert/delete activity, over a few
> > years, it is certainly conceivable to have 4 billion inserts. If the oids
> > don't wrap, I have a problem. I can ignore it for a long time, but it
> > will loom, like Y2K. :-)
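
For what it's worth, the arithmetic backs you up: 2^32 inserts over
three years is 4,294,967,296 / (3 * 31,557,600 s), or roughly 45
inserts per second sustained, which a busy server can certainly reach.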
> >
> > Even if they do wrap, if I have some old records lying around with low
> > OIDs, they will trip me up. Like you said, these are "the outer limits",
> > but I'm thinking ahead.
> >
> > Someone suggested in private that I pg_dump/restore all my data to
> > "repack" the OIDs, which start around 1700 on a fresh database. Thanks for
> > that idea. Also thanks, Tom, for the sanity check Re: terabytes of data
> > with 4 billion records. It's still possible, especially in coming years.
> > It would be a big feather in PG's cap to "fully support" 64-bit platforms
> > such as IRIX and Solaris (finally) and, coming soon to a theater near you,
> > Linux on IA-64.
> >
> > --
> > Paul Caskey paul(at)nmxs(dot)com 505-255-1999
> > New Mexico Software 5041 Indian School NE Albuquerque, NM 87110
> > --
