Re: 4 billion record limit?

From: Bradley Kieser <brad(at)kieser(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Paul Caskey <paul(at)nmxs(dot)com>, Chris Bitmead <chrisb(at)nimrod(dot)itg(dot)telstra(dot)com(dot)au>, Postgres Users <pgsql-general(at)postgresql(dot)org>
Subject: Re: 4 billion record limit?
Date: 2000-07-27 10:02:04
Message-ID: E13HkUO-0001IM-00@kieser.net
Lists: pgsql-general pgsql-novice

Quoting Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>:

> Paul Caskey <paul(at)nmxs(dot)com> writes:
> >> No doubt about it, you're likely to get a few "duplicate key" errors and
> >> stuff like that. I'm just observing that it's not likely to be a
> >> complete catastrophe, especially not if you don't rely on OIDs to be
> >> unique in your user tables.
>
> > I don't rely on OID uniqueness, but I assumed Postgres does!
>
> Only in the system tables, and not even in all of them. From the
> system's point of view, there's no real need to assign OIDs to
> user table rows at all --- so another possible answer is not to
> do that, unless the user requests it.
>
This changes things a lot. If rows don't have to have OIDs associated with them, then the 4bn OID limit doesn't act as a limit on the number of rows you can ever insert... in which case there shouldn't be a problem.
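As a rough sketch of what opting out might look like (the WITHOUT OIDS clause below is assumed syntax along the lines Tom describes, not something the current release necessarily accepts):

    CREATE TABLE orders (
        order_id  serial PRIMARY KEY,  -- application-level unique key instead of the OID
        customer  text NOT NULL
    ) WITHOUT OIDS;  -- no OID assigned per row, so inserts don't consume the global OID counter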

> regards, tom lane
>

Bradley Kieser
Director
Kieser.net
