Re: 4 billion record limit?

From: Paul Caskey <paul(at)nmxs(dot)com>
To: Thomas Lockhart <lockhart(at)alumni(dot)caltech(dot)edu>
Cc: Postgres Users <pgsql-general(at)postgresql(dot)org>
Subject: Re: 4 billion record limit?
Date: 2000-07-28 17:06:26
Message-ID: 3981BD92.B5B479E0@nmxs.com
Lists: pgsql-general pgsql-novice

Thomas Lockhart wrote:
>
> > FWIW, I checked into MySQL, and as far as I can tell, they have nothing
> > like this implicit 4 billion transactional "limit". So maybe competitive
> > spirit will drive the postgres hackers to fix this problem sooner than
> > later. ;-)
>
> We have *never* had a report of someone pushing this 4GB limit, and
> theoretical problems usually go into the long-term development plan, not
> in the "OHMYGODITSBROKEN" list.

That's absolutely true, and I'm sorry for pushing your MySQL button. I'm
not in some sort of panic to get this fixed. I just raised this issue to
(1) see whether I was the only one who noticed or cared, and (2) learn
enough details to find out what limits are really there.

Tom Lane wrote:
>
> Hmph. Has anyone ever come close to shoving 4G rows into MySQL? Maybe
> they just haven't documented their limits...

The gauntlet has been dropped! Looks like I have my work cut out for me
this weekend. :-)

Again, just to clarify, I'm not concerned about a database holding 4
billion records at once. I'm concerned about performing 4,000,000,000
inserts and 3,999,999,000 deletes over the course of a few years. I just
ran the numbers and that's about 50 transactions/second on an
international web site active 24 hours/day for 3 years. Put 5 such sites
on one server and you're down to 10 trans/sec per site. Ramp up to 30
trans/sec/site and you're down to 1 year. Juggle these numbers however
you want; it's not that far-fetched.
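
If anyone wants to check my arithmetic, here it is spelled out. This is
just a quick Python sketch of the back-of-the-envelope math above; the
numbers are rounded estimates, not benchmarks.

    # Back-of-the-envelope check of the figures above.
    OID_LIMIT = 4 * 10**9                # roughly 2^32 object IDs
    SECONDS_PER_YEAR = 365 * 24 * 3600   # 31,536,000

    # One site running nonstop for 3 years: the insert rate that burns
    # through 4 billion OIDs.
    rate = OID_LIMIT / (3 * SECONDS_PER_YEAR)
    print("inserts/sec to hit the limit in 3 years: %.0f" % rate)    # ~42

    # Five sites on one server at 30 inserts/sec each (150 total):
    years = OID_LIMIT / (150 * SECONDS_PER_YEAR)
    print("years to hit the limit at 150 inserts/sec: %.2f" % years) # ~0.85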

Sure, it has a lot to do with the application. Most databases are read
much more than they're written, and of course a SELECT doesn't chew up an
OID. But every INSERT does. Any application that updates information
frequently can be vulnerable to this problem. With my table structure, it
happens to be much easier to DELETE and INSERT inside a transaction than
to UPDATE. And every "row" of customer-supplied data could turn into 100
INSERTs, so I'm accelerating toward that 4G limit 100 times faster.
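
To make that pattern concrete, here's roughly what I mean. This is a
Python sketch using the psycopg2 driver; the widget_attr table and its
columns are made-up names for illustration, not my real schema.

    import psycopg2

    conn = psycopg2.connect("dbname=example")
    cur = conn.cursor()

    # In reality there are ~100 attribute rows per customer "row".
    attributes = [("color", "red"), ("size", "large")]

    # The way my schema makes it easy: wipe the old detail rows and
    # re-insert them inside one transaction.  Every INSERT here consumes
    # a fresh OID, even though the table never gets any bigger.
    cur.execute("DELETE FROM widget_attr WHERE widget_id = %s", (42,))
    for name, value in attributes:
        cur.execute(
            "INSERT INTO widget_attr (widget_id, name, value)"
            " VALUES (%s, %s, %s)",
            (42, name, value))
    conn.commit()

    # The OID-frugal alternative is UPDATE in place (as I understand it,
    # a row keeps its OID across an UPDATE), but my table structure makes
    # that much clumsier:
    #   UPDATE widget_attr SET value = %s
    #    WHERE widget_id = %s AND name = %s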

Bottom line: I can recycle my own IDs if I have to. But I need to know
whether I should change my table structure and/or code to conserve OIDs,
internal system identifiers particular to the RDBMS I chose. Considering
that the main reason I chose Postgres was scalability, this seems a
little ironic.

--
Paul Caskey paul(at)nmxs(dot)com Software Engineer
New Mexico Software 5041 Indian School NE Albuquerque, NM 87110
--
