Re: Berkeley DB...

From: Chris Bitmead <chrisb(at)nimrod(dot)itg(dot)telstra(dot)com(dot)au>
To: "Michael A(dot) Olson" <mao(at)sleepycat(dot)com>
Cc: "Mikheev, Vadim" <vmikheev(at)sectorbase(dot)com>, pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: Berkeley DB...
Date: 2000-05-22 00:10:48
Message-ID: 39287B08.49C7E81E@nimrod.itg.telecom.com.au
Lists: pgsql-hackers

"Michael A. Olson" wrote:

> You get another benefit from Berkeley DB -- we eliminate the 8K limit
> on tuple size. For large records, we break them into page-sized
> chunks for you, and we reassemble them on demand. Neither PostgreSQL
> nor the user needs to worry about this, it's a service that just works.
>
> A single record or a single key may be up to 4GB in size.

That's certainly nice. But if you don't access a BIG column, do you still
have to retrieve the whole record? A very nice property of the Postgres
TOAST design is that you don't. You can have...
CREATE TABLE image (name TEXT, size INTEGER, giganticTenMegImage GIF);
As long as you don't select the huge column, you never lift it off disk.
That's pretty nice. In other databases I've had to do some annoying
refactoring of data models to avoid this.
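To make the access pattern concrete, a minimal sketch (the "GIF" column type
above is hypothetical, as is the sample WHERE clause; the point is only which
columns a query touches):

```sql
-- Hypothetical table from above; "GIF" stands in for a large binary type.
CREATE TABLE image (name TEXT, size INTEGER, giganticTenMegImage GIF);

-- Touches only the small columns: the out-of-line image data is never read.
SELECT name, size FROM image WHERE name = 'logo';

-- Only a query naming the big column forces it to be fetched and reassembled.
SELECT giganticTenMegImage FROM image WHERE name = 'logo';
```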
