Re: Large Objects

From: Richard Huxton <dev(at)archonet(dot)com>
To: haukinger(at)gmx(dot)de
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Large Objects
Date: 2007-02-23 08:06:25
Message-ID: 45DEA081.7000803@archonet.com
Lists: pgsql-general

haukinger(at)gmx(dot)de wrote:
> Hi all!
>
> I'm working on a database that needs to handle insertion of about
> 100000 large objects (50..60GB) a day. It should be able to run for 200
> days, so it will eventually grow to about 10TB, mostly of 200..500KB
> large objects. How does access to large objects work? I give the OID
> and get the large object... what is done internally? How (if at all)
> are the OIDs indexed?
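
(For reference, access always goes through the OID: the bytes are stored
in small chunks in the pg_largeobject system catalog, with a unique index
on (loid, pageno), and clients read and write them through the lo_*
functions inside a transaction. A minimal libpq sketch of the write path
follows; the connection string and the lack of error handling are
placeholder assumptions, not a finished loader.)

/* Minimal sketch: create one large object and write into it. */
#include <stdio.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>     /* INV_READ / INV_WRITE */

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");   /* placeholder DSN */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));          /* lo_* calls need a transaction */

    Oid oid = lo_creat(conn, INV_WRITE);     /* server assigns the OID */
    int fd  = lo_open(conn, oid, INV_WRITE);

    const char buf[] = "payload bytes ...";
    lo_write(conn, fd, buf, sizeof(buf) - 1);   /* stream the data in chunks */
    lo_close(conn, fd);

    PQclear(PQexec(conn, "COMMIT"));
    printf("stored large object with OID %u\n", oid);

    PQfinish(conn);
    return 0;
}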

Albe's answered your actual question, but I wonder whether you really
want to do this at all.

The key question is whether you need to have the actual objects stored
under transactional control. If not, just saving them as files will
prove much more efficient.
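
As a rough sketch of the file-based route, assuming a hypothetical
documents(path) table and leaving file naming and cleanup aside: write
the bytes to disk yourself and store only the path through libpq.

/* Hypothetical helper: bytes in a file, path in an ordinary table. */
#include <stdio.h>
#include <libpq-fe.h>

int store_as_file(PGconn *conn, const char *path,
                  const char *data, size_t len)
{
    FILE *f = fopen(path, "wb");
    if (!f || fwrite(data, 1, len, f) != len) {
        if (f) fclose(f);
        return -1;
    }
    fclose(f);

    /* only the path goes through the database */
    const char *params[1] = { path };
    PGresult *res = PQexecParams(conn,
        "INSERT INTO documents (path) VALUES ($1)",
        1, NULL, params, NULL, NULL, 0);
    int ok = (PQresultStatus(res) == PGRES_COMMAND_OK) ? 0 : -1;
    PQclear(res);
    return ok;
}

The trade-off is exactly the one above: the file itself is outside
transactional control and outside your database backups.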

--
Richard Huxton
Archonet Ltd
