Re: Large objects.

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Dmitriy Igrishin <dmitigr(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Large objects.
Date: 2010-09-27 14:50:34
Message-ID: 14109.1285599034@sss.pgh.pa.us
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> According to the documentation, the maximum size of a large object is
> 2 GB, which may be the reason for this behavior.

In principle, since pg_largeobject stores an integer pageno, we could
support large objects of up to LOBLKSIZE * 2^31 bytes = 4TB without any
incompatible change in on-disk format. This'd require converting a lot
of the internal LO access logic to track positions as int64 not int32,
but now that we require platforms to have working int64 that's no big
drawback. The main practical problem is that the existing lo_seek and
lo_tell APIs use int32 positions. I'm not sure if there's any cleaner
way to deal with that than to add "lo_seek64" and "lo_tell64" functions,
and have the existing ones throw error if asked to deal with positions
past 2^31.

In the particular case here, I think that lo_write may actually be
writing past the 2GB boundary, while the coding in lo_read is a bit
different and stops at the 2GB "limit".

regards, tom lane
