Re: Fix for large file support

From: Zdenek Kotala <Zdenek(dot)Kotala(at)Sun(dot)COM>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: pgsql-patches(at)postgresql(dot)org
Subject: Re: Fix for large file support
Date: 2007-04-06 14:54:39
Message-ID: 46165F2F.70404@sun.com
Lists: pgsql-hackers pgsql-patches

Andrew Dunstan wrote:
>
>
> Does it mean the maximum field size will grow beyond 1Gb?

No, because it is limited by the varlena size. See
http://www.postgresql.org/docs/8.2/interactive/storage-toast.html
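
(Roughly speaking, the varlena length word is 32 bits and, in 8.2, two of
those bits are reserved for TOAST flags, so a single datum tops out at
2^30 bytes = 1GB. A trivial, generic illustration -- not PostgreSQL code:)

#include <stdio.h>

int main(void)
{
	/* 32-bit length word minus 2 flag bits -> 30 usable bits */
	unsigned long max_datum = 1UL << 30;	/* 1GB */
	printf("max varlena datum: %lu bytes\n", max_datum);
	return 0;
}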

> Or give better performance?

Yes. The list of chunks is stored as a linked list, and for some
operations (e.g. extending the relation) all chunks are opened and their
sizes checked. On big tables this takes some time. For example, if you
have a 1TB table and want to add a new block, you must go and open all
1024 files.
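
(To make that concrete, here is a minimal sketch of the kind of loop
involved -- assuming 1GB segment files named <relfilenode>,
<relfilenode>.1, ..., and not the actual md.c code:)

#include <stdio.h>
#include <sys/stat.h>

static off_t
relation_size(const char *relpath)
{
	struct stat st;
	char		path[1024];
	off_t		total = 0;
	int			segno = 0;

	for (;;)
	{
		if (segno == 0)
			snprintf(path, sizeof(path), "%s", relpath);
		else
			snprintf(path, sizeof(path), "%s.%d", relpath, segno);

		/* each segment must be stat'ed/opened just to learn its size */
		if (stat(path, &st) != 0)
			break;				/* no more segments */

		total += st.st_size;
		segno++;
	}

	return total;				/* a 1TB relation means ~1024 stat() calls */
}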

By the way, the ./configure script performs a check for __LARGE_FILE_
support, but it looks like it is not used anywhere.

There could be a small time penalty from 64-bit arithmetic. However, it
only applies when large file support is enabled on a 32-bit OS.
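
(The penalty I mean is just that off_t becomes a 64-bit type on a 32-bit
platform once large files are enabled, so all offset arithmetic is done
in 64 bits. Generic illustration, not PostgreSQL code:)

#define _FILE_OFFSET_BITS 64	/* what large-file-enabled builds typically define */
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
	/* 8 with large file support, 4 without (on a 32-bit OS) */
	printf("sizeof(off_t) = %zu\n", sizeof(off_t));
	return 0;
}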

Zdenek
