
Re: Fix for large file support

From: Zdenek Kotala <Zdenek(dot)Kotala(at)Sun(dot)COM>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: pgsql-patches(at)postgresql(dot)org
Subject: Re: Fix for large file support
Date: 2007-04-06 14:54:39
Message-ID: 46165F2F.70404@sun.com
Lists: pgsql-hackers pgsql-patches
Andrew Dunstan wrote:
> 
> 
> Does it mean the maximum field size will grow beyond 1Gb? 

No, because it is limited by the varlena size. See
http://www.postgresql.org/docs/8.2/interactive/storage-toast.html
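
Roughly, and only as a sketch with made-up names rather than the real
PostgreSQL headers: the 4-byte varlena length word loses its two high
bits to TOAST flags (external, compressed), so only 30 bits are left for
the payload length, which is where the 1GB cap comes from.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only -- not the actual PostgreSQL varlena definition. */
typedef struct varlena_sketch
{
    uint32_t    vl_len;         /* TOAST flags in the top 2 bits, length below */
    char        vl_dat[1];      /* datum payload follows */
} varlena_sketch;

#define VARLENA_FLAG_BITS   2
#define VARLENA_MAX_SIZE    (1U << (32 - VARLENA_FLAG_BITS))   /* 1GB */

int
main(void)
{
    printf("max varlena size: %u bytes (%u MB)\n",
           VARLENA_MAX_SIZE, VARLENA_MAX_SIZE / (1024 * 1024));
    return 0;
}

So the field size cap stays at 1GB no matter how many segment files the
table itself spans.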

> Or give better performance?

Yes. The list of chunks is stored as a linked list, and for some operations
(e.g. extending the relation) all chunks are opened and their sizes are
checked. On big tables this takes some time. For example, if you have a 1TB
table and want to add a new block, you must go and open all 1024 files.
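
As a rough sketch of what I mean (hypothetical names and layout, not the
real md.c code): the relation is split into 1GB segment files kept in a
linked list, and learning the relation's current size means opening and
stat()ing every one of them.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>

#define SEGMENT_SIZE ((off_t) 1024 * 1024 * 1024)   /* 1GB per segment file */

typedef struct Segment
{
    char            path[64];       /* e.g. "base/16384/12345.17" */
    int             fd;             /* -1 until opened */
    struct Segment *next;
} Segment;

/*
 * Walk the whole chain; every segment must be opened and fstat()ed just
 * to learn the relation's size.  A 1TB relation means 1024 such files.
 */
off_t
relation_size(Segment *seg)
{
    off_t       total = 0;

    for (; seg != NULL; seg = seg->next)
    {
        struct stat st;

        if (seg->fd < 0)
            seg->fd = open(seg->path, O_RDWR);  /* one open per segment */
        if (seg->fd < 0 || fstat(seg->fd, &st) < 0)
        {
            perror(seg->path);
            exit(1);
        }
        total += st.st_size;
    }
    return total;
}

With 64-bit file offsets the same data fits in far fewer (or a single)
file, so far fewer descriptors have to be opened for such operations.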

By the way, the ./configure script performs a check for __LARGE_FILE_
support, but it looks like it is not used anywhere.
	
There could be a small time penalty from 64-bit arithmetic; however, it
only applies when large file support is enabled on a 32-bit OS.
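
To show where that penalty would come from (just a sketch, assuming the
common glibc-style _FILE_OFFSET_BITS=64 switch; BLCKSZ 8192 is the default
block size): off_t becomes a 64-bit type, so seek-offset arithmetic on a
32-bit CPU has to be done in two-register operations.

#define _FILE_OFFSET_BITS 64    /* what a large-file check typically enables */

#include <stdio.h>
#include <sys/types.h>

#define BLCKSZ 8192

int
main(void)
{
    off_t   blockno = 300000;               /* a block well past the 2GB mark */
    off_t   seekpos = blockno * BLCKSZ;     /* 64-bit multiply on a 32-bit CPU */

    printf("sizeof(off_t) = %zu, offset = %lld\n",
           sizeof(off_t), (long long) seekpos);
    return 0;
}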

	Zdenek

