From: | "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com> |
---|---|
To: | Greg Spiegelberg <gspiegelberg(at)cranel(dot)com> |
Cc: | <pgsql-admin(at)postgresql(dot)org> |
Subject: | Re: Max file size |
Date: | 2003-07-01 13:24:30 |
Message-ID: | Pine.LNX.4.33.0307010720010.16378-100000@css120.ihs.com |
Lists: | pgsql-admin |
On Tue, 1 Jul 2003, Greg Spiegelberg wrote:
> scott.marlowe wrote:
> > On Tue, 1 Jul 2003, mauricio wrote:
> >
> >
> >>Hi,
> >>I'm evaluating some DB and one of the things i'd like to know is the
> >>maximum size of a file that postgres can handle with. cause i'm planning
> >>to have a centalized database the must have some billion records.
> >
> >
> > In its default configuration, PostgreSQL automatically splits tables at approximately
> > 1 gigabyte per file, so it has no built-in limit on table size.
> >
> > If you have an OS that can handle larger files, you can compile postgresql
> > to use larger file sizes. I have seen no great improvement in speed in
> > using one large file for a table over splitting at 1Gig.
>
> Hrm. This all ought to be dependent on record size and operating
> system limits on the number of file descriptors, shouldn't it?
Not sure what you mean. The number of file descriptors usually isn't a
big issue unless you've got a default installation of an older OS, or
unless you need to start up a lot of backends.
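For what it's worth, the 1 GB splitting described above can be sketched as follows. This is a minimal illustration, not PostgreSQL code: `RELSEG_SIZE` matches the default compile-time segment size, the relfilenode `16384` is just an example value, and `segment_files` is a hypothetical helper showing how a table's total size maps onto its on-disk segment file names.

```python
# Sketch of PostgreSQL's default table segmenting: a table larger than
# RELSEG_SIZE is stored as multiple files named <relfilenode>,
# <relfilenode>.1, <relfilenode>.2, and so on.
RELSEG_SIZE = 1 << 30  # 1 GiB, the default compile-time segment size

def segment_files(table_bytes: int, relfilenode: int = 16384) -> list:
    """Return the segment file names a table of the given size occupies."""
    if table_bytes == 0:
        return [str(relfilenode)]
    # Number of 1 GiB segments, rounding up for any partial last segment.
    nsegs = (table_bytes + RELSEG_SIZE - 1) // RELSEG_SIZE
    return [str(relfilenode) if i == 0 else f"{relfilenode}.{i}"
            for i in range(nsegs)]

# A 2.5 GB table spans three segment files:
print(segment_files(int(2.5 * RELSEG_SIZE)))  # ['16384', '16384.1', '16384.2']
```

Because the splitting happens at the storage layer, the table size is limited by the filesystem's capacity and directory limits rather than by its maximum single-file size, which is why recompiling for larger segments gains little.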
From | Date | Subject | |
---|---|---|---|
Next Message | Tom Lane | 2003-07-01 14:11:24 | Re: big tables with lots-o-rows |
Previous Message | scott.marlowe | 2003-07-01 13:00:45 | Re: Max file size |