On Mon, Nov 10, 2008 at 5:24 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> "Dilek Küçük" <dilekkucuk(at)gmail(dot)com> writes:
> > We have a database of about 62000 tables (about 2000 tablespaces) with an
> > index on each table. The PostgreSQL version is 8.1.
> You should probably rethink that schema. A lot of similar tables can be
> folded into one table with an extra key column. Also, where did you get
> the idea that 2000 tablespaces would be a good thing? There's really no
> point in more than one per spindle or filesystem.
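As a sketch of the folding Tom suggests (all table and column names below are hypothetical, invented for illustration):

```sql
-- Before: one table per data source, e.g. readings_00001 ... readings_62000,
-- each with its own index.
-- After: a single table with an extra key column identifying the source.
CREATE TABLE readings (
    source_id  integer NOT NULL,    -- replaces the per-table split
    ts         timestamp NOT NULL,
    value      double precision,
    PRIMARY KEY (source_id, ts)     -- one index serves all sources
);

-- A query that used to hit readings_00042 becomes:
-- SELECT * FROM readings WHERE source_id = 42 AND ts >= '2008-11-01';
```

With this layout the planner works with one table and one index instead of 62000 of each, and the per-relation file-descriptor pressure largely disappears.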
> > Although after the initial inserts to about 32000 tables the subsequent
> > inserts are considerably fast, subsequent inserts to more than 32000
> > tables are very slow.
> This has probably got more to do with inefficiencies of your filesystem
> than anything else --- did you pick one that scales well to lots of
> files per directory?
The database runs on FreeBSD 6.3 with the UFS file system. The server has
32 GB of RAM, two quad-core Intel Xeon 2.66 GHz processors, and about 11 TB
of RAID5 storage.
> > This seems to be due to the datatype (integer) of the max_files_per_process
> > option in the postgresql.conf file, which is used to set the maximum number
> > of open file descriptors.
> It's not so much the datatype of max_files_per_process as the datatype
> of kernel file descriptors that's the limitation ...
We do not get any system messages about the kernel file descriptor limit
(such as "file: table is full"), but we will revisit both the database
schema (tablespaces etc.) and the kernel tunables.
> regards, tom lane
Subject: Re: max_files_per_process limit