
Re: max_files_per_process limit

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Dilek Küçük <dilekkucuk(at)gmail(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: max_files_per_process limit
Date: 2008-11-10 15:24:24
Message-ID:
Lists: pgsql-admin
"Dilek Küçük" <dilekkucuk(at)gmail(dot)com> writes:
> We have a database of about 62000 tables (about 2000 tablespaces) with an
> index on each table. Postgresql version is 8.1.

You should probably rethink that schema.  A lot of similar tables can be
folded into one table with an extra key column.  Also, where did you get
the idea that 2000 tablespaces would be a good thing?  There's really no
point in more than one per spindle or filesystem.
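As a minimal sketch of that folding, with hypothetical table and column names (they are not from the original message): many per-entity tables such as readings_00001, readings_00002, ... can be collapsed into one table whose extra key column carries what the table name used to encode.

```sql
-- One table replaces thousands of per-entity tables;
-- device_id holds the key that was previously part of the table name.
CREATE TABLE readings (
    device_id   integer   NOT NULL,
    recorded_at timestamp NOT NULL,
    value       numeric,
    PRIMARY KEY (device_id, recorded_at)
);

-- A query that used to target one table filters on the key instead:
--   SELECT * FROM readings_00042 WHERE ...
SELECT * FROM readings WHERE device_id = 42;
```

With an index led by the key column, per-entity lookups stay fast while the catalog and filesystem hold one relation instead of tens of thousands.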

> Although after the initial inserts to about 32000 tables the subsequent
> inserts are considerably fast, subsequent inserts to more than 32000 tables
> are very slow.

This has probably got more to do with inefficiencies of your filesystem
than anything else --- did you pick one that scales well to lots of
files per directory?

> This seems to be due to the datatype (integer) of max_files_per_process
> option in the postgres.conf file which is used to set the maximum number of
> open file descriptors.

It's not so much the datatype of max_files_per_process as the datatype
of kernel file descriptors that's the limitation ...
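For context (not part of the original exchange), the effective ceiling comes from the operating system's per-process and system-wide open-file limits, which max_files_per_process must stay under. On a typical Linux system these can be inspected as follows; the /proc path is Linux-specific:

```shell
# Soft limit on open file descriptors for the current process
ulimit -n

# System-wide ceiling on open file handles (Linux-specific)
cat /proc/sys/fs/file-max
```

PostgreSQL will silently use fewer descriptors than max_files_per_process requests if the kernel allows fewer.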

			regards, tom lane
