We have a database of about 62,000 tables (in about 2,000 tablespaces), with an
index on each table. The PostgreSQL version is 8.1.
Inserts into the first roughly 32,000 tables are considerably fast, but
subsequent inserts into tables beyond that point are very slow.
This seems to be due to the integer datatype of the max_files_per_process
option in postgresql.conf, which sets the maximum number of open file
descriptors per server process.
Is there anything we can do about this max_files_per_process limit, or is
there any other way to speed up inserts into all these tables?
Any suggestions are welcome.
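For reference, the setting in question lives in postgresql.conf; the value below is the stock default (shown for illustration only, not as a recommendation), and the effective limit is also capped by the operating system's per-process descriptor limit (e.g. ulimit -n):

```
# postgresql.conf -- illustrative default, not a tuning recommendation
max_files_per_process = 1000    # max open file descriptors per backend process
```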