Max files per process..

From: "Eamonn Kent" <ekent(at)xsigo(dot)com>
To: <pgsql-admin(at)postgresql(dot)org>
Subject: Max files per process..
Date: 2007-02-20 18:14:13
Message-ID: 9146E3EBBFBCC94D95F95A1C4065348A01282E11@exch01.xsigo.com
Lists: pgsql-admin

Hi,

We are using PostgreSQL 8.1.4 on an embedded Linux device. I have left
max_files_per_process unset, so it should take the default value of
1000. However, lsof shows that at times the postmaster has 1023 files
open.
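For reference, the same count can be reproduced without lsof by listing the process's fd directory under /proc (Linux-specific). This is a self-contained sketch using the current shell's PID; substitute the postmaster's PID (e.g. from `pgrep -o postmaster`) to check the server process:

```shell
# Count the open file descriptors of a process via /proc (Linux).
# $$ is the current shell's PID, used here only so the snippet runs
# anywhere; replace it with the postmaster's PID to compare against
# what lsof reports for the server.
count=$(ls /proc/$$/fd | wc -l)
echo "open fds: $count"
```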

My understanding is that if Postgres exceeds this limit it logs a
warning, not an error - that is, Postgres will close and re-open files
as needed.

Questions:

- Why is it exceeding this limit? Is this a "soft" limit?

- If so, is there a way to set a hard limit? (Or a rule of
thumb - e.g., if you want Postgres never to exceed 1024, set it to 900?)

- If at the limit, when the postmaster tries to syslog or perform
some other operation that needs a new fd, then presumably that
operation could fail. When at or near the limit, can we be sure that no
spurious fd allocation occurs?

According to the postgres documentation:

max_files_per_process (integer)

Sets the maximum number of simultaneously open files allowed to each
server subprocess. The default is 1000. If the kernel is enforcing a
safe per-process limit, you don't need to worry about this setting. But
on some platforms (notably, most BSD systems), the kernel will allow
individual processes to open many more files than the system can really
support when a large number of processes all try to open that many
files. If you find yourself seeing "Too many open files" failures, try
reducing this setting. This option can only be set at server start.
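Per that documentation, the setting lives in postgresql.conf and takes effect only at server start; the value 900 below is simply the rule-of-thumb figure from the question above, not a tuned recommendation:

```
# postgresql.conf -- requires a server restart to take effect
max_files_per_process = 900   # keep below the kernel's per-process fd limit
```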

Responses

Tom Lane 2007-02-20 18:39:33 Re: Max files per process..