Re: [HACKERS] file descriptors leak?

From: Tom Lane <tgl@sss.pgh.pa.us>
To: "Gene Sokolov" <hook@aktrad.ru>
Cc: pgsql-hackers@postgreSQL.org
Subject: Re: [HACKERS] file descriptors leak?
Date: 1999-11-02 15:18:15
Message-ID: 11404.941555895@sss.pgh.pa.us
Lists: pgsql-hackers

"Gene Sokolov" <hook(at)aktrad(dot)ru> writes:
> We disconnected all clients and the number of descriptors dropped from 800
> to about 200, which is reasonable. We currently have 3 connections and ~300
> used descriptors. The "lsof -u postgres" is attached.

Hmm, I see a postmaster with 8 open files and one backend with 34.
Doesn't look out of the ordinary to me.
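For anyone repeating this check, the per-process counts can be tallied straight from the lsof output rather than eyeballed; a minimal sketch, assuming a stock lsof whose second column is the PID:

```shell
# Tally open descriptors per process for user postgres.
# lsof prints one line per descriptor; column 2 is the PID.
lsof -u postgres \
  | awk 'NR > 1 { n[$2]++ } END { for (pid in n) print pid, n[pid] }' \
  | sort -k2,2 -rn
```

The first line of output is then the process holding the most descriptors, which is the one worth looking at if the totals climb again.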

> It seems ok except for a large number of open /dev/null.

I see /dev/null at the stdin/stdout/stderr positions, which I suppose
means that you started the postmaster with -S instead of directing its
output to a logfile.
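Where the standard streams point can be confirmed directly; on Linux the /proc filesystem shows it (the PID below is a placeholder, and lsof's FD column gives the same answer on systems without /proc):

```shell
# Show what fds 0-2 (stdin/stdout/stderr) of the postmaster point at.
# 12345 is a placeholder PID; /dev/null on all three suggests the
# postmaster was started with -S rather than redirected to a logfile.
ls -l /proc/12345/fd/0 /proc/12345/fd/1 /proc/12345/fd/2
```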

It is true that on a system that'll let individual processes have as
many open file descriptors as they want, Postgres can soak up a lot.
Over time I'd expect each backend to acquire an FD for practically
every file in the database directory (including system tables and
indexes). So in a large installation you could be looking at thousands
of open files. But the situation you're describing doesn't seem like
it should reach those kinds of numbers.
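A back-of-the-envelope bound follows from that observation: multiply the backend count by the number of files under the data directory. The sketch below assumes $PGDATA points at the database directory and that relation files live under base/:

```shell
# Each backend may eventually hold an FD for nearly every relation
# file, so the worst case is roughly backends * files.
files=$(find "$PGDATA/base" -type f | wc -l)
backends=3
echo "approx worst-case fds: $((files * backends))"
```

If the observed descriptor count sits well under that product, as in the report above, nothing is leaking.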

The number of open files per backend can be constrained by fd.c, but
AFAIK there isn't any way to set a manually-specified upper limit; it's
all automatic. Perhaps there should be a configuration option to add
a limit.
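In the absence of such an option, the practical knob is the kernel's per-process limit: since fd.c works within whatever limit it detects (my reading of "it's all automatic" above), lowering the soft limit in the shell that launches the postmaster constrains every backend. The limit value and data directory path below are illustrative:

```shell
# Lower the soft descriptor limit before launching the postmaster;
# backends inherit it, and fd.c recycles descriptors within it.
ulimit -n 256
postmaster -S -D /usr/local/pgsql/data
```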

regards, tom lane
