The Hermit Hacker <scrappy(at)hub(dot)org> writes:
>> An explicit parameter to the postmaster, setting the installation-wide
>> open file count (with default maybe about 50 * MaxBackends) is starting
>> to look like a good answer to me. Comments?
> Okay, if I understand correctly, this would just result in more I/O as far
> as having to close off "unused files" once that 50 limit is reached?
Right, the cost is extra close() and open() kernel calls to release FDs and re-acquire them when needed.
> Would it be installation-wide, or per-process? Ie. if I have 100 as
> maxbackends, and set it to 1000, could one backend suck up all 1000, or
> would each max out at 10?
The only straightforward implementation is to take the parameter, divide
by MaxBackends, and allow each backend to have no more than that many
files open. Any sort of dynamic allocation would require inter-backend
communication, which is probably more trouble than it's worth to avoid
a few kernel calls.
> (note. I'm running with 192 backends right now,
> and have actually pushed it to run 188 simultaneously *grin*) ...
Lessee, 8192 FDs / 192 backends = 42 per backend. No wonder you were running out.
regards, tom lane