On 15/12/2009 12:35 PM, Mark Williamson wrote:
> So what happened is, the above update never completed and the Postgresql
> service consumed all available memory. We had to forcefully reboot the
That means your server is misconfigured. PostgreSQL should never consume
all available memory. If it does, you have work_mem and/or
maintenance_work_mem set way too high, and you have VM overcommit
enabled in the kernel. You also have too much swap.
I wouldn't be surprised if you had shared_buffers set too high as well,
and no ulimit set on PostgreSQL's memory usage. All those
things add up to "fatal".
A properly configured machine should be able to survive memory
exhaustion caused by a user process fine. Disable VM overcommit, set a
ulimit on postgresql so it can't consume all memory, use a sane amount
of swap, and set sane values for work_mem and maintenance_work_mem.
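The advice above can be sketched as a config fragment; the specific values are illustrative assumptions, not recommendations for your workload:

```
# /etc/sysctl.conf -- disable VM overcommit (strict accounting)
vm.overcommit_memory = 2
vm.overcommit_ratio = 80      # assumption: tune to your RAM/swap ratio

# postgresql.conf -- sane per-operation memory (example values)
work_mem = 4MB                # per sort/hash, per backend, so keep it small
maintenance_work_mem = 64MB   # used by VACUUM, CREATE INDEX, etc.
shared_buffers = 512MB        # a common rule of thumb is ~25% of RAM
                              # on a dedicated server
```

Note that work_mem is a per-operation limit, not a global one: a single complex query can use several multiples of it, across many backends at once, which is why "way too high" values blow up under concurrency.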
> Why does Postgresql NOT have a maximum memory allowed setting? We want
> to allocate resources efficiently and cannot allow one customer to
> impact others.
It does. "man ulimit".
The operating system can enforce it much better than PostgreSQL can. If
a Pg bug were to cause it to run away or try to allocate insane amounts
of RAM, the ulimit would catch it.
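As a minimal sketch of that approach: set the limit in the shell that launches the postmaster, so every backend inherits it. The paths and the 2 GB figure below are assumptions for illustration; `ulimit -v` takes kilobytes.

```shell
# Hypothetical startup wrapper: cap virtual memory before starting PostgreSQL.
# 2097152 kB = 2 GB. Paths are examples; adjust to your installation.
(
    ulimit -v 2097152    # cap address space for this subshell and children
    ulimit -d 2097152    # cap the data segment as well
    exec /usr/local/pgsql/bin/pg_ctl -D /var/lib/pgsql/data start
)
```

Keep in mind the limit is per-process: each backend inherits it individually, so it bounds what any one backend can allocate, not the cluster's total footprint.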
I *do* think it'd be nice to have ulimit values settable via
postgresql.conf so that you didn't have to faff about editing init
scripts. ( TODO item? )