In response to Jessica Richard <rjessil(at)yahoo(dot)com>:
> On a Linux system, if the total memory is 4G and the shmmax is set to 4G, I know it is bad, but how bad can it be? Just trying to understand the impact the "shmmax" parameter can have on Postgres and the entire system after Postgres comes up on this number.
It's not bad by definition: shmmax is only a cap on how much shared
memory may be allocated. Just because you set it to 4G doesn't mean any
application will use all of that. With PostgreSQL, the maximum amount of
shared memory it will allocate is governed by the shared_buffers setting
in postgresql.conf.
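As a rough sketch of the relationship (the 4G cap comes from the
question; the 1GB shared_buffers value is just an assumed example),
shmmax only matters as a ceiling that PostgreSQL's startup allocation
must fit under:

```shell
# Sketch only: both values are assumptions for illustration.
SHMMAX=$((4 * 1024 * 1024 * 1024))       # kernel.shmmax: 4G, in bytes
SHARED_BUFFERS=$((1024 * 1024 * 1024))   # shared_buffers: 1GB, in bytes

# PostgreSQL's startup request must fit under shmmax; it asks for a
# bit more than shared_buffers alone, so leave some headroom.
if [ "$SHARED_BUFFERS" -lt "$SHMMAX" ]; then
    echo "shared_buffers fits under shmmax"
else
    echo "startup would fail: raise shmmax or lower shared_buffers"
fi
```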
It _is_ a good idea to set shmmax to a reasonable size to prevent
a misbehaving application from eating up all the memory on a system,
but I've yet to see PostgreSQL misbehave in this manner. Perhaps I'm
just lucky.
> What is the reasonable setting for shmmax on a 4G total machine?
If you mean what's a reasonable setting for shared_buffers, conventional
wisdom says to start with 25% of the available RAM and increase it or
decrease it if you discover your workload benefits from more or less.
By "available RAM" I mean the RAM left free once all other applications
are running: 4G if this machine runs only PostgreSQL, but less if it
also runs other things, such as a web server.
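As a minimal sketch of that 25% starting point (assuming a 4G machine
dedicated to PostgreSQL; the numbers are illustrative, not a tuned
recommendation):

```shell
# Sketch only: assumes all 4096MB of RAM is available to PostgreSQL.
AVAILABLE_MB=4096
START_MB=$((AVAILABLE_MB / 4))   # conventional 25% starting point

# This is the line you'd put in postgresql.conf, then adjust up or
# down based on how your workload responds.
echo "shared_buffers = ${START_MB}MB"
```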
Collaborative Fusion Inc.