From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Alexey Klyukin <alexk(at)hintbits(dot)com>
Cc: Alexey Vasiliev <leopard_ne(at)inbox(dot)ru>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Why shared_buffers max is 8GB?
On Wed, Apr 2, 2014 at 11:38:57AM +0200, Alexey Klyukin wrote:
> In most cases 8GB should be enough even for the servers with hundreds of GB of
> data, since the FS uses the rest of the memory as a cache (make sure you give a
> hint to the planner on how much memory is left for this with the
> effective_cache_size), but the exact answer is a matter of performance testing.
> Now, the last question would be what the initial justification for the 8GB
> barrier was. I've heard that there was lock contention when dealing with a huge
> pool of buffers, but I think that was fixed even in the pre-9.0 era.
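The two settings mentioned above might look like the following in postgresql.conf. This is only a sketch: the machine size and values are illustrative assumptions for this example, not recommendations from the thread.

```
# postgresql.conf sketch -- assuming a dedicated server with 32 GB of RAM
shared_buffers = 8GB             # the upper end discussed in this thread
effective_cache_size = 24GB      # planner hint: shared_buffers plus the
                                 # memory the OS can use for filesystem cache
```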
The issue in earlier releases was the overhead of managing more than 1
million 8kB buffers. I have not seen any recent tests confirming that this
overhead is still significant.
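The "1 million buffers" figure follows directly from the default PostgreSQL page size of 8 kB (the stock BLCKSZ build default), as this small sketch shows:

```python
# Why 8GB of shared_buffers means roughly one million buffers to manage.
shared_buffers_bytes = 8 * 1024**3   # 8 GB
page_size_bytes = 8 * 1024           # 8 kB, the default BLCKSZ
n_buffers = shared_buffers_bytes // page_size_bytes
print(n_buffers)  # 1048576, i.e. ~1 million buffer headers to track
```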
A larger issue is that going over 8GB doesn't help unless you are
accessing more than 8GB of data in a short period of time. Add to that
the problem of potentially dirtying all the buffers and flushing them to a
now-smaller kernel buffer cache, and you can see why the 8GB limit is
recommended. I do think this merits more testing against the current
Postgres source code.
+ Everyone has their own god. +