> -----Original Message-----
> From: Tapio Pitkäranta [mailto:Tapio(dot)Pitkaranta(at)relex(dot)fi]
> Sent: Monday, March 21, 2011 7:53 AM
> To: Nicholson, Brad (Toronto, ON, CA)
> Cc: pgsql-admin(at)postgresql(dot)org; Devrim GÜNDÜZ
> Subject: RE: [ADMIN] Memory limits and PostgreSQL 8.4 - Over 63 GB of
> memory for DB?
> Hello Brad,
> Thank you for this information.
> We have database tables that are around 50-100 GB each. While
> processing such tables, it seems to be crucial that the table fits into
> memory (especially if the database table is not on an SSD drive).
> Until now we have thought the "shared_buffers" parameter should be larger
> than the biggest table that requires this kind of batch processing.
> Do you think it does not matter what size we set the "shared_buffers"
> parameter to, as long as the server has enough memory? (Even if a single
> table is 50-100 GB?)
> Why are large shared buffers not recommended?
There is the potential to dirty a lot of buffers and cause huge pauses at checkpoint time when those buffers get flushed to disk. Remember, though, that unused memory will go to your filesystem cache, so there is a very good chance that many of the tables you are accessing will still be in memory anyway.
By all means, test it out. It may be fine with your workload, particularly if it is read-only. But be prepared for it not to work out as expected.
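If it helps as a starting point for testing, here is a minimal sketch of the relevant postgresql.conf settings for an 8.4 server with this much RAM. The values are illustrative assumptions for experimentation, not recommendations:

```
# postgresql.conf -- illustrative values for a 64+ GB server (assumed, tune per workload)
shared_buffers = 16GB                 # often sized well below total RAM; the OS cache covers the rest
checkpoint_completion_target = 0.9    # spread checkpoint writes out to soften I/O spikes
checkpoint_segments = 64              # 8.4-era setting; allows more WAL between checkpoints
effective_cache_size = 48GB           # planner hint: rough size of the filesystem cache
```

The idea is that a moderate shared_buffers plus a large effective_cache_size lets the OS cache hold the hot tables, while the checkpoint settings limit the flush pauses described above.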