Re: DB cache size strategies

From: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>
To: "Ed L(dot)" <pgsql(at)bluepolka(dot)net>
Cc: Martijn van Oosterhout <kleptog(at)svana(dot)org>, <pgsql-general(at)postgresql(dot)org>
Subject: Re: DB cache size strategies
Date: 2004-02-10 22:48:50
Message-ID: Pine.LNX.4.33.0402101547470.29897-100000@css120.ihs.com
Lists: pgsql-general

On Tue, 10 Feb 2004, Ed L. wrote:

> On Tuesday February 10 2004 1:42, Martijn van Oosterhout wrote:
> > I generally give Postgresql about 64-128MB of shared memory, which covers
> > all of the system tables and the most commonly used small tables. The
> > rest of the memory (this is a 1GB machine) I leave for the kernel to
> > manage for the very large tables.
>
> Interesting. Why leave very large tables to the kernel instead of the db
> cache? Assuming a dedicated DB server and a DB smaller than available RAM,
> why not give the DB enough RAM to get the entire DB into the DB cache?
> (Assuming you have the RAM).

Because the kernel is more efficient (right now) at caching large data
sets.

With the ARC cache manager that will likely wend its way into 7.5, it's
quite likely that PostgreSQL will be able to handle a larger cache
efficiently, but it will still be a shared memory cache, and those are
still usually much slower than the kernel's cache.
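
For a concrete sketch of this strategy, a 7.4-era postgresql.conf on a 1GB
box might look something like the following (the numbers are illustrative
assumptions, not settings taken from this thread):

    # In 7.4, shared_buffers and effective_cache_size are counted in 8kB pages.
    shared_buffers = 8192           # 8192 * 8kB = 64MB in postgres's own shared cache
    effective_cache_size = 98304    # ~768MB; an estimate of how much the kernel
                                    # is caching on postgres's behalf

shared_buffers stays small so the bulk of RAM is left to the kernel cache;
effective_cache_size allocates nothing, it just describes that kernel cache
to the query planner.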
