From: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>
To: Sally Sally <dedeb17(at)hotmail(dot)com>
Cc: <pgsql-general(at)postgresql(dot)org>
Subject: Re: basic question (shared buffers vs. effective cache
Date: 2004-05-10 16:27:33
Message-ID: Pine.LNX.4.33.0405101016430.16482-100000@css120.ihs.com

On Mon, 10 May 2004, Sally Sally wrote:

> I have a very basic question on the two parameters shared buffers and
> effective cache size. I have read articles on what each is about etc. But I
> still think I don't quite grasp what these settings mean (especially in
> relation to each other). Since these two settings seem crucial for
> performance can somebody explain to me the relationship/difference between
> these two settings and how they deal with shared memory.

shared_buffers is the amount of memory postgresql can use as temporary
working space to put together result sets. It is not intended as a cache:
once the last backend holding a buffer open shuts down, the information
in that buffer is lost. If you're working through several large data sets
in a row, the buffer currently operates FIFO, evicting older data to make
room for the incoming data.
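
To make the units concrete, here is a rough sketch (the numbers are only
placeholders, and on the 7.x series shared_buffers counts 8 kB buffers
rather than bytes):

  # postgresql.conf
  shared_buffers = 10000    # 10000 x 8 kB buffers = roughly 80 MB of
                            # shared memory; going much beyond the default
                            # may also mean raising the kernel's SHMMAX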

Contrast this with the Linux or BSD kernels, which cache everything they
can in the "spare" memory of the computer. This cache is maintained until
some other process requests enough memory to make the kernel give up some
of the otherwise unused memory, or until something new pushes out
something old. A lot of tuning has gone into this cache to make it fast
when handling large amounts of data, and of course it caches more than
just postgresql's data: it caches all the data for everything hitting the
hard drives. If the machine is mostly a postgresql box, then most of this
memory is likely being used for postgresql, but on a box running apache /
ldap / postgresql / etc... the percentage used for postgresql will be
lower, maybe 75% or so.

The important point here is that caching is the job of the kernel,
buffering is the job of the database. I.e. holding onto data that got
accessed 30 minutes ago is the kernel's job, holding onto data that we're
processing RIGHT NOW is postgresql's job.

Because of this split in the jobs, as it were, it is usually best to have
postgresql's buffers be a fraction of the size of the kernel cache on the
machine. Otherwise it is quite likely that any call for data not already
in postgresql's buffers will result in a disk read rather than a kernel
cache hit, since ramping postgresql's buffers up to be as large as or
larger than the kernel cache all but guarantees that the data you need
has already been flushed out of the kernel by the time it gets flushed
out of postgresql. Since postgresql's buffer access methods are
inherently slower than the kernel's, and they don't seem to scale real
well, allocating too much shared_buffers is a "bad thing".
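
A purely hypothetical sizing along those lines: on a box with 2 GB of RAM
that does little besides run postgresql, you might end up with something
like

  shared_buffers = 20000    # 20000 x 8 kB = ~160 MB held by postgresql
  # ...which leaves well over 1.5 GB free for the kernel's own disk cache

rather than handing half the machine's memory to shared_buffers.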

Now, effective_cache_size sets nothing other than itself, i.e. it
allocates no memory at all. It is pretty much a big coarse-adjustment
knob that tells the planner roughly how much memory the kernel is using
to cache its data, and therefore lets the planner make a rough
guesstimate of how likely an access is to hit the kernel's cache versus
having to hit the hard drives. Since random accesses in memory are only
slightly more expensive than sequential scans in memory, a higher
effective_cache_size favors plans that use random access (i.e. index
scans).
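
As a sketch of how people usually arrive at a value (again, the figures
are invented): if free or top shows the kernel sitting on roughly 800 MB
of buffer/cache, you would tell the planner about it like so, with the
value counted in 8 kB pages on the 7.x series:

  effective_cache_size = 100000   # 100000 x 8 kB = ~800 MB of likely
                                  # kernel cache; allocates nothing, only
                                  # nudges the planner's cost estimates

You can also play with it per session, e.g. SET effective_cache_size =
200000; followed by EXPLAIN on a query, to watch plans flip between
sequential and index scans.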
