Re: per table random-page-cost?

From: Greg Stark <gsstark(at)mit(dot)edu>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: marcin mank <marcin(dot)mank(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: per table random-page-cost?
Date: 2009-10-19 23:39:16
Message-ID: 407d949e0910191639k6bc9d71bu2c5638a260ce13a3@mail.gmail.com
Lists: pgsql-hackers

On Mon, Oct 19, 2009 at 2:54 PM, Kevin Grittner
<Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
> How about calculating an effective percentage based on other
> information.  effective_cache_size, along with relation and database
> size, come to mind.

I think previous proposals along these lines have fallen down when you
actually try to work out the formula. The problem is that you could have
a table which is much smaller than effective_cache_size but is never
in cache because it's one of many such tables.

I think it would still be good to have some naive kind of heuristic
here as long as it's fairly predictable for DBAs.
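
For illustration, here is roughly the kind of naive heuristic I mean. This
is only a sketch; the function names and the cost blending are invented for
the example, not anything in the planner:

    # Naive cached-fraction heuristic (hypothetical, not planner code).
    def cached_fraction(relation_size, effective_cache_size):
        # Assume the table is cached to the extent it fits in
        # effective_cache_size.  This predicts 1.0 for any table smaller
        # than the cache -- exactly the case that breaks down when many
        # such tables compete for the same cache.
        return min(1.0, effective_cache_size / relation_size)

    def effective_random_page_cost(random_page_cost, cached, seq_page_cost=1.0):
        # Blend: treat cached pages as costing about one sequential page
        # and uncached pages as costing the full random_page_cost.
        return cached * seq_page_cost + (1.0 - cached) * random_page_cost

The virtue is that a DBA can predict what it will do from the sizes alone;
the vice is the failure mode above.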

But the long-term strategy here, I think, is to actually have some way
to measure the real cache hit rate on a per-table basis. Whether it's
by timing I/O operations, programmatic access to DTrace, or some other
kind of OS interface, if we could know the real cache hit rate it
would be very helpful.
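
To make the timing idea concrete, a rough sketch: classify each individual
page read as a cache hit or miss by its latency, on the theory that an
OS-cache hit completes in microseconds while a real disk read takes
milliseconds. The threshold and the sampling loop are my own guesses,
purely illustrative:

    import os, time

    CACHE_HIT_THRESHOLD = 100e-6      # 100 microseconds, a guess

    def sample_hit_rate(fd, offsets, block_size=8192):
        # Read a sample of blocks and count how many come back fast
        # enough to have plausibly been served from the OS cache.
        hits = 0
        for off in offsets:
            start = time.monotonic()
            os.pread(fd, block_size, off)
            if time.monotonic() - start < CACHE_HIT_THRESHOLD:
                hits += 1
        return hits / len(offsets)

Here fd would be a descriptor for one of the table's data files and
offsets a random sample of block-aligned positions within it.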

Perhaps we could extrapolate from the shared buffer cache percentage.
If a moderately high percentage of a table is in shared buffers, it
seems reasonable to assume the filesystem cache has a similar
distribution.
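
A sketch of that extrapolation, assuming we can count a table's blocks in
shared buffers (for example via the contrib/pg_buffercache view); taking
the OS cache to have the same distribution is exactly the supposition
above:

    def estimated_cache_hit_fraction(blocks_in_shared_buffers, total_blocks):
        # Use shared-buffer residency directly as the overall estimate,
        # on the assumption that the filesystem cache mirrors it.
        if total_blocks == 0:
            return 0.0
        return blocks_in_shared_buffers / total_blocks

That fraction could then feed the same kind of random_page_cost blending
as in the earlier sketch.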

--
greg
