Re: Performance

From: tv(at)fuzzy(dot)cz
To: "Claudio Freire" <klaussfreire(at)gmail(dot)com>
Cc: "Tomas Vondra" <tv(at)fuzzy(dot)cz>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance
Date: 2011-04-14 08:23:26
Message-ID: 5c6c67e9f0c4abed2b7ac84e83fe1f32.squirrel@sq.gransy.com
Lists: pgsql-performance

> On Thu, Apr 14, 2011 at 1:26 AM, Tomas Vondra <tv(at)fuzzy(dot)cz> wrote:
>> Workload A: Touches just a very small portion of the database, so the
>> 'active' part actually fits into memory. In this case the cache hit
>> ratio can easily be close to 99%.
>>
>> Workload B: Touches a large portion of the database, so it hits the drive
>> very often. In this case the cache hit ratio is usually around RAM/(size
>> of the database).
>
> You've answered it yourself without even realizing it.
>
> This particular factor is not about an abstract and opaque "Workload"
> the server can't know about. It's about cache hit rate, and the server
> can indeed measure that.

OK, so it's not a matter of tuning random_page_cost/seq_page_cost? Tuning
based on the cache hit ratio is something completely different (IMHO).
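Just to be clear about which knobs we're talking about - a possible setup in
postgresql.conf might look like this (values are purely illustrative, not
recommendations):

    # planner cost settings (illustrative values only)
    seq_page_cost = 1.0          # cost of a sequentially fetched page
    random_page_cost = 2.0       # cost of a randomly fetched page; people lower
                                 # this when most of the working set is cached
    effective_cache_size = 8GB   # planner's estimate of the cache available
                                 # (shared_buffers + OS page cache)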

Anyway, I'm not an expert in this field, but AFAIK something like this
already happens - btw that's the purpose of effective_cache_size. But I'm
afraid there might be serious failure cases where the current model works
better, e.g. what if you ask for data that's completely uncached (has been
inactive for a long time)? But if you have an idea of how to improve this,
great - start a discussion on the hackers list and let's see.
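FWIW, if someone wants to look at the cache hit ratio Claudio mentions, a
rough number (shared buffers only - it can't see the OS page cache, so it's
a lower bound) can be read from the stats collector like this:

    -- rough buffer cache hit ratio for the current database
    SELECT blks_hit::float / nullif(blks_hit + blks_read, 0) AS hit_ratio
      FROM pg_stat_database
     WHERE datname = current_database();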

regards
Tomas
