Re: reducing random_page_cost from 4 to 2 to force index scan

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Jesper Krogh <jesper(at)krogh(dot)cc>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: reducing random_page_cost from 4 to 2 to force index scan
Date: 2011-05-16 14:34:12
Message-ID: BANLkTikTWeZxWQFxM1SJg=0=OtCkCX_-dA@mail.gmail.com
Lists: pgsql-performance

On Mon, May 16, 2011 at 12:49 AM, Jesper Krogh <jesper(at)krogh(dot)cc> wrote:
>> To me it seems like a robust and fairly trivial way to get better
>> numbers. The fear is that the OS cache is too much in flux to get any
>> stable numbers out of it.
>
> Ok, it may not work as well with indexes, since having 1% in cache may very
> well mean that 90% of all requested blocks are there; for tables it should
> be more trivial.

Tables can have hot spots, too. Consider a table that holds calendar
reservations. Reservations can be inserted, updated, and deleted. But
typically, the most recent data will be what is most actively
modified, and the older data will be relatively more (though not
completely) static, and less frequently accessed. Such access patterns
are common in real-world applications.
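
For what it's worth, you can see that kind of skew directly with the
pg_buffercache contrib module, which exposes which blocks of a relation
are currently sitting in shared_buffers. A rough sketch, assuming
pg_buffercache is installed and using a hypothetical "reservations"
table:

-- Count cached 8 kB buffers of the (hypothetical) reservations table,
-- grouped into ranges of 1000 blocks. A hot tail shows up as the
-- highest block ranges holding most of the cached buffers.
SELECT (b.relblocknumber / 1000) * 1000 AS block_range_start,
       count(*)                         AS cached_buffers
FROM   pg_buffercache b
WHERE  b.reldatabase = (SELECT oid FROM pg_database
                        WHERE datname = current_database())
  AND  b.relfilenode = pg_relation_filenode('reservations'::regclass)
GROUP  BY 1
ORDER  BY 1;

On the calendar example above, I'd expect the newest block ranges to
dominate that output while most of the older blocks stay on disk, which
is exactly why a flat "X% of the table is cached" number can be
misleading.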

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
