Re: reducing random_page_cost from 4 to 2 to force index scan

From: Jesper Krogh <jesper(at)krogh(dot)cc>
To: Greg Smith <greg(at)2ndquadrant(dot)com>
Cc: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>, Cédric Villemain <cedric(dot)villemain(dot)debian(at)gmail(dot)com>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: reducing random_page_cost from 4 to 2 to force index scan
Date: 2011-05-16 04:41:58
Message-ID: 4DD0AB16.5060808@krogh.cc
Lists: pgsql-performance

On 2011-05-16 03:18, Greg Smith wrote:
> You can't do it in real-time. You don't necessarily want that even
> if it were possible; there are too many possibilities for nasty
> feedback loops where you always favor using some marginal index that
> happens to be in memory, and therefore never page in things that
> would be faster once they're read. The only reasonable implementation
> that avoids completely unstable plans is to scan this data
> periodically and save some statistics on it--the way ANALYZE
> does--and then have that turn into a planner input.

Would that be feasible? Have a process collect the data every now and
then, apply some conservative averaging function, and feed the result
into pg_stats for each index/relation?

To me that seems like a robust and fairly trivial way to get better
numbers. The fear is that the OS cache is too much in flux to get any
stable numbers out of it.
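
Just to make it concrete, here is a minimal standalone sketch of what
the collection side could look like on Linux, using mmap() plus
mincore() much the way Cédric's pgfincore does. Everything in it (the
helper name, the per-file granularity, the absence of any smoothing)
is purely illustrative, not an existing PostgreSQL interface:

/*
 * Hypothetical sketch: estimate what fraction of a file's pages are
 * resident in the OS page cache.  mincore() only inspects residency;
 * it does not fault anything in, so the measurement itself is cheap.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static double
cached_fraction(const char *path)
{
    struct stat st;
    unsigned char *vec;
    void       *map;
    size_t      pages, resident = 0;
    long        pagesize = sysconf(_SC_PAGESIZE);
    int         fd = open(path, O_RDONLY);

    if (fd < 0)
        return 0.0;
    if (fstat(fd, &st) < 0 || st.st_size == 0)
    {
        close(fd);
        return 0.0;
    }
    pages = (st.st_size + pagesize - 1) / pagesize;

    /* PROT_NONE: map the file without touching (or faulting in) any
     * of its pages, then ask the kernel which ones are resident. */
    map = mmap(NULL, st.st_size, PROT_NONE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED)
    {
        close(fd);
        return 0.0;
    }

    vec = malloc(pages);
    if (vec != NULL && mincore(map, st.st_size, vec) == 0)
    {
        for (size_t i = 0; i < pages; i++)
            if (vec[i] & 1)     /* low bit set => page is resident */
                resident++;
    }

    free(vec);
    munmap(map, st.st_size);
    close(fd);
    return (double) resident / (double) pages;
}

int
main(int argc, char **argv)
{
    if (argc != 2)
    {
        fprintf(stderr, "usage: %s <relation-file>\n", argv[0]);
        return 1;
    }
    printf("%s: %.1f%% cached\n", argv[1],
           100.0 * cached_fraction(argv[1]));
    return 0;
}

A background process could sample something like this per relation
file, average the results over several runs, and only then expose them
to the planner, which ought to damp most of the flux.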

--
Jesper
