Re: merge>hash>loop

From: Tom Lane <tgl@sss.pgh.pa.us>
To: Markus Schaber <schabi@logix-tt.com>
Cc: pgsql-performance@postgresql.org
Subject: Re: merge>hash>loop
Date: 2006-04-18 23:38:33
Message-ID: 20064.1145403513@sss.pgh.pa.us
Lists: pgsql-performance

Markus Schaber <schabi@logix-tt.com> writes:
> An easy first approach would be to add a user-tunable cache probability
> value, between 0 and 1, to each index (and possibly each table). Then
> simply multiply random_page_cost by (1 - that value) for each scan.
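
Read concretely, that suggestion amounts to something like the following
sketch (plain C for illustration, not PostgreSQL source; the per-index
cache probability setting is hypothetical):

    #include <stdio.h>

    /*
     * Sketch of the scaling proposed above: a hypothetical per-index
     * "cache probability" in [0, 1] simply discounts random_page_cost
     * before it enters the cost equations.
     */
    static double
    discounted_random_page_cost(double random_page_cost,
                                double cache_probability)
    {
        return random_page_cost * (1.0 - cache_probability);
    }

    int
    main(void)
    {
        /* default random_page_cost of 4.0, index assumed 75% likely cached */
        printf("%g\n", discounted_random_page_cost(4.0, 0.75));  /* prints 1 */
        return 0;
    }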

That's not the way you'd need to use it. But on reflection I do think
there's some merit in a "cache probability" parameter, ranging from zero
(giving current planner behavior) to one (causing the planner to assume
everything is already in cache from prior queries). We'd have to look
at exactly how such an assumption should affect the cost equations ...
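
For instance, purely as an illustration (the blend and the in-cache cost
figure below are assumptions, not anything decided in this thread), one
such adjustment might interpolate each page-fetch cost between its
uncached and cached values:

    #include <stdio.h>

    /*
     * Illustration only: instead of discounting random_page_cost directly,
     * interpolate each page-fetch cost between its uncached value and a
     * (much cheaper) assumed in-cache value, weighted by a hypothetical
     * cache-probability parameter.  At 0.0 this reproduces the current
     * estimate; at 1.0 every fetch is costed as a buffer hit.
     */
    static double
    blended_page_cost(double uncached_cost, double cached_cost,
                      double cache_probability)
    {
        return cache_probability * cached_cost
               + (1.0 - cache_probability) * uncached_cost;
    }

    int
    main(void)
    {
        /* random_page_cost 4.0, assumed in-cache fetch cost 0.01 */
        printf("%g\n", blended_page_cost(4.0, 0.01, 0.0));  /* 4    : current behavior */
        printf("%g\n", blended_page_cost(4.0, 0.01, 1.0));  /* 0.01 : fully cached     */
        return 0;
    }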

regards, tom lane
