Re: per table random-page-cost?

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "marcin mank" <marcin(dot)mank(at)gmail(dot)com>, "Robert Haas" <robertmhaas(at)gmail(dot)com>
Cc: "PostgreSQL-development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: per table random-page-cost?
Date: 2009-10-19 21:54:47
Message-ID: 4ADC99D7020000250002BB4E@gw.wicourts.gov
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> I've been wondering if it might make sense to have a
> "random_page_cost" and "seq_page_cost" setting for each TABLESPACE,
> to compensate for the fact that different media might be faster or
> slower, and a percent-cached setting for each table over top of
> that.

[after recovering from the initial cringing reaction...]

How about calculating an effective percentage based on other
information? effective_cache_size, along with relation and database
size, comes to mind. How about the particular index being considered
for the plan? Of course, you might have to be careful about working
the TOAST table size into the estimate for a particular query, based
on the columns retrieved.
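
Just to sketch the shape of that idea (the names and the simple
linear blend below are mine, for illustration only -- this isn't
planner code), with the uncached cost possibly coming from a
per-tablespace random_page_cost along the lines Robert suggests:

#include <stdio.h>

/*
 * Estimate the fraction of a relation likely to be cached by
 * comparing effective_cache_size with the relation's size (both in
 * pages), then blend cached and uncached page costs by that
 * fraction.  Illustration only.
 */
static double
estimated_cached_fraction(double effective_cache_size_pages,
                          double relation_pages)
{
    double frac = effective_cache_size_pages / relation_pages;

    return (frac > 1.0) ? 1.0 : frac;
}

static double
effective_page_cost(double cached_fraction,
                    double cached_page_cost,  /* page already in cache */
                    double random_page_cost)  /* uncached random fetch */
{
    return cached_fraction * cached_page_cost
           + (1.0 - cached_fraction) * random_page_cost;
}

int
main(void)
{
    /* 1GB effective_cache_size, 4GB relation, default-ish costs */
    double frac = estimated_cached_fraction(131072.0, 524288.0);

    printf("cached fraction: %.2f, effective page cost: %.2f\n",
           frac, effective_page_cost(frac, 0.01, 4.0));
    return 0;
}

A real implementation would obviously need something smarter than a
straight linear blend, but it shows where the inputs could come from.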

I have no doubt that there would be some major performance regressions
in the first cut of anything like this, for at least *some* queries.
The toughest part might be getting adequate testing to tune it for a
wide enough variety of real-life situations.

-Kevin
