Rod Taylor <pg(at)rbt(dot)ca> writes:
>> One objection to this is that after moving "off the gold standard" of
>> 1.0 = one page fetch, there is no longer any clear meaning to the
>> cost estimate units; you're faced with the fact that they're just an
>> arbitrary scale. I'm not sure that's such a bad thing, though. For
>> instance, some people might want to try to tune their settings so that
>> the estimates are actually comparable to milliseconds of real time.
> Any chance that the correspondence to time could be made a part of the
> design on purpose and generally advise people to follow that rule?
We might eventually get to that point, but I'm hesitant to try to do it
immediately. For one thing, I really *don't* want to get bug reports
from newbies complaining that the cost estimates are always off by a
factor of X. (Not that we haven't gotten some of those anyway :-()
In the short term I see us sticking to the convention that seq_page_cost
is 1.0 in a "typical" database, while anyone who's really eager to make
the cost-equals-milliseconds correspondence happen is free to experiment.
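For concreteness, the convention being described might look like the
following postgresql.conf fragment. The parameter names are the planner
cost GUCs under discussion, and the values shown are the stock defaults;
the "milliseconds" reading in the comments is purely illustrative, not a
measured or recommended calibration:

```
# seq_page_cost = 1.0 is the reference unit; if a sequential page fetch
# happened to take ~1 ms on a given machine, the other costs could be
# read as rough millisecond estimates (illustrative assumption only).
seq_page_cost = 1.0              # baseline: one sequential page fetch
random_page_cost = 4.0           # a random fetch costed at 4x sequential
cpu_tuple_cost = 0.01            # per-row processing cost
cpu_index_tuple_cost = 0.005     # per-index-entry processing cost
cpu_operator_cost = 0.0025       # per-operator/function evaluation cost
```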
> If we could tell people to run *benchmark* and use those numbers
> directly as a first approximation tuning, it could help quite a bit
> for people new to PostgreSQL experiencing poor performance.
We don't have such a benchmark ... if we did, we could have told
people how to use it to set the variables already. I'm very very
suspicious of any suggestion that it's easy to derive appropriate
numbers for these settings from one magic benchmark.
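The kind of first-approximation measurement being proposed might look
like the sketch below: a hypothetical standalone script (not a
PostgreSQL tool) that times sequential versus random page-sized reads of
a scratch file to approximate a random_page_cost-style ratio. The result
is heavily distorted by OS caching and readahead, which illustrates why
deriving planner settings from one simple benchmark is so suspect:

```python
import os
import random
import tempfile
import time

PAGE = 8192            # PostgreSQL's default block size
NPAGES = 2048          # 16 MB scratch file

# Build a scratch file of NPAGES pages.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(PAGE) * NPAGES)
    path = f.name

def time_reads(offsets):
    """Time reading one page at each of the given byte offsets."""
    start = time.perf_counter()
    with open(path, "rb") as fh:
        for off in offsets:
            fh.seek(off)
            fh.read(PAGE)
    return time.perf_counter() - start

seq_time = time_reads([i * PAGE for i in range(NPAGES)])

shuffled = [i * PAGE for i in range(NPAGES)]
random.shuffle(shuffled)
rand_time = time_reads(shuffled)

# A crude stand-in for the random_page_cost / seq_page_cost ratio;
# on a cached file this will come out near 1.0, far from disk reality.
print(f"random/sequential ratio: {rand_time / seq_time:.2f}")
os.unlink(path)
```

Even on real hardware, a single run like this says nothing about
concurrency, cache hit rates, or CPU costs, so at best it seeds a
starting value to refine by observing actual query behavior.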
regards, tom lane