| From: | Tomas Vondra <tv(at)fuzzy(dot)cz> |
|---|---|
| To: | pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: Performance |
| Date: | 2011-04-26 18:54:34 |
| Message-ID: | 4DB714EA.6090106@fuzzy.cz |
| Lists: | pgsql-performance |
On 26.4.2011 07:35, Robert Haas wrote:
> On Apr 13, 2011, at 6:19 PM, Tomas Vondra <tv(at)fuzzy(dot)cz> wrote:
>> Yes, I've had some lectures on non-linear programming, so I'm aware that
>> this won't work if the cost function has multiple extremes (valleys /
>> hills etc.), but I somehow suppose that's not the case for cost estimates.
>
> I think that supposition might turn out to be incorrect, though. Probably
> what will happen on simple queries is that a small change will make no
> difference, and a large enough change will cause a plan change. On
> complex queries it will approach continuous variation but why
> shouldn't there be local minima?
Aaaah, damn! I was not talking about cost estimates - those obviously do
not have this feature, as you've pointed out (thanks!).
I was talking about the 'response time' I mentioned when describing
autotuning using a real workload. The idea is to change the costs a bit
and then measure the average response time - if the overall performance
improved, take another step in the same direction, and so on.
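To make that loop concrete, here is a minimal sketch of the idea in Python. It is purely hypothetical, not an existing tool: the psycopg2 connection, the run_workload() helper, and the example query are assumptions, and it presumes a read-only SELECT workload run against a test session.

```python
# A minimal sketch of the tuning loop described above -- hypothetical code,
# not anything that ships with PostgreSQL. Assumes a read-only SELECT
# workload and the psycopg2 driver; the helper names are made up.

import time
import psycopg2

def run_workload(conn, queries):
    """Replay the workload once and return the average response time."""
    start = time.monotonic()
    with conn.cursor() as cur:
        for q in queries:
            cur.execute(q)
            cur.fetchall()           # assumes every query returns rows
    return (time.monotonic() - start) / len(queries)

def autotune_random_page_cost(conn, queries, start=4.0, step=0.5, rounds=10):
    """Nudge random_page_cost and keep stepping while response time improves."""
    cost = start
    best = run_workload(conn, queries)
    direction = -1                   # try lowering the cost first
    for _ in range(rounds):
        candidate = max(0.1, cost + direction * step)
        with conn.cursor() as cur:
            # set_config() changes the setting for this session only
            cur.execute("SELECT set_config('random_page_cost', %s, false)",
                        (str(candidate),))
        elapsed = run_workload(conn, queries)
        if elapsed < best:           # improved -> keep going the same way
            cost, best = candidate, elapsed
        else:                        # got worse -> reverse and shrink the step
            direction *= -1
            step /= 2.0
    return cost, best

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=test")   # hypothetical connection string
    queries = ["SELECT count(*) FROM pgbench_accounts WHERE abalance > 100"]
    print(autotune_random_page_cost(conn, queries))
```

A simple hill-climbing loop like this is exactly where local minima would bite, which is what the next question is about.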
I wonder if there are cases where an increase of random_page_cost would
hurt performance and a further increase would improve it ... And I'm not
talking about individual queries here, I'm talking about overall performance.
regards
Tomas