> There is a table with 3000 rows, and a custom index using a function.
> This query gives negative cost:
> select distinct name
> from negativeCostBugReport t_
> where noaccent(t_.name) like 'B%' limit 10
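For reference, a minimal setup along these lines might look as follows. This is a sketch, not the reporter's actual schema: the function body, the COST figure, and the index definition are all assumptions; only the table name, function name, and query come from the report.

```sql
-- Hypothetical reproducer sketch.  The function body and COST 1000 are
-- illustrative assumptions; the reporter's definitions were not shown.
CREATE FUNCTION noaccent(text) RETURNS text AS $$
  SELECT translate($1, 'áéíóú', 'aeiou')  -- stand-in body
$$ LANGUAGE sql IMMUTABLE COST 1000;

-- text_pattern_ops so the LIKE 'B%' condition can use the index
-- regardless of locale.
CREATE INDEX negativecost_noaccent_idx
  ON negativeCostBugReport (noaccent(name) text_pattern_ops);
```

With a setup like this, EXPLAIN on the quoted query is what reportedly shows the negative cost.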
Hm, interesting. The culprit seems to be the part of cost_index that
estimates the per-tuple cost of evaluating filter conditions. It's
trying to do that by taking the baserestrictcost (here, that'll be
exactly the cost of the filter condition noaccent(name) ~~ 'B%')
and subtracting what cost_qual_eval says is the cost of the index
conditions. Normally that works all right, but here you have a very
expensive function that appears once in the filter and twice in the
indexquals, leading to a negative value for per-tuple CPU cost.
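To make the arithmetic concrete, here is a worked sketch with assumed numbers (the COST figure is illustrative, not taken from the report; function COST is expressed in units of cpu_operator_cost, default 0.0025):

```sql
-- Suppose noaccent() is declared with COST 1000, i.e. roughly
-- 1000 * cpu_operator_cost = 2.5 cost units per call.
--
--   filter condition:   noaccent(name) ~~ 'B%'        ~ 2.5 per tuple
--   derived indexquals: noaccent(name) >= 'B'
--                   AND noaccent(name) <  'C'         ~ 5.0 per tuple
--
--   per-tuple CPU cost = baserestrictcost
--                        - cost_qual_eval(indexquals)
--                      = 2.5 - 5.0 = -2.5
--
-- The function is charged once in the filter but twice in the
-- indexquals, so the subtraction goes negative, and with enough tuples
-- the total index-scan cost does too.
```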
Even if we had only one indexqual derived from the filter condition,
we'd not be getting the right answer here, because actually the filter
condition *does* have to be evaluated at runtime, since it doesn't
exactly match the indexqual. I think this code probably dates to before
we had any notion of deriving simplified indexquals from special
filter conditions; it's not really right at all for such cases.
I think what we're going to need here is a real determination of exactly
which quals will actually have to be evaluated at runtime. The code is
trying to let that determination be postponed until createplan time, but
maybe we can't get away with that.
I'll see about fixing this for 9.2, but I doubt we'll consider
backpatching it. You should probably back off the cost assigned to the
noaccent function as a workaround.
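That workaround could be applied like so (assuming the function takes a single text argument; adjust the signature and the COST value as appropriate):

```sql
-- Lower the function's declared cost so the subtraction described
-- above can no longer drive the per-tuple estimate negative.
-- 1 is the default procost for SQL-language functions.
ALTER FUNCTION noaccent(text) COST 1;
```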
regards, tom lane