On Thu, Apr 16, 2009 at 10:11 AM, Kevin Grittner wrote:
> Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Bear in mind that those limits exist to keep you from running into
>> exponentially increasing planning time when the size of a planning
>> problem gets big. "Raise 'em to the moon" isn't really a sane
>> strategy. It might be that we could get away with raising them by
>> one or two, given the general improvement in hardware since the
>> values were last looked at; but I'd be hesitant to push the
>> defaults further than that.
> I also think that there was a change somewhere in the 8.2 or 8.3 time
> frame which mitigated this. (Perhaps a change in how statistics were
> scanned?) The combination of a large statistics target and higher
> limits used to drive plan time through the roof, but I'm now seeing
> plan times around 50 ms for limits of 20 and statistics targets of
> 100. Given the savings from the better plans, it's worth it, at least
> in our case.
> I wonder what sort of testing would be required to determine a safe
> installation default with the current code.
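For anyone trying to reproduce that kind of number: I'm assuming "limits
of 20" means from_collapse_limit/join_collapse_limit and the target is
default_statistics_target. A rough per-session sketch (table names are
made up, the values aren't recommendations) would be something like:

    SET from_collapse_limit = 20;
    SET join_collapse_limit = 20;
    SET default_statistics_target = 100;  -- picked up at the next ANALYZE
    ANALYZE;
    \timing                               -- psql: show elapsed time per statement
    -- plain EXPLAIN runs the planner but not the query, so the elapsed
    -- time psql reports is roughly parse + plan time
    EXPLAIN SELECT * FROM t1
      JOIN t2 USING (id) JOIN t3 USING (id)
      JOIN t4 USING (id) JOIN t5 USING (id);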
Well, given all the variables, maybe we should instead be targeting
plan time, either indirectly via estimated values, or directly by
allowing a configurable planning timeout, jumping to an alternate
approach (a simple nested-loop style plan, or GEQO) if one is available.
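For the record, the closest thing we already have to that kind of
"jump to an alternate approach" is geqo_threshold: once a planning
(sub)problem has at least that many FROM items, the planner abandons
exhaustive search and hands the query to the genetic optimizer. A rough
sketch of combining that with the higher limits mentioned above (values
illustrative, not recommendations):

    -- raise the collapse limits, but leave the GEQO cutover at its
    -- default so anything with 12 or more FROM items goes to the
    -- genetic optimizer instead of an exhaustive join search
    SET from_collapse_limit = 20;
    SET join_collapse_limit = 20;
    SET geqo = on;             -- on by default; shown for clarity
    SET geqo_threshold = 12;   -- the default; shown to make the cutover explicit

That's a size-based cutover rather than a true planning timeout, so a
directly configurable time budget would still be something new.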