Re: Really dumb planner decision

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Merlin Moncure" <mmoncure(at)gmail(dot)com>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Matthew Wakeling" <matthew(at)flymine(dot)org>,<gryzman(at)gmail(dot)com>, "Robert Haas" <robertmhaas(at)gmail(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Really dumb planner decision
Date: 2009-04-16 14:11:14
Message-ID: 49E6F632.EE98.0025.0@wicourts.gov
Lists: pgsql-performance

Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Bear in mind that those limits exist to keep you from running into
> exponentially increasing planning time when the size of a planning
> problem gets big. "Raise 'em to the moon" isn't really a sane
> strategy. It might be that we could get away with raising them by one
> or two given the general improvement in hardware since the values were
> last looked at; but I'd be hesitant to push the defaults further than
> that.

I also think that there was a change somewhere in the 8.2 or 8.3 time
frame which mitigated this. (Perhaps a change in how statistics were
scanned?) The combination of a large statistics target and higher
limits used to drive plan time through the roof, but I'm now seeing
plan times around 50 ms for limits of 20 and statistics targets of
100. Given the savings from the better plans, it's worth it, at least
in our case.
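For anyone wanting to try the numbers discussed above, the settings can be raised per-session and the planning cost checked with EXPLAIN. A minimal sketch (the table names are hypothetical; 20 and 100 are the values from my tests, not recommended defaults):

```sql
-- Raise the collapse limits for this session only (the defaults are 8);
-- these are the limits whose planning-time cost Tom cautions about.
SET from_collapse_limit = 20;
SET join_collapse_limit = 20;

-- A larger statistics target gives the planner better row estimates;
-- it takes effect the next time the tables are analyzed.
SET default_statistics_target = 100;

-- EXPLAIN (with \timing turned on in psql) shows how long planning a
-- many-table query takes under these settings.
EXPLAIN SELECT *
  FROM t1
  JOIN t2 USING (id)
  JOIN t3 USING (id);
```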

I wonder what sort of testing would be required to determine a safe
installation default with the current code.

-Kevin
