On Thu, Feb 11, 2010 at 13:25, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com> wrote:
> 2010/2/11 Bart Samwel <bart(at)samwel(dot)tk>:
> > Perhaps this could be based on a (configurable?) ratio of observed
> > time and projected execution time. I mean, if planning it the first time
> > took 30 ms and projected execution time is 1 ms, then by all means NEVER
> > re-plan. But if planning the first time took 1 ms and resulted in a
> > projected execution time of 50 ms, then it's relatively cheap to re-plan
> > every time (cost increase per execution is 1/50 = 2%), and the potential
> > gains are much greater (taking a chunk out of 50 ms adds up quickly).
> It could be a good idea. I don't believe in sophisticated methods; there
> could be a very simple solution. There could be a "limit" on cost: more
> expensive queries could be re-planned every time their cost goes over
> the limit.
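The ratio idea quoted above can be sketched in a few lines. This is purely illustrative; the function name, the 5% threshold, and the millisecond inputs are invented here, and nothing of this shape exists in PostgreSQL:

```python
def should_replan(observed_planning_ms: float,
                  projected_execution_ms: float,
                  max_overhead_ratio: float = 0.05) -> bool:
    """Re-plan only when the planning overhead per execution stays
    below max_overhead_ratio of the projected execution time."""
    if projected_execution_ms <= 0:
        return False
    return observed_planning_ms / projected_execution_ms <= max_overhead_ratio

# 30 ms planning vs. 1 ms projected execution: ratio 30, never re-plan.
# 1 ms planning vs. 50 ms projected execution: ratio 0.02, re-plan every time.
```

With a 5% threshold, the two cases from the quoted paragraph come out as intended: the 30 ms / 1 ms query keeps its cached plan, while the 1 ms / 50 ms query is re-planned on every execution.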
I guess the required complexity depends on how variable planning costs are.
If planning typically takes <= 2 ms, then a hard limit on estimated cost is
useful and can be set as low as (the equivalent of) 15 ms. However, if
planning can cost 50 ms, then the lowest reasonable "fixed" limit is
quite a bit larger than that -- and that does not solve the problem reported
earlier in this thread, where a query takes 30 ms using a generic plan and 1
ms using a specialized plan.
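The failure mode of a fixed limit can be made concrete with a small sketch. The limit value and names are hypothetical, chosen only to match the numbers in this thread: if planning itself can cost 50 ms, the limit must sit above that, so a query whose generic plan runs in 30 ms never triggers re-planning:

```python
# Must exceed the worst-case planning cost (~50 ms), or re-planning
# could cost more than it saves. Value is illustrative only.
COST_LIMIT_MS = 60.0

def replan_under_fixed_limit(generic_plan_ms: float,
                             cost_limit_ms: float = COST_LIMIT_MS) -> bool:
    """Fixed-limit heuristic: re-plan only queries whose estimated
    cost exceeds the limit."""
    return generic_plan_ms > cost_limit_ms

# Problem case from this thread: generic plan 30 ms, specialized plan 1 ms.
# 30 < 60, so the query is never re-planned and keeps paying 30 ms.
```

This is exactly the case the ratio approach handles and a fixed limit does not.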
Anyhow, I have no clue how much time the planner takes. Can anybody provide
any statistics in that regard?