"Simon Riggs" <simon(at)2ndquadrant(dot)com> writes:
> On Wed, 2007-11-28 at 14:48 +0100, Csaba Nagy wrote:
>> In fact an even more useful option would be to ask the planner to throw
>> error if the expected cost exceeds a certain threshold...
> Well, I've suggested it before:
> statement_cost_limit on pgsql-hackers, 1 March 2006
> Would people like me to re-write and resubmit this patch for 8.4?
> Tom's previous concerns were along the lines of "How would you know what
> to set it to?", given that the planner costs are mostly arbitrary numbers.
Hm, that's only kind of true.
Since 8.mumble, seq_page_cost is itself configurable, meaning you can adjust
the base unit and calibrate all the parameters to be time in whatever unit you
choose.
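As a minimal sketch of that rescaling (the per-page timing below is an assumed
example, not a measurement): multiply each cost parameter by the measured time
of one sequential page read, and planner costs then read roughly as
milliseconds.

```python
# Sketch: rescale the planner cost parameters so that one planner cost
# unit corresponds to ~1 ms of estimated runtime.  The measured page-read
# time below is an assumed example value, not a real measurement.

measured_seq_page_ms = 0.05   # assumed: one sequential page read takes 0.05 ms

# PostgreSQL's documented defaults, in units of "one sequential page read"
defaults = {
    "seq_page_cost": 1.0,
    "random_page_cost": 4.0,
    "cpu_tuple_cost": 0.01,
    "cpu_index_tuple_cost": 0.005,
    "cpu_operator_cost": 0.0025,
}

# Scale everything by the per-page time so a cost of N means ~N ms.
calibrated = {name: round(value * measured_seq_page_ms, 6)
              for name, value in defaults.items()}

for name, value in calibrated.items():
    print(f"SET {name} = {value};")
```

The relative ratios between the parameters are unchanged; only the base unit
moves, so plans are chosen identically but the cost numbers become readable
as time.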
But even assuming you haven't adjusted seq_page_cost and all the other
parameters to match, the numbers aren't entirely arbitrary. They represent
time in units of "however long a single sequential page read takes".
Obviously few people know how long such a page read takes, but surely you
could just time a few sequential reads of large tables and set the limit to
some multiple of whatever you find.
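A rough sketch of that calibration (the scan timing and page count are assumed
example numbers): time a sequential scan of a table whose page count you know,
derive the per-page time, and convert a wall-clock budget into cost units.

```python
# Sketch: derive a cost threshold from one timed sequential scan.
# Both input numbers below are assumed examples, not real measurements.

scan_seconds = 2.0     # assumed: a seq scan of the table took 2 seconds
table_pages = 10_000   # assumed: the table's page count (pg_class.relpages)

# Time for one sequential page read, in milliseconds.
per_page_ms = scan_seconds * 1000 / table_pages   # 0.2 ms per page

# Convert a wall-clock budget into planner cost units (page-read units).
budget_ms = 1000.0
cost_limit = budget_ms / per_page_ms

print(f"one page read ~ {per_page_ms} ms")
print(f"cost threshold for a ~{budget_ms:.0f} ms budget: {cost_limit:.0f}")
```

With these assumed numbers the threshold comes out around 5000 cost units for
a one-second budget; in practice you'd set the limit to a generous multiple of
that, since the estimate is coarse.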
This isn't going to be precise to the level of being able to avoid executing
any query which will take over 1000ms. But it should be able to catch
unconstrained cross joins, large sequential scans, and the like.
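To see why even a coarse ceiling catches the worst offenders, a sketch (row
counts and the threshold are assumed examples): a cross join with no join
qual is estimated to produce the product of its input row counts, so its cost
estimate dwarfs any plausible limit.

```python
# Sketch: an unconstrained cross join's output-row estimate is the
# product of the input row counts, so even a crude per-tuple cost puts
# it orders of magnitude over any sane threshold.  The row counts and
# the threshold below are assumed example numbers.

cpu_tuple_cost = 0.01          # PostgreSQL's default per-output-tuple cost
rows_a, rows_b = 100_000, 100_000

output_rows = rows_a * rows_b               # 10 billion output tuples
join_cost = output_rows * cpu_tuple_cost    # ignoring the scan costs entirely

cost_limit = 5_000             # an assumed example threshold
print(join_cost, join_cost > cost_limit)
```

Even ignoring everything but the per-tuple CPU cost, the estimate is tens of
thousands of times over the threshold, so no precision in the calibration is
needed to reject it.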
Ask me about EnterpriseDB's RemoteDBA services!