On 11/29/07, Gregory Stark <stark(at)enterprisedb(dot)com> wrote:
> "Simon Riggs" <simon(at)2ndquadrant(dot)com> writes:
> > On Wed, 2007-11-28 at 14:48 +0100, Csaba Nagy wrote:
> >> In fact an even more useful option would be to ask the planner to throw
> >> error if the expected cost exceeds a certain threshold...
> > Tom's previous concerns were along the lines of "How would you know what
> > to set it to?", given that the planner costs are mostly arbitrary numbers.
> Hm, that's only kind of true.
> Obviously few people know how long such a page read takes but surely you would
> just run a few sequential reads of large tables and set the limit to some
> multiple of whatever you find.
> This isn't going to be precise to the level of being able to avoid executing
> any query which will take over 1000ms. But it is going to be able to catch
> unconstrained cross joins or large sequential scans or such.
Isn't that what statement_timeout is for? Since this is entirely based
on estimates, using arbitrary fuzzy numbers for this seems fine to me;
precision isn't really the goal.
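For comparison, statement_timeout already gives a hard runtime cap per session or per role (values below are illustrative, and the role name is hypothetical):

```sql
-- Abort any statement in this session that runs longer than 1000 ms.
SET statement_timeout = 1000;

-- Or make a timeout the default for a particular role:
ALTER ROLE batch_user SET statement_timeout = 60000;
```

The difference is that statement_timeout kills a query after it has already burned the time, whereas a planner-cost threshold would refuse to start it at all.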