Peter Eisentraut <peter_e(at)gmx(dot)net> writes:
> I don't recall, has it ever been considered to compare the number of
> actual result rows against the estimate computed by the optimizer and then
> draw some conclusions from it? Both numbers should be easily available.
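The comparison being proposed can be sketched in a few lines. The function names and the factor-of-ten cutoff below are illustrative assumptions, not anything the planner actually computes:

```python
import math

def estimate_error(estimated_rows, actual_rows):
    """Log-scale ratio of actual to estimated row count.
    0.0 means a perfect estimate; +1.0 means the planner
    underestimated by a factor of ten."""
    # Guard against zero counts, which would blow up the log.
    est = max(estimated_rows, 1)
    act = max(actual_rows, 1)
    return math.log10(act / est)

def badly_estimated(estimated_rows, actual_rows, threshold=1.0):
    """Flag estimates off by more than 10x in either direction.
    The threshold is a hypothetical cutoff, chosen for illustration."""
    return abs(estimate_error(estimated_rows, actual_rows)) > threshold
```

Both numbers are indeed available at executor shutdown; the hard part, as discussed below, is deciding what to do once the flag fires.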
It's been suggested, but doing anything with the knowledge that you
guessed wrong seems to be an AI project, the more so as the query gets
more complex. I haven't been able to think of anything very productive
to do with such a comparison (no, I don't like any of your suggestions
;-)). Which parameter should be tweaked on the basis of a bad result?
If the real problem is not a bad parameter but a bad model, will the
tweaker remain sane, or will it drive the parameters to completely
unreasonable values?

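The instability worry can be made concrete with a toy feedback loop. Everything below is a hypothetical illustration, not planner code: when the model wrongly forces two queries with different true selectivities through one shared knob, a naive tweaker chases a moving target instead of settling.

```python
def tune_shared_knob(true_selectivities, steps, start=0.5):
    """Naive feedback: after each query, multiply the shared
    selectivity knob by (actual / estimated), i.e. snap it to
    whatever the last query observed. Returns the knob's
    trajectory over the given number of round-robin steps."""
    knob = start
    trajectory = []
    for i in range(steps):
        true_sel = true_selectivities[i % len(true_selectivities)]
        knob *= true_sel / knob  # actual/estimated ratio
        trajectory.append(knob)
    return trajectory

# Bad parameter, adequate model: one query, one knob -- converges.
converged = tune_shared_knob([0.2], steps=4)

# Bad model: two queries with very different true selectivities
# forced through one shared knob -- oscillates forever.
oscillating = tune_shared_knob([0.001, 0.9], steps=6)
```

Damping the update (an exponent below one on the ratio) shrinks the oscillation but does not remove it, which is the point: no parameter setting is right for both queries when the model itself is wrong.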
The one thing that we *can* recommend unreservedly is running ANALYZE
more often, but that's just a DB administration issue, not something you
need deep study of the planner results to discover. In 7.2, both VACUUM
and ANALYZE should be sufficiently cheap/noninvasive that people can
just run them in background every hour-or-so...
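The hourly-background idea is an ordinary cron job. The database name and the exact schedule below are placeholders; `vacuumdb --analyze` is the stock client-side wrapper for VACUUM ANALYZE:

```shell
# Hypothetical crontab entry: VACUUM ANALYZE the placeholder
# database "mydb" at the top of every hour, discarding output.
0 * * * *  vacuumdb --analyze mydb >/dev/null 2>&1
```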
regards, tom lane