
Re: Checking query results against selectivity estimate

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Peter Eisentraut <peter_e(at)gmx(dot)net>
Cc: PostgreSQL Development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Checking query results against selectivity estimate
Date: 2001-07-05 18:36:21
Message-ID: 3846.994358181@sss.pgh.pa.us
Lists: pgsql-hackers
Peter Eisentraut <peter_e(at)gmx(dot)net> writes:
> I don't recall, has it ever been considered to compare the number of
> actual result rows against the estimate computed by the optimizer and then
> draw some conclusions from it?  Both numbers should be easily available.

It's been suggested, but doing anything with the knowledge that you
guessed wrong seems to be an AI project, the more so as the query gets
more complex.  I haven't been able to think of anything very productive
to do with such a comparison (no, I don't like any of your suggestions
;-)).  Which parameter should be tweaked on the basis of a bad result?
If the real problem is not a bad parameter but a bad model, will the
tweaker remain sane, or will it drive the parameters to completely
ridiculous values?

The one thing that we *can* recommend unreservedly is running ANALYZE
more often, but that's just a DB administration issue, not something you
need deep study of the planner results to discover.  In 7.2, both VACUUM
and ANALYZE should be sufficiently cheap/noninvasive that people can
just run them in the background every hour or so...
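
As a rough sketch (the database name and schedule below are just
placeholders), an hourly user crontab entry along these lines would do it:

	# hypothetical crontab entry: refresh planner statistics hourly
	0 * * * *	psql -d mydb -c "VACUUM ANALYZE"

The same thing can be done with the vacuumdb script if you'd rather not
go through psql.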

			regards, tom lane
