> > So, the selectivity that a search for the most common value would
> > have is a reasonable estimate for the selectivity of a search for any
> > value. That's a bogus assumption in this case --- but it's hard to
> > justify making any other assumption in general.
> Other db's usually use the value count(*) / nunique for the lightweight
> estimate. This makes the assumption that the distinct index values are
> evenly distributed. That is on average a correct assumption, whereas our
> assumption on average overestimates the number of rows returned.
> I am not sure we have nunique info though.
Yes, that's the problem. Figuring out the number of uniques is hard,
especially with no index.
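[Editor's note: a minimal sketch contrasting the two estimators discussed above — the most-common-value frequency versus the uniform count(*)/nunique assumption. The function names and the sample column are illustrative, not from any database's actual planner code.]

```python
from collections import Counter

def mcv_selectivity(values):
    """Estimate selectivity by assuming any searched-for value is as
    frequent as the most common value (the assumption criticized above)."""
    counts = Counter(values)
    return counts.most_common(1)[0][1] / len(values)

def uniform_selectivity(values):
    """Estimate selectivity as 1 / nunique, i.e. assume the rows are
    spread evenly across the distinct values (count(*)/nunique rows each)."""
    return 1 / len(set(values))

# A skewed column of 100 rows: one value dominates, five are rare.
col = ["a"] * 90 + ["b", "c", "d", "e", "f"] * 2

print(mcv_selectivity(col))      # 0.9 -- grossly overestimates a typical match
print(uniform_selectivity(col))  # ~0.167 (6 distinct values)
```

On skewed data the MCV-based estimate overshoots for every value except the most common one, while 1/nunique is correct on average across the distinct values — which is exactly the trade-off described in the quoted text.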
Bruce Momjian | http://www.op.net/~candle
maillist(at)candle(dot)pha(dot)pa(dot)us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026