From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: David Gould <daveg(at)sonic(dot)net>
Cc: Alexander Kuzmenkov <a(dot)kuzmenkov(at)postgrespro(dot)ru>, pgsql-hackers(at)lists(dot)postgresql(dot)org, Alina Alexeeva <alexeeva(at)adobe(dot)com>, Ullas Lakkur Raghavendra <lakkurra(at)adobe(dot)com>
Subject: Re: [patch] BUG #15005: ANALYZE can make pg_class.reltuples inaccurate.
David Gould <daveg(at)sonic(dot)net> writes:
> I'm confused at this point; I provided a patch that addresses this and a
> test case. We seem to be discussing everything as if we first noticed the
> issue. Have you reviewed the patch and the attached analysis and tested it?
> Please comment on that?
I've looked at the patch. The questions in my mind are
(1) do we really want to go over to treating ANALYZE's tuple density
result as gospel, contradicting the entire thrust of the 2011 discussion?
(2) what should we do in the VACUUM case? Alexander's argument seems
to apply with just as much force to the VACUUM case, so either you
discount that or you conclude that VACUUM needs adjustment too.
> This table's reltuples is 18 times the actual row count. It will never converge
> because with 50000953 pages analyze can only adjust reltuples by 0.0006 each time.
But by the same token, analyze only looked at 0.0006 of the pages. It's
nice that for you, that's enough to get a robust estimate of the density
everywhere; but I have a nasty feeling that that won't hold good for
everybody. The whole motivation of the 2011 discussion, and the issue
that is also seen in some other nearby discussions, is that the density
can vary wildly.
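[For readers following the thread: the incremental behavior being debated can be sketched as below. This is a minimal Python model of the weighted-density update from the 2011 change (modeled on vac_estimate_reltuples in src/backend/commands/vacuum.c); the table figures in the usage note are the illustrative ones from David's example, not measured values.]

```python
def estimate_reltuples(total_pages, scanned_pages,
                       old_rel_pages, old_rel_tuples, scanned_tuples):
    """Blend the stored tuple density with the freshly sampled density,
    weighting the new sample by the fraction of pages it covered."""
    old_density = old_rel_tuples / old_rel_pages
    new_density = scanned_tuples / scanned_pages
    # The new sample only moves the estimate by scanned_pages/total_pages.
    multiplier = scanned_pages / total_pages
    density = old_density * (1.0 - multiplier) + new_density * multiplier
    return density * total_pages

# Illustration: a 50,000,953-page table whose stored density (180
# tuples/page) is 18x the true density (10 tuples/page), sampled with
# ANALYZE's default ~30,000 pages.
total_pages = 50_000_953
scanned_pages = 30_000
est = estimate_reltuples(total_pages, scanned_pages,
                         total_pages, 180.0 * total_pages,
                         10.0 * scanned_pages)
# One ANALYZE closes only ~0.06% of the gap: density goes from
# 180 to about 179.898 tuples/page.
print(est / total_pages)
```

Under this model each run shrinks the error by a factor of only (1 - scanned_pages/total_pages), which is the "can only adjust reltuples by 0.0006 each time" behavior at issue; David's patch instead trusts the ANALYZE-measured density outright.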
regards, tom lane