From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Zeugswetter Andreas DAZ SD <ZeugswetterA(at)spardat(dot)at>, pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: Stopgap solution for table-size-estimate updating problem
Date: 2004-11-28 22:35:53
Message-ID: 21662.1101681353@sss.pgh.pa.us
Lists: pgsql-hackers
Simon Riggs <simon(at)2ndquadrant(dot)com> writes:
> Given we expect an underestimate, can we put in a correction factor
> should the estimate get really low...sounds like we could end up
> choosing nested joins more often when we should have chosen merge joins.
One possibility: vacuum already knows how many tuples it removed. We
could set reltuples equal to, say, the mean of the number-of-tuples-
after-vacuuming and the number-of-tuples-before. In a steady state
situation this would represent a fairly reasonable choice. In cases
where the table size has actually decreased permanently, it'd take a few
cycles of vacuuming before reltuples converges to the new value, but that
doesn't seem too bad.
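A minimal sketch of the arithmetic being proposed (this is illustrative Python, not PostgreSQL source; the function name and the example tuple counts are invented for the illustration):

```python
def estimate_reltuples(tuples_before_vacuum, tuples_after_vacuum):
    """Per the proposal: set reltuples to the mean of the tuple count
    before vacuuming (live + dead) and the count after (live only)."""
    return (tuples_before_vacuum + tuples_after_vacuum) / 2.0

# Hypothetical example: a table permanently shrinks from 1,000,000 live
# tuples to 100,000, so the first vacuum sees 1,000,000 tuples and
# removes 900,000 dead ones.
r1 = estimate_reltuples(1_000_000, 100_000)
# With no further churn, the next vacuum sees 100,000 both before and
# after, so the estimate converges to the true size.
r2 = estimate_reltuples(100_000, 100_000)
print(r1, r2)
```

In a steady state (dead tuples roughly balancing live inserts between vacuums), the mean lands between the inflated pre-vacuum count and the post-vacuum count, which is the "fairly reasonable choice" described above; after a permanent shrink, the estimate overshoots at first and settles over subsequent vacuum cycles.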
A standalone ANALYZE should still do what it does now, though, I think;
namely set reltuples to its best estimate of the current value.
regards, tom lane