On Thu, Mar 15, 2012 at 08:22:24AM +0200, Peter Eisentraut wrote:
> On Wed, 2012-03-14 at 17:36 -0400, Bruce Momjian wrote:
> > Well, I have not had to make major adjustments to pg_upgrade since 9.0,
> > meaning the code is almost completely unchanged and does not require
> > additional testing for each major release. If we go down the road of
> > dumping stats, we will need to adjust for stats changes and test this to
> > make sure we have made the proper adjustment for every major release.
> I think this could be budgeted under keeping pg_dump backward
> compatible. You have to do that anyway for each catalog change, so
> doing something extra for a pg_statistic change shouldn't be too shocking.
Well, the big question is whether the community wants to buy into that
workload. It isn't going to be possible for me to adjust the statistics
dump/restore code based on the changes someone makes unless I can fully
understand the changes by looking at the patch.
I think we have two choices --- either migrate the statistics, or adopt
my approach to generating incremental statistics quickly. Does anyone
see any other options?
In an ideal world, analyze would generate minimal statistics on all
tables/databases, then keep improving them until they reach the default
level, but that is unlikely to happen because of the code complexity
involved. My powers-of-10 approach is probably the best we are going to
do in the short term.
My current plan is to apply the incremental analyze script patch to 9.2,
blog about the patch, and supply downloadable versions of the script
that people can use on 9.1, so we can get feedback on improvements.
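For readers who have not followed the earlier thread: the "powers-of-10"
idea is to ANALYZE the whole cluster several times, raising the
statistics target from 1 upward by factors of ten, so minimal
statistics are available almost immediately after the upgrade and
improve with each pass. The sketch below is a hypothetical illustration
of that sequence (not the actual patch); it only generates the per-pass
SQL, which in practice would be fed through psql to every database in
the new cluster. The final-target value of 100 is the server default
for default_statistics_target in these releases.

```python
def incremental_analyze_sql(final_target=100):
    """Return one SQL batch per pass, with statistics targets
    1, 10, ... up to (but not including) final_target, then a
    final pass at the server default.

    Hypothetical helper illustrating the powers-of-10 scheme; not
    taken from the actual pg_upgrade script.
    """
    batches = []
    target = 1
    while target < final_target:
        # Cheap pass: tiny statistics target, so ANALYZE finishes fast
        # and the planner has *something* to work with right away.
        batches.append("SET default_statistics_target = %d; ANALYZE;" % target)
        target *= 10
    # Final pass: revert to the server default for full-quality statistics.
    batches.append("RESET default_statistics_target; ANALYZE;")
    return batches

for sql in incremental_analyze_sql():
    print(sql)
```

Each batch would be run via something like `psql -c "$sql"` per
database; the early passes trade statistics quality for speed, and the
last pass leaves the cluster in its normal state.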
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
+ It's impossible for everything to be true. +