Re: Speeding up pg_upgrade

From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
Cc: Alexander Kukushkin <cyberdemn(at)gmail(dot)com>, Bruce Momjian <bruce(at)momjian(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Speeding up pg_upgrade
Date: 2017-12-07 16:04:13
Lists: pgsql-hackers


* Alvaro Herrera (alvherre(at)alvh(dot)no-ip(dot)org) wrote:
> Stephen Frost wrote:
> > * Alexander Kukushkin (cyberdemn(at)gmail(dot)com) wrote:
> > > 2 The ANALYZE phase is a pain. I think everybody agrees with that.
> > >
> > > 2.5 Usually ANALYZE stage 1 completes quite fast and performance becomes
> > > reasonable, except in one case: some of the columns might have a
> > > non-default statistics target.
> >
> > Ok, if stage 1 is very fast and performance is reasonable enough
> > after that, then perhaps it's not so bad to keep it as-is for now and
> > focus on the dump/restore time. That said, we should certainly work
> > on improving this as well.
>
> It seems pretty clear to me that we should somehow transfer stats from
> the old server to the new one. Shouldn't it just be a matter of
> serializing the MCV/histogram/ndistinct values, then have capabilities
> to load on the new server? I suppose it'd just be used during binary
> upgrade, but the point seems painful enough for a lot of users.
> Obviously it would not be the raw contents of pg_statistic{,_ext}, but
> rather something a bit higher-level.

Right, I think that's what Bruce was getting at, and it certainly makes
sense to me as well. I agree that it's a definite pain point for
people. One complication is going to be custom data types, of course.
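As a very rough sketch of the extraction side, most of what would need
to be serialized is already exposed through the pg_stats view (the query
below is purely illustrative; a real implementation would presumably
read pg_statistic{,_ext} directly during binary upgrade):

    -- Illustrative only: pull the per-column statistics that would need
    -- to be carried over to the new cluster.  most_common_vals and
    -- histogram_bounds are rendered through the column type's output
    -- function, which is where custom data types complicate matters.
    SELECT schemaname, tablename, attname,
           null_frac, avg_width, n_distinct,
           most_common_vals, most_common_freqs,
           histogram_bounds, correlation
    FROM pg_stats
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema');

The import side would then need some way to reconstruct those anyarray
values against the new cluster's type definitions, which is presumably
where the custom-type pain shows up.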


