Re: Speeding up pg_upgrade

From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Dave Page <dpage(at)pgadmin(dot)org>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Speeding up pg_upgrade
Date: 2017-12-05 14:22:57
Message-ID: 20171205142257.GC25023@momjian.us
Lists: pgsql-hackers

On Tue, Dec 5, 2017 at 11:16:26PM +0900, Dave Page wrote:
> Hi
>
> On Tue, Dec 5, 2017 at 11:01 PM, Bruce Momjian <bruce(at)momjian(dot)us> wrote:
>
> As part of PGConf.Asia 2017 in Tokyo, we had an unconference topic about
> zero-downtime upgrades.  After the usual discussion of using logical
> replication, Slony, and perhaps having the server be able to read old
> and new system catalogs, we discussed speeding up pg_upgrade.
>
> There are clusters that take a long time to dump the schema from the old
> cluster and recreate it in the new cluster.  One idea of speeding up
> pg_upgrade would be to allow pg_upgrade to be run in two stages:
>
> 1.  prevent system catalog changes while the old cluster is running, and
> dump the old cluster's schema and restore it in the new cluster
>
> 2.  shut down the old cluster and copy/link the data files
>
>
> When we were discussing this, I was thinking that the linking could be done in
> phase 1 too, as that's potentially slow on a very large schema.

Uh, good point! You can create the hard links while the system is
running, no problem! It would only be the copy that can't be done while
the system is running. Of course, a big question is whether hard linking
takes any measurable time.
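
A quick way to put a number on that would be a standalone test along
these lines (a minimal sketch, not pg_upgrade code; the file names and
link count are made up):

/*
 * Time how long it takes to create NLINKS hard links on the local
 * filesystem, to estimate the cost of linking a large cluster.
 */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <fcntl.h>

#define NLINKS 100000

int
main(void)
{
    char        name[64];
    struct timespec start, end;
    int         fd;
    long        i;

    /* create one source file to link against */
    fd = open("src.dat", O_CREAT | O_WRONLY, 0600);
    if (fd < 0)
    {
        perror("open");
        return 1;
    }
    close(fd);

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < NLINKS; i++)
    {
        snprintf(name, sizeof(name), "link%ld.dat", i);
        if (link("src.dat", name) < 0)
        {
            perror("link");
            return 1;
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    printf("%d links in %.3f seconds\n", NLINKS,
           (end.tv_sec - start.tv_sec) +
           (end.tv_nsec - start.tv_nsec) / 1e9);
    return 0;
}

Since creating a hard link is only a metadata operation, I'd expect
even hundreds of thousands of files to link in seconds, but it's worth
measuring on the actual storage.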

--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
