Re: Best practise for upgrade of 24GB+ database

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "francis picabia" <fpicabia(at)gmail(dot)com>, "Brad (Toronto ON CA) Nicholson" <bnicholson(at)hp(dot)com>
Cc: "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Best practise for upgrade of 24GB+ database
Date: 2012-01-20 21:42:09
Message-ID: 4F198B510200002500044A64@gw.wicourts.gov

francis picabia <fpicabia(at)gmail(dot)com> wrote:

> That's great information. 9.0 is introducing streaming
> replication, so that is another option I'll look into.

We upgrade multi-TB databases in just a couple of minutes with
pg_upgrade using the hard-link option. That doesn't count
post-upgrade vacuum/analyze time, but depending on your usage you
might get away with analyzing a few tables before letting users in,
and doing the database-wide vacuum analyze while the database is
in use.
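
In case it helps, here is a rough sketch of what that looks like.
The version numbers, paths, database name, and table name below are
placeholders for illustration, not our actual layout:

    # Stop both clusters, then run pg_upgrade from the new
    # version's bin directory with --link so data files are
    # hard-linked rather than copied.
    /usr/pgsql-9.1/bin/pg_upgrade \
        --old-datadir /var/lib/pgsql/9.0/data \
        --new-datadir /var/lib/pgsql/9.1/data \
        --old-bindir  /usr/pgsql-9.0/bin \
        --new-bindir  /usr/pgsql-9.1/bin \
        --link

    # Start the new cluster, then analyze the hottest tables
    # before letting users back in (names are hypothetical).
    psql -d mydb -c "ANALYZE important_table;"

    # Run the database-wide pass while the database is in use.
    vacuumdb --all --analyze

Since --link shares the data files between the old and new
clusters, you can't fall back to starting the old cluster once the
new one has been started, so take a fresh backup first.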

One of the other options might be better for you, but this one has
worked well for us.

-Kevin
