Re: minimizing downtime when upgrading

From: Kenneth Downs <ken(at)secdat(dot)com>
To: snacktime <snacktime(at)gmail(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: minimizing downtime when upgrading
Date: 2006-06-16 10:40:59
Message-ID: 44928ABB.9070905@secdat.com
Lists: pgsql-general

snacktime wrote:

> Anyone have any tips for minimizing downtime when upgrading? So far
> we have done upgrades during scheduled downtimes. Now we are getting
> to the point where the time required for a standard dump/restore is
> just too long. What have others done when downtime is critical? The
> only solution we have been able to come up with is to migrate the data
> on a per user basis to a new database server. Each user is a
> merchant, and the data in the database is order data. Migrating one
> merchant at a time will keep the downtime per merchant limited to just
> the time it takes to migrate the data for that merchant, which is
> acceptable.
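The per-merchant approach described above amounts to a filtered copy between servers. A minimal sketch, assuming a table named `orders` with a `merchant_id` column (both hypothetical, not stated in the thread); connection strings are placeholders, and the script only prints the commands it would run rather than executing them:

```shell
#!/bin/sh
# Hypothetical sketch of migrating one merchant at a time. The table
# name (orders), column (merchant_id), and connection strings are
# assumptions for illustration; commands are printed, not executed.
MERCHANT_ID=42
OLD_DB="host=oldhost dbname=orders"
NEW_DB="host=newhost dbname=orders"

# Step 1: while this merchant's traffic is paused, copy their rows
# out of the old server with a filtered \copy.
DUMP_CMD="psql \"$OLD_DB\" -c \"\\copy (SELECT * FROM orders WHERE merchant_id = $MERCHANT_ID) TO 'merchant_$MERCHANT_ID.csv' CSV\""

# Step 2: load the rows into the new server, then repoint the merchant
# and verify row counts before resuming traffic.
LOAD_CMD="psql \"$NEW_DB\" -c \"\\copy orders FROM 'merchant_$MERCHANT_ID.csv' CSV\""

echo "$DUMP_CMD"
echo "$LOAD_CMD"
```

Downtime per merchant is then bounded by the size of that merchant's data, not the whole database, which is the property the poster is after.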

AFAIK it has always been the case that you should expect to have to dump
out your databases and reload them for version upgrades.

Is anybody over at the dev team considering what an onerous burden this
is? Is anyone considering doing away with it?
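For reference, the dump-and-reload cycle mentioned above usually takes roughly this shape. Host names, ports, and the database name are placeholders, and the script only prints the command rather than running it:

```shell
#!/bin/sh
# Rough shape of a major-version upgrade via dump and reload.
# Hosts, ports, and the database name are placeholders.
DB=mydb
# Run the newer server's pg_dump against the old cluster and pipe
# straight into the new cluster, avoiding an intermediate dump file.
UPGRADE_CMD="pg_dump -h oldhost -p 5432 $DB | psql -h newhost -p 5433 $DB"
echo "$UPGRADE_CMD"
```

The downtime is the full duration of that pipe, which is why it stops scaling once the database grows large.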

Attachment Content-Type Size
ken.vcf text/x-vcard 186 bytes
