Re: minimizing downtime when upgrading

From: snacktime <snacktime(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: minimizing downtime when upgrading
Date: 2006-06-16 17:16:43
Message-ID: 1f060c4c0606161016m6e3a11fct1b8b79c1e96808d3@mail.gmail.com
Lists: pgsql-general

On 6/16/06, Richard Huxton <dev(at)archonet(dot)com> wrote:

> The other option would be to run replication, e.g. slony to migrate from
> one version to another. I've done it and it works fine, but it will mean
> slony adding its own tables to each database. I'd still do it one
> merchant at a time, but that should reduce your downtime to seconds.
>

I'll have to take another look at Slony; it's been a while. Our
database structure is a bit nonstandard. Being a payment gateway, we
are required to keep merchants' data separated, which means not
mixing data from different merchants in the same table. So what we do
is give every user their own schema, with their own set of tables.
Yes, I know that's not considered best practice design-wise, but
separate databases would have caused even more issues, and as it
turns out there are some advantages to the separate-schema approach
that we never thought of. Last time I looked at Slony, you had to
configure it for each individual table you want replicated. We have
around 50,000 tables, and more are added on a daily basis.
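
Just to sketch the idea (this isn't from the thread, and the
connection string, set/origin ids, and table numbering are all
assumptions on my part), the slonik configuration could probably be
generated rather than written by hand, something along these lines in
Python with psycopg2:

    # Hypothetical sketch: emit a slonik "set add table" line for every
    # table in every merchant schema, instead of configuring ~50,000
    # tables by hand.  Connection details and ids are made up.
    import psycopg2

    conn = psycopg2.connect("dbname=gateway user=postgres")
    cur = conn.cursor()

    # List user tables, skipping the system schemas; each merchant
    # schema carries its own copy of the table set.
    cur.execute("""
        SELECT table_schema, table_name
        FROM information_schema.tables
        WHERE table_type = 'BASE TABLE'
          AND table_schema NOT IN ('pg_catalog', 'information_schema')
        ORDER BY table_schema, table_name
    """)

    for table_id, (schema, table) in enumerate(cur.fetchall(), start=1):
        # Slony wants a unique id and a fully qualified name per table.
        print("set add table (set id = 1, origin = 1, id = %d, "
              "fully qualified name = '%s.%s');" % (table_id, schema, table))

    cur.close()
    conn.close()

That still leaves the problem of tables being added daily (the set
would have to be amended each time), and Slony also expects every
replicated table to have a primary key or a unique index, so this
would only be a starting point.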
