
Re: multiple servers updating from/to one

From: Chris Browne <cbbrowne(at)acm(dot)org>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: multiple servers updating from/to one
Date: 2005-03-30 16:18:31
Message-ID:
Lists: pgsql-admin
aramefarpado(at)gmail(dot)com (Marco Carvalho) writes:
> Hi, all
> I need to do this scenario:
> Web Application          Web Application          Web Application
>        |                        |                        |
>  pgsql server 1          pgsql server 2          pgsql server 3
>        |                        |                        |
>        | adsl                   | adsl                   | adsl
>        |                        |                        |
>        ---------------------------------------------------
>                                 |
>                      pgsql server ("master")
> I need several servers in different locations, each running exactly
> the same things (one Web Application and one pgsql server), all
> synchronizing with a "master" server through unstable adsl
> connections.
> So, if I do an insert on server 1, the data goes to the "master" and
> becomes available to the other servers, and vice versa; and if the
> adsl connection breaks, the Web Application keeps working with the
> local server, and when the connection is restored both databases are
> synchronized.
> Is this possible?
> If so, what tools do I need?

The only system I am aware of that can support this sort of usage
scenario is PeerDirect's replication system.

The replication systems "traditionally available" for use with
PostgreSQL do not directly support your desire for a multimaster
approach; systems like Slony-I, Mammoth Replicator, eRServer, and such
require that there be just one master system, with the replicas being
read-only.
The Slony-II system being worked on is a multimaster system, but it
requires the "live" servers to stay synchronized, so it specifically
does NOT support the "operate while disconnected, and merge in changes
upon return" mode that you are suggesting.

There is _possibly_ a way to get Slony-I to do what you want, if your
application is suitably malleable...

In Slony-I, it is necessary for one and only one host to be the
"master" or "origin" for each table.  But it supports the notion of
having replication sets (i.e., sets of tables that are replicated
together) with differing origins.

Thus, organizing those sets into their own schemas, you might set
things up like this:
create schema master;
create table master.ap ( stuff );
create table master.users ( stuff );
create table master.inventory ( stuff );

create schema app1;
create table app1.ap ( stuff );
create table app1.users ( stuff );
create table app1.inventory ( stuff );

create schema app2;
create table app2.ap ( stuff );
create table app2.users ( stuff );
create table app2.inventory ( stuff );

create schema app3;
create table app3.ap ( stuff );
create table app3.users ( stuff );
create table app3.inventory ( stuff );

You'd set up the respective servers as the "origins" for the four
replication sets that naturally fall out of this.
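Registering those origins is done with a slonik script.  A minimal
sketch, assuming a cluster named "example" with node 1 as the master
and node 2 as app1's server; all ids, set numbers, conninfo strings,
and the choice of tables are invented for illustration, and the
store-path/subscription plumbing for the other nodes is omitted:

```
cluster name = example;
node 1 admin conninfo = 'dbname=app host=master';
node 2 admin conninfo = 'dbname=app host=app1';

# Set 1 originates on the master (node 1).
create set (id = 1, origin = 1, comment = 'master-origin tables');
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'master.ap');

# Set 2 originates on app1 (node 2); the master subscribes to it.
create set (id = 2, origin = 2, comment = 'app1-origin tables');
set add table (set id = 2, origin = 2, id = 10,
               fully qualified name = 'app1.ap');
subscribe set (id = 2, provider = 2, receiver = 1, forward = no);
```

Each app server would get its own set along the same lines, with the
master subscribed as receiver of every app-origin set.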

Some process on the "master" server would be responsible for
synchronizing the sets, perhaps by adding some additional table that
feeds back what the app servers should update...  

In effect:

  "I pulled orders 123, 577, and 899 from app1, and put them into the
  master, so feel free to delete them..."

  This would be encoded in a new "master" table, perhaps thus (note
  that "table" is a reserved word in PostgreSQL, hence a table_name
  column instead):

  insert into master.downstream_changes (server, table_name, id) values ('app1', 'orders', '123');
  insert into master.downstream_changes (server, table_name, id) values ('app1', 'orders', '577');
  insert into master.downstream_changes (server, table_name, id) values ('app1', 'orders', '899');

Thus, the app1, app2, and app3 schemas would just contain transient
data that, once processed on the master, would get deleted.
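That feedback loop might look something like the following.  This is
only a sketch under assumptions not in the original: an orders table
keyed by an id column, and a table_name column in downstream_changes
(since "table" is a reserved word); every name here is invented.

```sql
-- On the master: pull over any app1 orders not yet recorded, then
-- note them in downstream_changes so app1 knows they were taken.
begin;

insert into master.orders
    select o.*
      from app1.orders o
     where not exists (select 1 from master.downstream_changes d
                        where d.server = 'app1'
                          and d.table_name = 'orders'
                          and d.id = o.id);

insert into master.downstream_changes (server, table_name, id)
    select 'app1', 'orders', o.id
      from app1.orders o
     where not exists (select 1 from master.downstream_changes d
                        where d.server = 'app1'
                          and d.table_name = 'orders'
                          and d.id = o.id);

commit;

-- Later, on app1 (the origin of the app1.* set), once the master's
-- downstream_changes table has replicated back down:
delete from app1.orders
 where id in (select id from master.downstream_changes
               where server = 'app1' and table_name = 'orders');
```

The delete has to run on app1 itself, since app1 is the only node
allowed to modify tables in the app1-origin set.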

Slony-I isn't particularly happy about hosts that stay disconnected a
lot of the time, so I'd hate to set this up...
(format nil "~S(at)~S" "cbbrowne" "")
"The  present  need for  security  products far exceeds  the number of
individuals    capable of    designing  secure  systems. Consequently,
industry  has resorted to  employing folks  and purchasing "solutions"
from vendors that shouldn't be let near a project involving securing a
system."  -- Lucky Green

