Best way to replicate to large number of nodes

From: Brian Peschel <brianp(at)occinc(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Best way to replicate to large number of nodes
Date: 2010-04-21 20:41:26
Message-ID: 4BCF62F6.7080107@occinc.com
Lists: pgsql-general

I have a replication problem I am hoping someone has come across before
and can provide a few ideas.

I am looking at a configuration of one 'writable' node and anywhere
from 10 to 300 'read-only' nodes. Almost all of these nodes will be
across a WAN from the writable node (some over slow VPN links, too).
I am looking for a way to replicate as quickly as possible from the
writable node to all the read-only nodes. I can pretty much guarantee
the read-only nodes will never become master nodes. Also, the updates
to the writable node are batched and happen at known times (i.e. the
data is only updated when I want it updated, not constantly), but when
changes occur, there are a lot of them at once.
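
For context, the crude batch-push fallback I could script myself would
look roughly like the sketch below (the host names, table list, and
database name are made-up placeholders, not our real setup):

    #!/bin/sh
    # Sketch only: dump the updated tables from the writable node,
    # then replay the dump on every read-only node in parallel.
    TABLES="-t inventory -t pricing"
    pg_dump -h writer.example.com $TABLES --clean mydb > /tmp/batch.sql
    for host in reader01 reader02 reader03; do
        psql -h "$host" -f /tmp/batch.sql mydb &
    done
    wait

That is workable for a handful of nodes, but shipping full dumps over
slow VPN links to 300 machines is exactly what I am hoping a proper
replication tool would avoid.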

We have used Slony-I for other clusters, but those are all 1-master,
2-slave configurations (where either slave could become the master),
and some of our admins are worried about trying to maintain a very
large cluster that way (e.g. pushing out schema changes).
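
To make the schema-change worry concrete: as I understand it, DDL has
to be distributed through slonik's EXECUTE SCRIPT, so with hundreds of
nodes every change means running something like the following (the
cluster name, connection info, and set/node IDs are examples only):

    #!/bin/sh
    # Example only: distribute a DDL change to all Slony-I nodes via
    # slonik EXECUTE SCRIPT.  All names and IDs below are placeholders.
    slonik <<_EOF_
    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=mydb host=writer.example.com user=slony';
    execute script (set id = 1,
                    filename = '/tmp/schema_change.sql',
                    event node = 1);
    _EOF_

and then waiting for the event to propagate to every subscriber.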

I took a look at the wiki
(http://wiki.postgresql.org/wiki/Replication%2C_Clustering%2C_and_Connection_Pooling)
and nothing really jumped out at me. pgpool or Mammoth sounded like
they might be interesting, but I was hoping someone would have some
opinions before I randomly start trying things.

Thanks in advance,
Brian
