From: Ben Chobot <bench(at)silentmedia(dot)com>
To: Brian Peschel <brianp(at)occinc(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: [SPAM] Re: Best way to replicate to large number of nodes
Date: 2010-04-22 15:12:42
Message-ID: 8B36C1FC-1BCE-4A7A-9FAC-C64EC6B1416A@silentmedia.com
Lists: pgsql-general

On Apr 21, 2010, at 1:41 PM, Brian Peschel wrote:

> I have a replication problem I am hoping someone has come across before and can provide a few ideas.
>
> I am looking at a configuration of one 'writable' node and anywhere from 10 to 300 'read-only' nodes. Almost all of these nodes will be across a WAN from the writable node (some over slow VPN links too). I am looking for a way to replicate as quickly as possible from the writable node to all the read-only nodes. I can pretty much guarantee the read-only nodes will never become master nodes. Also, the updates to the writable node are bunched and happen at known times (i.e. it is only updated when I want it updated, not constantly), but when changes occur, there are a lot of them at once.

Two things you didn't address are the acceptable latency for keeping the read-only nodes in sync with the master - can they lag by a day? A minute? Do you need things to stay synchronous? Also, how big is your dataset? A simple pg_dump and some hot scp action after your batched updates might be able to solve your problem.
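The pg_dump-plus-scp idea above could be sketched roughly as follows. This is only an illustration, not anything from the thread: the database name (`appdb`), the node hostnames (`ro-node-*`), and the paths are all placeholders, and `DRY_RUN` defaults to on so the script just prints the commands it would run.

```shell
#!/bin/sh
# Sketch: after a batch of writes, dump the database once and push the
# dump to every read-only node. DRY_RUN=1 (the default) only prints the
# commands; set DRY_RUN=0 to actually execute them.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "+ $*" || "$@"; }

DB=appdb                                  # placeholder database name
DUMP=/tmp/${DB}.dump
NODES="ro-node-01 ro-node-02 ro-node-03"  # up to 300 in practice

# One compressed, custom-format dump (restorable with pg_restore)
run pg_dump -Fc -f "$DUMP" "$DB"

# Copy to all nodes in parallel; -C compresses, which helps on slow VPN links
for host in $NODES; do
    run scp -C "$DUMP" "$host:/tmp/" &
done
wait
```

Each read-only node would then load the dump locally, e.g. with `pg_restore --clean -d appdb /tmp/appdb.dump`. Whether this beats streaming-style replication depends entirely on the dataset size and the acceptable lag asked about above.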
