Andrew Sullivan wrote:
> I should have stated that differently. First, you're right that if
> you don't know where to look or what to look for, you can easily be
> unaware of nodes being out of sync. What is not a problem with Slony
> is a node ending up in an internally inconsistent state: if
> you have a node that is badly lagged, at least it represents, for
> sure, an actual point in time of the origin set's history. Some of
> the replication systems aren't as careful about this, and it's
> possible to get the replica into a state that never happened on the
> origin. That's much worse, in my view.
> In addition, it is not possible that Slony's system tables report the
> replica as being up to date without them actually being so, because
> the system tables are updated in the same transaction as the data is
> sent. It's hard to read those tables, however, because you have to
> check every node and understand all the states.
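The key property described above is that the replica's "how far have I applied?" bookkeeping commits in the same transaction as the replicated data itself. A toy sketch of that idea (SQLite standing in for a replica; the table and column names are invented, this is not Slony's actual schema):

```python
import sqlite3

# Toy replica: the replicated data and the sync marker live in the
# same database, so they can be updated atomically.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
con.execute("CREATE TABLE sync_state (last_event INTEGER)")
con.execute("INSERT INTO sync_state VALUES (0)")
con.commit()

def apply_event(event_id, stmts):
    """Apply one origin event and advance the marker in ONE transaction.

    If anything fails, the data change and the marker update roll back
    together, so sync_state can never claim an event was applied when
    it was not.
    """
    try:
        with con:  # commits on success, rolls back on exception
            for sql, params in stmts:
                con.execute(sql, params)
            con.execute("UPDATE sync_state SET last_event = ?", (event_id,))
    except sqlite3.Error:
        pass  # lagged, but still a consistent point in the origin's history

apply_event(1, [("INSERT INTO accounts VALUES (?, ?)", (1, 100))])
# Duplicate primary key: the whole event rolls back, marker included.
apply_event(2, [("INSERT INTO accounts VALUES (?, ?)", (1, 200))])
print(con.execute("SELECT last_event FROM sync_state").fetchone()[0])
```

A replica built this way can lag, but it always represents some real point in the origin's history, which is the guarantee being described.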
Yes, and nicely explained!
> (on Londiste DDL + slave chaining)...
> Well, those particular features -- which are indeed the source of much
> of the complexity in Slony -- were planned in from the beginning.
> Londiste aimed to be simpler, so it would be interesting to see
> whether those features could be incorporated without the same
> complexity.
Yeah, that's the challenge!
Personally, I would like DDL to be possible without any special wrappers
or precautions: the usual (accidental) breakage I end up looking at in
Slony happens because someone (or an app's upgrade script) has run an
ALTER TABLE directly on the master schema...
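For context, the supported route in Slony is to push DDL through slonik's EXECUTE SCRIPT command, so every node applies the change at the same position in the replication event stream. A minimal sketch (the cluster name, conninfo, set/node IDs, and file path are made up for illustration):

```
cluster name = mycluster;
node 1 admin conninfo = 'dbname=appdb host=master user=slony';

# Apply the DDL file on the origin, then replay it on every
# subscriber at the same point in the event stream.
execute script (
    set id = 1,
    filename = '/tmp/add_column.sql',
    event node = 1
);
```

Running ALTER TABLE directly on the master bypasses this mechanism entirely, which is exactly how the breakage described above arises.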