Re: Synchronous replication - patch status inquiry

From: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Bruce Momjian <bruce(at)momjian(dot)us>, fazool mein <fazoolmein(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Synchronous replication - patch status inquiry
Date: 2010-09-02 13:51:18
Message-ID: 87vd6ouucp.fsf@hi-media-techno.com
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> Tell that to the DBA. I bet s/he knows what "all standbys" means.
> The fact that the system doesn't know something doesn't make it
> unimportant.

Well, as a DBA I think I'd much prefer to attribute "votes" to each
standby so that each ack is weighted. Let me explain in more detail the
setup I'm thinking about.

The transaction on the master wants a certain "service level" (async,
recv, fsync, replay) and a certain number of votes. As proposed earlier,
the standby would feed back the last XID known locally in each state
(received, synced, replayed) and its current weight, and the master
would arbitrate given that information.
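To make the arbitration concrete, here is a minimal sketch of the
master-side logic described above. All names (Standby, wait_satisfied,
the sample XIDs) are my own illustration, not actual PostgreSQL code:

```python
# Hypothetical sketch: the master counts weighted acks per service level
# and releases the commit once enough votes have come in.
from dataclasses import dataclass

@dataclass
class Standby:
    name: str
    weight: int         # this standby's vote count
    last_received: int  # last XID known received
    last_synced: int    # last XID known fsynced
    last_replayed: int  # last XID known replayed

def acked_weight(standbys, level, commit_xid):
    """Sum the weights of standbys whose reported XID for the
    requested service level has reached the committing XID."""
    attr = {"recv": "last_received",
            "fsync": "last_synced",
            "replay": "last_replayed"}[level]
    return sum(s.weight for s in standbys
               if getattr(s, attr) >= commit_xid)

def wait_satisfied(standbys, level, votes, commit_xid):
    """True once the transaction has gathered enough weighted acks."""
    return acked_weight(standbys, level, commit_xid) >= votes

standbys = [
    Standby("s1", weight=2, last_received=105, last_synced=103, last_replayed=100),
    Standby("s2", weight=1, last_received=104, last_synced=104, last_replayed=101),
]
# A transaction with commit XID 103 wanting fsync with 3 votes:
print(wait_satisfied(standbys, "fsync", 3, 103))   # True: 2 + 1 = 3 votes
print(wait_satisfied(standbys, "replay", 2, 101))  # False: only s2 has replayed 101
```

A slave joining or leaving just means adding or dropping an entry in
that list; the per-transaction votes and level never need to name a
particular standby.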

That's highly flexible: you can have slaves join the party at any point
in time, and change two user GUCs (settable per session, transaction,
function, database, or role, or in postgresql.conf) to set up the
service level target you want to ensure, from the master.

(We could go as far as wanting fsync:2,replay:1 as a service level.)
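A compound service level like that could be carried in a single GUC
string. The syntax below is purely my own sketch of what such a value
might look like, not an agreed-upon format:

```python
# Hypothetical parser for a compound service level such as "fsync:2,replay:1",
# meaning: 2 votes at the fsync state AND 1 vote at the replay state.

def parse_service_level(spec):
    """Turn "fsync:2,replay:1" into {"fsync": 2, "replay": 1}."""
    levels = {}
    for part in spec.split(","):
        state, votes = part.split(":")
        levels[state.strip()] = int(votes)
    return levels

print(parse_service_level("fsync:2,replay:1"))
# {'fsync': 2, 'replay': 1}
```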

From that you get both the "fail when a slave disappears" and the
"please don't shut the service down if a slave disappears" settings, per
transaction, and per slave too (that depends on its weight, remember).

(You can set up the slave weights as powers of 2 and have the service
levels be masks, allowing you to choose precisely which slave will ack
your fsync service level, and you can switch this slave at run time
easily. That sounds cleverer, but also easier to implement given the
flexibility it gives. Precedents in PostgreSQL? The PITR and WAL
Shipping facilities are hard to use and full of traps, but very
flexible.)
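The powers-of-two idea can be sketched in a few lines: each slave's
weight is a distinct bit, and the service level becomes a mask naming
exactly which slaves must ack. Again, the names here are illustrative
only:

```python
# Hypothetical sketch: slave weights as powers of 2, service level as a bitmask.
S1, S2, S3 = 1, 2, 4  # each slave gets a distinct bit as its weight

def mask_satisfied(acked_weights, mask):
    """True once every slave named in the mask has acked."""
    acked = 0
    for weight in acked_weights:
        acked |= weight
    return acked & mask == mask

# Require an ack from S1 and S3 specifically:
print(mask_satisfied([S1, S3], S1 | S3))  # True
print(mask_satisfied([S1, S2], S1 | S3))  # False: S3 has not acked
```

Switching which slave must ack is then just changing the mask GUC, with
no restart and no registration step.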

You can even give some more weight to one slave while you're maintaining
another, so that the master just doesn't complain.

I see a need for a very dynamic *and decentralized* replication topology
setup; I fail to see a need for a centralized registration based setup.

> I agree that we don't absolutely need standby registration for some
> really basic version of synchronous replication. But I think we'd be
> better off biting the bullet and adding it.

What does that mechanism allow us to implement that we can't do without it?
--
dim
