using pg_comparator as replication

From: Erik Aronesty <erik(at)q32(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: using pg_comparator as replication
Date: 2009-11-05 15:48:48
Message-ID: ccd588d90911050748r23d908bek751b36056ff6c3fb@mail.gmail.com
Lists: pgsql-hackers

An update on pg_comparator as an efficient way to do master-slave
replication.

I have been using it for 2 years on a "products" table that has grown from
12,000 rows to 24,000 rows. There are 3 slaves and 1 master. It is sync'ed
every 10 minutes.
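The whole thing is driven by a cron job that is basically a thin wrapper around pg_comparator, something like the sketch below (Python just for illustration; the hostnames, credentials, and database/table names are made up, and the --synchronize/--do-it flags and pgsql:// URL form are what I remember from the pg_comparator docs, so check them against your version):

#!/usr/bin/env python3
# Illustrative 10-minute sync job: run pg_comparator from the master
# against each slave in turn. All names and credentials are placeholders.
import subprocess
import sys

MASTER = "pgsql://sync_user:secret@master.example.com/shop/products"
SLAVES = [
    "pgsql://sync_user:secret@slave1.example.com/shop/products",
    "pgsql://sync_user:secret@slave2.example.com/shop/products",
    "pgsql://sync_user:secret@slave3.example.com/shop/products",
]

def main() -> int:
    failures = 0
    for slave in SLAVES:
        # --synchronize computes the differences and --do-it applies them to
        # the second (slave) connection; the URLs can also name the key and
        # the column subset to compare (see the pg_comparator manual).
        rc = subprocess.call(["pg_comparator", "--synchronize", "--do-it",
                              MASTER, slave])
        if rc != 0:
            failures += 1
            print(f"sync of {slave} failed with exit code {rc}", file=sys.stderr)
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())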

It has never failed or caused problems.

On 23,039 rows, with under 100 rows changed, over a 3 Mbit internet
connection, the sync takes 3.3 seconds, 0.94 seconds of which is CPU time
(1.86 GHz Intel dual core). Most of the time is spent waiting for the
network, and that could be sped up considerably with compression (maybe
5-10 times for my data)... I don't think the Postgres communications
protocol offers compression as an option.
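One workaround, since the server protocol itself won't do it, would be to push the slave connection through an ssh tunnel with compression turned on and point pg_comparator at the local end. A rough sketch (hostnames, ports and credentials are placeholders):

#!/usr/bin/env python3
# Rough sketch: forward the slave's port over ssh with -C (compression)
# and run the sync against the local end of the tunnel.
# Hostnames, ports and credentials are placeholders.
import subprocess
import time

MASTER = "pgsql://sync_user:secret@localhost/shop/products"
SLAVE_HOST = "slave1.example.com"
LOCAL_PORT = 55432

# -C compresses the stream, -N runs no remote command, -L forwards the port.
tunnel = subprocess.Popen(
    ["ssh", "-C", "-N", "-L", f"{LOCAL_PORT}:localhost:5432", SLAVE_HOST])
time.sleep(2)  # crude wait for the tunnel to come up

try:
    slave = f"pgsql://sync_user:secret@localhost:{LOCAL_PORT}/shop/products"
    subprocess.call(["pg_comparator", "--synchronize", "--do-it", MASTER, slave])
finally:
    tunnel.terminate()
    tunnel.wait()

With rows that are mostly text, ssh -C alone would probably recover a good part of that 5-10x.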

I do not synchronize all the columns ... just the 15 most important ones.

The average row size is 284 bytes.

The primary key is an auto-increment integer id.

The databases are all on the internet at cheap colocation centers with
supposedly 10 Mbit high-speed connections that realistically get about 3 Mbit.

I ship a full dump and restore of the table every week... in case there are
tons of changes and the system burps when there are too many. I also
schedule any scripts that might make lots of changes to run just before the
dump/restore.
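The weekly reload itself is nothing fancy; roughly the following, truncating the slave copy and piping a data-only dump of the table across (host, database and table names are placeholders):

#!/usr/bin/env python3
# Rough weekly reload: empty the slave table, then pipe a data-only dump
# of the master table into it. Host, database and table names are placeholders.
import subprocess

MASTER = "master.example.com"
SLAVES = ["slave1.example.com", "slave2.example.com", "slave3.example.com"]
DB = "shop"
TABLE = "products"

for host in SLAVES:
    # Empty the slave copy first; note the table is briefly empty during the reload.
    subprocess.check_call(["psql", "-h", host, "-d", DB,
                           "-c", f"TRUNCATE {TABLE}"])
    # pg_dump --data-only --table emits COPY statements; feed them to psql.
    dump = subprocess.Popen(
        ["pg_dump", "--data-only", "--table", TABLE, "-h", MASTER, DB],
        stdout=subprocess.PIPE)
    subprocess.check_call(["psql", "-h", host, "-d", DB], stdin=dump.stdout)
    dump.stdout.close()
    dump.wait()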

In my 15 years as a DBA, I have never had "replication" (which some say this
isn't, and I say that's a matter of how you define it) work so well.

(Apologies for the hasty post with the wrong subject... please
ignore/delete)
