Re: [HACKERS] database replication

From: "Damond Walker" <dwalker(at)black-oak(dot)com>
To: "Ryan Kirkpatrick" <pgsql(at)rkirkpat(dot)net>
Cc: <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: [HACKERS] database replication
Date: 1999-12-26 15:10:41
Message-ID: 002201bf4fb3$623f0220$b263a8c0@vmware98.walkers.org
Lists: pgsql-hackers

>
> I too have been thinking about this some over the last year or
>two, just trying to find a quick and easy way to do it. I am not so much
>interested in replication as in synchronization, e.g. between a desktop
>machine and a laptop, so I can keep the databases on each in sync with
>each other. For this sort of purpose, both the local and remote databases
>would be "idle" at the time of syncing.
>

I don't think it would matter if the databases are idle or not to be
honest with you. At any single point in time when you replicate I'd figure
that the database would be in a consistent state. So, you should be able to
replicate (or sync) a remote database that is in use. After all, you're
getting a snapshot of the database as it stands at 8:45 PM. At 8:46 PM it
may be totally different...but the next time syncing takes place those
changes would appear in your local copy.

The one problem you may run into is if the remote host is running a
large batch process. It's very likely that you will get only 50% of its
changes when you replicate...but then again, that's why you can schedule the
event to work around such things.

> How about a single, separate table with the fields 'database',
>'tablename', 'oid', and 'last_changed', that would store the same data as
>your PGR_TIME field. It would be separated from the actual data tables, and
>therefore would be totally transparent to any database interface
>applications. The 'oid' field would hold each row's OID, a nice, unique
>identification number for the row, while the other fields would tell which
>table and database the oid is in. Then this table can be compared with the
>same table on a remote machine to quickly find updates and changes, and
>each difference can be dealt with in turn.
>

The problem with OIDs is that they are unique at the local level, but if
you try to use them between servers you can run into overlap. Also, if a
database is under heavy use this table could quickly become VERY large. Add
indexes to this table to help performance and you're taking up even more
disk space.

Using the PGR_TIME field with an index will allow us to find rows which
have changed VERY quickly. All we need to do now is somehow programmatically
find the primary key for a table, so the person setting up replication (or
syncing) doesn't have to have an in-depth knowledge of the schema in order
to set up a syncing schedule.
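
Something like this is what I have in mind for finding the primary key
(just a sketch using the Python extension for PostgreSQL, PyGreSQL's pg
module; the database name and the single-column primary key are assumptions
for illustration only):

    import pg

    db = pg.connect("mydb")

    def primary_key(table):
        # Ask the system catalogs which column is the table's primary key.
        # indkey[0] only handles single-column primary keys.
        sql = """
            SELECT a.attname
              FROM pg_class c, pg_index i, pg_attribute a
             WHERE c.relname = '%s'
               AND i.indrelid = c.oid
               AND i.indisprimary
               AND a.attrelid = c.oid
               AND a.attnum = i.indkey[0]
        """ % table
        rows = db.query(sql).getresult()
        if rows:
            return rows[0][0]
        return None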

>
> I like this idea, better than any I have come up with yet. Though,
>how are you going to handle DELETEs?
>

Oops...how about defining a trigger for this? With deletion I guess we
would have to write a flag into another table saying we deleted record 'X'
with this primary key from this table.
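
For example (only a sketch driven from Python; the pgr_deleted table, the
pgr_log_delete function, and the assumption of a text primary key column
named "id" are all made up for illustration):

    import pg

    db = pg.connect("mydb")

    # Side table recording what was deleted and when.
    db.query("""
        CREATE TABLE pgr_deleted (
            tablename  name,
            pkey       text,
            deleted_at timestamp
        )
    """)

    # plpgsql trigger function; TG_RELNAME names the table that fired it.
    # (RETURNS opaque is the old-style declaration for trigger functions.)
    db.query("""
        CREATE FUNCTION pgr_log_delete() RETURNS opaque AS '
        BEGIN
            INSERT INTO pgr_deleted VALUES (TG_RELNAME, OLD.id, now());
            RETURN OLD;
        END;
        ' LANGUAGE 'plpgsql'
    """)

    db.query("""
        CREATE TRIGGER pgr_del_trig AFTER DELETE ON mytable
            FOR EACH ROW EXECUTE PROCEDURE pgr_log_delete()
    """)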

>
> Yea, this is indeed the sticky part, and would indeed require some
>fine-tuning. Basically, the way I see it is: if the two timestamps for a
>single row do not match (or even if the row and therefore timestamp is
>missing on one side or the other altogether):
> local ts > remote ts => Local row is exported to remote.
> remote ts > local ts => Remote row is exported to local.
> local ts > last sync time && no remote ts =>
> Local row is inserted on remote.
> local ts < last sync time && no remote ts =>
> Local row is deleted.
> remote ts > last sync time && no local ts =>
> Remote row is inserted on local.
> remote ts < last sync time && no local ts =>
> Remote row is deleted.
>where the synchronization process is running on the local machine. By
>exported, I mean the local values are sent to the remote machine, and the
>row on that remote machine is updated to the local values. How does this
>sound?
>

The replication part will be the most complex...that much is for
certain...
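
Just so we're picturing the same thing, here's a rough sketch of that
comparison logic in Python (the data structures are made up: local and
remote would be maps of primary key to last-changed timestamp for one
table, and the action strings are placeholders for whatever the sync code
ends up doing):

    def plan_sync(local, remote, last_sync):
        # local/remote: {primary key: last-changed timestamp} for one table
        actions = []
        for key in local.keys():
            if key in remote:
                if local[key] > remote[key]:
                    actions.append(("export local row to remote", key))
                elif remote[key] > local[key]:
                    actions.append(("export remote row to local", key))
            elif local[key] > last_sync:
                actions.append(("insert local row on remote", key))
            else:
                actions.append(("delete local row", key))
        for key in remote.keys():
            if key not in local:
                if remote[key] > last_sync:
                    actions.append(("insert remote row on local", key))
                else:
                    actions.append(("delete remote row", key))
        return actions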

I've been writing systems in Lotus Notes/Domino for the last year or so
and I've grown quite spoiled by what it can do with regard to replication.
It's not real-time, so you have to gear your applications to this type of
thing (it's possible to create documents, fire off email to notify people of
changes, and have the email arrive before the replicated documents do).
Replicating large Notes/Domino databases takes quite a while...I don't see
any kind of replication or syncing running in the blink of an eye.

Having said that, a good algorithm will have to be written to cut down on
network traffic and to keep database conversations to a minimum. This will
be appreciated by people with low-bandwidth connections, I'm sure (dial-ups,
fractional T1s, etc.).
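
One way to keep the conversation short might be to ship only (key,
timestamp) pairs for rows changed since the last sync, and then pull full
rows only for the handful that actually need to move. A sketch (again
assuming the pg module and the PGR_TIME column; table and column names are
illustrative):

    def changed_since(db, table, keycol, last_sync):
        # Only keys and timestamps cross the wire at this stage.
        sql = "SELECT %s, pgr_time FROM %s WHERE pgr_time > '%s'" \
              % (keycol, table, last_sync)
        return dict(db.query(sql).getresult())

    def fetch_row(db, table, keycol, key):
        # Full row fetched only when plan_sync() says it has to move.
        sql = "SELECT * FROM %s WHERE %s = '%s'" % (table, keycol, key)
        return db.query(sql).getresult()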

> Or run manually for my purposes. Also, maybe follow it
>with a vacuum run on both sides for all databases, as this is going to
>potentially cause lots of table changes that could stand a cleanup.
>

What would a vacuum do to a system being used by many people?

> No, not at all. Though it probably should be renamed from
>replication to synchronization. The former is usually associated with a
>continuous stream of updates between the local and remote databases, so
>they are almost always in sync, and with a queuing ability in case the
>connection is lost for a span of time. Very complex and difficult to
>implement, and it would require hacking server code. :( Something only
>Sybase and Oracle have (as far as I know), and from what I have seen of
>Sybase's replication server support (dated by 5 yrs) it was a pain to set
>up and get running correctly.

It could probably be named either way...but the one thing I really don't
want to do is start hacking server code. The PostgreSQL people have enough
to do without worrying about trying to meld anything I've done to their
server. :)

Besides, I like the idea of having it operate as a stand-alone product.
The only PostgreSQL features we would require are triggers and
plpgsql...what was the earliest version of PostgreSQL that supported
plpgsql? Even then, I don't see the triggers being all that complex.

> I also like the idea of using Python. I have been using it
>recently for some database interfaces (to PostgreSQL of course :), and it
>is a very nice language to work with. Some worries about performance of
>the program though, as Python is only an interpreted language, and I have
>yet to really be impressed with the speed of execution of my database
>interfaces.

The only thing we'd need for Python is the Python extensions for
PostgreSQL...which in turn require libpq, and that's about it. So, it
should be able to run on any platform supported by Python and libpq. Using
Tk for the interface components will require NT people to get additional
software from the 'net; at least it did with older versions of Windows
Python. Unix folks should be happy...assuming they have X running on the
machine doing the replication or syncing. Failing that, I wrote a
curses-based Python interface a while back which provides buttons, progress
bars, input fields, etc. (I called it tinter and it's available at
http://iximd.com/~dwalker). It's a simple interface and could probably be
cleaned up a bit, but it works. :)

> Anyway, it sounds like a good project, and finally one where I
>actually have a clue about what is going on, and the skills to help. So, if
>you are interested in pursuing this project, I would be more than glad to
>help. TTYL.
>

That would be a Good Thing. Have webspace somewhere? If I can get
permission from the "powers that be" at the office I could host a website on
our (Domino) webserver.

Damond
