From: Chris Bitmead <chrisb(at)nimrod(dot)itg(dot)telstra(dot)com(dot)au>
To: Erich <hh(at)cyberpass(dot)net>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Replication options in Postgres
Date: 2000-08-01 01:20:55
Message-ID: 398625F7.C22A345C@nimrod.itg.telecom.com.au
Lists: pgsql-general
I guess if you don't do deletes, then something like selecting all the
records with an oid greater than the highest oid copied in the last
replication cycle would find the most recent additions.
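The oid-watermark idea above might look roughly like this. This is an illustrative sketch only: `incremental_select` and `replicate_cycle` are made-up helper names, and oids are only a safe ordering key as long as the counter never wraps.

```python
# Sketch of oid-based incremental replication (hypothetical helpers).

def incremental_select(table, last_oid):
    """Build the SELECT that pulls only rows added since the last cycle.

    'SELECT oid, *' explicitly includes the hidden oid system column.
    """
    return f"SELECT oid, * FROM {table} WHERE oid > {int(last_oid)} ORDER BY oid"

def replicate_cycle(rows, last_oid):
    """Given fetched rows as (oid, data) tuples, return the rows that are
    new since last_oid, plus the advanced watermark for the next cycle."""
    new = [(o, d) for o, d in rows if o > last_oid]
    if new:
        last_oid = max(o for o, _ in new)
    return new, last_oid
```

Each cycle the slave replays the new rows and persists the returned watermark, so a crash mid-cycle only re-copies rows, never skips them.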
Erich wrote:
>
> I am setting up a system that processes transactions, and it needs to
> be highly reliable. Once a transaction happens, it can never be
> lost. This means that there needs to be real-time off-site
> replication of data. I'm wondering what's the best way to do this.
>
> One thing that might simplify this system is that I _never_ use UPDATE
> or DELETE. The only thing I ever do with the database is INSERT. So
> this might make replication a little easier.
>
> I think I have a few possibilities:
>
> 1. In my PHP code, I have functions like
> inserttransaction(values...). I could just modify inserttransaction()
> so that it runs the same query (the INSERT) on two or more DB
> servers. This would probably work ok.
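> A minimal sketch of option 1, in Python rather than PHP for brevity:
> `apply_everywhere` and the connection objects are hypothetical stand-ins
> for whatever DB handles the application holds.
>
> ```python
> # Run the same INSERT against every server; collect any failures so the
> # caller can queue and retry them, since a failed copy is out of sync.
>
> def apply_everywhere(connections, statement):
>     failures = []
>     for conn in connections:
>         try:
>             conn.execute(statement)
>         except Exception as exc:  # a real version would log and retry
>             failures.append((conn, exc))
>     return failures
> ```
>
> The weak point of this approach is exactly that failure list: if one
> server is down, the application itself has to remember the statement
> until the server comes back, or the copies silently diverge.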
>
> 2. I could write triggers for all my tables, so that when there is an
> INSERT, the trigger does the same INSERT on the other server. Any
> ideas for an efficient way to do this?
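> For option 2, the trigger declaration itself is the easy half; a sketch
> of generating it (trigger and function names here are invented, and the
> body of `mirror_<table>_row()` is the hard part, since it needs some
> server-side channel to reach the other machine, which is not shown):
>
> ```python
> # Build the CREATE TRIGGER statement for mirroring INSERTs on a table.
> # Uses the old-style EXECUTE PROCEDURE syntax; the trigger function
> # it names must be created separately.
>
> def mirror_trigger_ddl(table):
>     return (
>         f"CREATE TRIGGER {table}_mirror AFTER INSERT ON {table} "
>         f"FOR EACH ROW EXECUTE PROCEDURE mirror_{table}_row()"
>     )
> ```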
>
> 3. Any other tricks?
>
> I don't need mirroring. There will be one master and one or more
> slaves, and the only thing the slaves will do is store backup data.
> The most important thing is that I can't lose a single transaction.
>
> Thanks,
>
> e
Next Message: Tom Lane | 2000-08-01 01:30:10 | Re: postgres perl DBI
Previous Message: Erich | 2000-08-01 00:45:06 | Replication options in Postgres