Re: Move data from DB2 to Postgres any software/solutions/approach?

From: Chris Browne <cbbrowne(at)acm(dot)org>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Move data from DB2 to Postgres any software/solutions/approach?
Date: 2010-06-07 15:47:51
Message-ID: 87d3w2vo7s.fsf@cbbrowne-laptop.afilias-int.info
Lists: pgsql-general

dm(dot)aeqa(at)gmail(dot)com (DM) writes:
> It is not real time; updates every 5 mins should be fine.
>
> But the DB2 database is really busy, and it's very much performance-sensitive.

The book "Scalable Internet Architectures" (by Theo Schlossnagle) has
an example of how to build a trigger-based replication system copying
data from an Oracle database to Postgres.

It basically tracks the primary key values of tuples that were
changed or deleted (which is what the old RServ and eRServer
replication systems for Postgres did), allowing a process to come in
afterwards and pull the changed data over to the replica.
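
For concreteness, here is a minimal sketch of that scheme (the
"orders" table, its integer "order_id" primary key, and the :host
variable placeholders are all made up for illustration):

  -- Change-log table on the DB2 side; each trigger firing records
  -- the primary key of the row that changed, the operation, and when.
  CREATE TABLE replication_log (
      table_name VARCHAR(128) NOT NULL,
      pk_value   INTEGER      NOT NULL,
      op         CHAR(1)      NOT NULL,  -- 'U' = insert/update, 'D' = delete
      changed_at TIMESTAMP    NOT NULL DEFAULT CURRENT TIMESTAMP
  );

  -- The batch process, run every few minutes, asks DB2 which rows
  -- changed since the last successful sync...
  SELECT DISTINCT pk_value, op
    FROM replication_log
   WHERE table_name = 'orders'
     AND changed_at > :last_sync_timestamp;

  -- ...then, on the Postgres side, replaces each changed row with the
  -- current version fetched from DB2 (delete-then-insert keeps it
  -- simple), and simply deletes the rows whose op was 'D'.
  DELETE FROM orders WHERE order_id = :changed_pk;
  INSERT INTO orders (order_id, customer_id, amount)
  VALUES (:changed_pk, :customer_id, :amount);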

I presume that DB2 has enough functionality to let you run triggers to
capture which tuples changed, and when. Given that, it shouldn't be
super-difficult to do what you need.
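
For instance, the triggers that feed the change-log table sketched
above might look roughly like this on the DB2 side (trigger syntax
varies a bit by DB2 version and platform, and "orders"/"order_id"
are still the made-up example names):

  -- Record the key of every updated row (a similar AFTER INSERT
  -- trigger would handle newly inserted rows).
  CREATE TRIGGER orders_log_upd
      AFTER UPDATE ON orders
      REFERENCING NEW AS n
      FOR EACH ROW
      INSERT INTO replication_log (table_name, pk_value, op)
      VALUES ('orders', n.order_id, 'U');

  -- Record the key of every deleted row, so the puller knows to
  -- delete it on the Postgres side as well.
  CREATE TRIGGER orders_log_del
      AFTER DELETE ON orders
      REFERENCING OLD AS o
      FOR EACH ROW
      INSERT INTO replication_log (table_name, pk_value, op)
      VALUES ('orders', o.order_id, 'D');
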
--
select 'cbbrowne' || '@' || 'cbbrowne.com';
http://cbbrowne.com/info/slony.html
"MS apparently now has a team dedicated to tracking problems with
Linux and publicizing them. I guess eventually they'll figure out
this back fires... ;)" -- William Burrow <aa126(at)DELETE(dot)fan(dot)nb(dot)ca>
