Thank you for the answer. Good to know about this EnterpriseDB feature.
I'll go ahead using pgloader.
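For the record, here is a minimal sketch of the CSV-load side of that pipeline in Python, assuming psycopg2 and a delimited dump produced by ociuldr. It uses plain COPY rather than pgloader or pg_bulkload, and the connection string, file name, and table name are only placeholders:

    # Minimal sketch: stream a CSV dump into PostgreSQL with COPY.
    # The DSN, file name, and table name below are hypothetical.
    import psycopg2

    conn = psycopg2.connect("dbname=target user=postgres")  # hypothetical DSN
    cur = conn.cursor()
    with open("big_table.csv", "r") as f:                    # dump from ociuldr
        cur.copy_expert("COPY big_table FROM STDIN WITH CSV", f)
    conn.commit()
    conn.close()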
On Sat, Apr 26, 2008 at 10:14 PM, Jonah H. Harris <jonah(dot)harris(at)gmail(dot)com> wrote:
> On Sat, Apr 26, 2008 at 9:25 AM, Adonias Malosso <malosso(at)gmail(dot)com> wrote:
> > I'd like to know the best practice for loading a 70 million row, 101 column
> > table from Oracle to PostgreSQL.
> The fastest and easiest method would be to dump the data from Oracle
> into CSV/delimited format using something like ociuldr
> (http://www.anysql.net/en/ociuldr.html) and load it back into PG using
> pg_bulkload (which is a helluva lot faster than COPY). Of course, you
> could try other things as well... such as setting up Oracle's generic
> connectivity to PG and inserting the data into a PG table over the
> database link.
> Similarly, while I hate to see shameless self-plugs in the community,
> the *fastest* method you could use is dblink_ora_copy, contained in
> EnterpriseDB's PG+ Advanced Server; it uses an optimized OCI
> connection to COPY the data directly from Oracle into Postgres, which
> also saves you the intermediate step of dumping the data.
> Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324
> EnterpriseDB Corporation | fax: 732.331.1301
> 499 Thornall Street, 2nd Floor | jonah(dot)harris(at)enterprisedb(dot)com
> Edison, NJ 08837 | http://www.enterprisedb.com/
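As a rough illustration of the direct-transfer idea mentioned above (pulling rows from Oracle and feeding them to COPY with no intermediate file on disk), here is a sketch assuming cx_Oracle and psycopg2. The connection strings, source query, and target table are hypothetical, and value escaping is omitted for brevity:

    # Sketch: batch rows from Oracle straight into PostgreSQL's COPY.
    # Connection strings, source query, and target table are hypothetical.
    import io
    import cx_Oracle
    import psycopg2

    BATCH = 50000

    ora = cx_Oracle.connect("scott/tiger@orcl")
    pg = psycopg2.connect("dbname=target user=postgres")
    ora_cur = ora.cursor()
    pg_cur = pg.cursor()

    ora_cur.execute("SELECT * FROM big_table")
    while True:
        rows = ora_cur.fetchmany(BATCH)
        if not rows:
            break
        buf = io.StringIO()
        for row in rows:
            # Tab-separated text, \N for NULLs, as COPY's default text format expects.
            # Real data would also need tabs, newlines, and backslashes escaped.
            buf.write("\t".join(r"\N" if v is None else str(v) for v in row) + "\n")
        buf.seek(0)
        pg_cur.copy_from(buf, "big_table")
    pg.commit()
    ora.close()
    pg.close()

Batching keeps memory bounded; as noted above, pg_bulkload or dblink_ora_copy should still be faster than plain COPY.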