Re: Copying large tables with DBLink

From: Michael Fuhr <mike(at)fuhr(dot)org>
To: Chris Hoover <revoohc(at)sermonaudio(dot)com>
Cc: PostgreSQL Admin <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Copying large tables with DBLink
Date: 2005-03-24 19:50:59
Message-ID: 20050324195059.GA13146@winnie.fuhr.org
Lists: pgsql-admin

On Thu, Mar 24, 2005 at 01:59:44PM -0500, Chris Hoover wrote:
>
> Has anyone had problems with memory exhaustion and dblink? We were
> trying to use dblink to convert our databases to our new layout, and had
> our test server lock up several times when trying to copy a table that
> was significantly larger than our memory and swap.

Hmmm...doesn't dblink use libpq, and doesn't libpq fetch the entire
result set before doing anything with it? If so, then that could
explain the memory exhaustion.

> Basically we were doing an insert into <table> select * from
> dblink('dbname=olddb','select * from large_table') as t_large_table(table
> column listing);
>
> Does anyone know of a way around this?

How about using pg_dump to dump the original table and restore it
into the new table? If you just want the table's contents without
the table definition then you could use the -a (--data-only) option.
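Something along these lines should do it (a rough sketch; I'm assuming
the old and new databases are named olddb and newdb and that you only
want that one table, which is what -t restricts the dump to):

    pg_dump -a -t large_table olddb | psql newdb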

Another possibility would be to write a function that uses a cursor:
dblink_open() and a loop that calls dblink_fetch() until you reach
the end of the result set. I think that wouldn't have a memory
exhaustion problem (but test it to be sure).
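Untested sketch of what I mean, using an unnamed dblink connection and
batches of 10000 rows; it assumes the source table is
large_table(id integer, data text) and the target new_table has the
same columns, so adjust the column lists and connection string to your
schema (the $$ dollar quoting needs 8.0; on older versions quote the
function body with single quotes):

    CREATE OR REPLACE FUNCTION copy_large_table() RETURNS void AS $$
    DECLARE
        batch integer;
    BEGIN
        -- open a cursor on the remote database
        PERFORM dblink_connect('dbname=olddb');
        PERFORM dblink_open('cur', 'SELECT id, data FROM large_table');
        LOOP
            -- fetch and insert 10000 rows at a time so only one
            -- batch is ever held in memory on this side
            INSERT INTO new_table (id, data)
                SELECT id, data
                FROM dblink_fetch('cur', 10000) AS t(id integer, data text);
            GET DIAGNOSTICS batch = ROW_COUNT;
            EXIT WHEN batch = 0;
        END LOOP;
        PERFORM dblink_close('cur');
        PERFORM dblink_disconnect();
    END;
    $$ LANGUAGE plpgsql;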

--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
