Copying large tables with DBLink

From: "Chris Hoover" <revoohc(at)sermonaudio(dot)com>
To: "PostgreSQL Admin" <pgsql-admin(at)postgresql(dot)org>
Subject: Copying large tables with DBLink
Date: 2005-03-24 18:59:44
Message-ID: 42430E20.1010600@sermonaudio.com
Lists: pgsql-admin

Has anyone had problems with memory exhaustion and dblink? We were
trying to use dblink to convert our databases to our new layout, and had
our test server lock up several times when trying to copy a table that
was significantly larger than our memory and swap.

Basically, what we were doing was:

    INSERT INTO <table>
    SELECT * FROM dblink('dbname=olddb', 'select * from large_table')
        AS t_large_table(<table column listing>);
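
One approach we are wondering about (just a sketch, assuming the cursor functions documented for contrib/dblink, namely dblink_connect, dblink_open, dblink_fetch, and dblink_close, are available and behave this way in our version; the cursor name and the 10000-row batch size are placeholders) is to open a remote cursor and pull the rows over in batches, so the full result set never has to be materialized in memory at once:

    -- open a persistent connection and a remote cursor on the old database
    SELECT dblink_connect('dbname=olddb');
    SELECT dblink_open('cur_large', 'SELECT * FROM large_table');

    -- repeat this INSERT until it copies zero rows
    INSERT INTO <table>
    SELECT * FROM dblink_fetch('cur_large', 10000)
        AS t_large_table(<table column listing>);

    -- clean up
    SELECT dblink_close('cur_large');
    SELECT dblink_disconnect();

We have not confirmed that this actually avoids the memory problem, though.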

Does anyone know of a way around this? The problem we were trying to
solve is that the new columns being populated during the conversion are
causing our databases to double in size. Reclaiming that space requires
lengthy VACUUM FULL runs, which makes for a very long conversion time.
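
On the space side, one thing we may try (again just a sketch; the column names and the expression for the new column are made up): compute the new columns in the conversion INSERT itself instead of UPDATEing the rows after they are copied. Since an UPDATE leaves a dead copy of every row behind, doing it in one pass should leave nothing for VACUUM FULL to reclaim:

    -- combined with the batched fetch above, repeated until no rows copy
    INSERT INTO <table> (col_a, col_b, new_col)
    SELECT t.col_a, t.col_b, <expression for new_col>
    FROM dblink_fetch('cur_large', 10000) AS t(col_a integer, col_b text);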

We are very concerned that this lockup could happen on our production
boxes, since we have several databases that are very large and probably
have tables bigger than the memory on those systems.

Anyway, any ideas on how to get around this, or how we might speed the
conversion up and not use so much space, would be appreciated.

Chris

PG 7.3.4
