Re: Copying large tables with DBLink

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Chris Hoover" <revoohc(at)sermonaudio(dot)com>
Cc: "PostgreSQL Admin" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Copying large tables with DBLink
Date: 2005-03-24 19:40:22
Message-ID: 5020.1111693222@sss.pgh.pa.us
Lists: pgsql-admin

"Chris Hoover" <revoohc(at)sermonaudio(dot)com> writes:
> Has anyone had problems with memory exhaustion and dblink? We were
> trying to use dblink to convert our databases to our new layout, and had
> our test server lock up several times when trying to copy a table that
> was significantly larger than our memory and swap.

You're not going to be able to do that with dblink, nor any other
set-returning function, because the current implementation of SRFs
always materializes the entire function result in temporary memory/swap.

Consider something like
pg_dump -t srctab srcdb | psql destdb
instead.
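
(If dblink must be used anyway, its cursor functions can bound memory use by fetching the table in batches, so only one batch is materialized at a time. A hedged sketch — the connection string, table, column list, and batch size below are illustrative assumptions, not from the thread:)

```sql
-- Sketch: batched copy via dblink's cursor interface.
-- Only each 10000-row batch is materialized, not the whole table.
SELECT dblink_connect('dbname=srcdb');
SELECT dblink_open('cur', 'SELECT id, payload FROM srctab');

-- Repeat this INSERT until dblink_fetch returns no rows:
INSERT INTO desttab
  SELECT * FROM dblink_fetch('cur', 10000) AS t(id integer, payload text);

SELECT dblink_close('cur');
SELECT dblink_disconnect();
```

Each dblink_fetch call is itself a set-returning function, so each batch is still materialized — but keeping the batch small keeps the working set bounded, unlike a single dblink() call over the whole table.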

regards, tom lane
