From: Joe Conway <mail(at)joeconway(dot)com>
To: Chris Hoover <revoohc(at)sermonaudio(dot)com>
Cc: PostgreSQL Admin <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Copying large tables with DBLink
Date: 2005-03-24 19:21:10
Message-ID: 42431326.2010908@joeconway.com
Lists: pgsql-admin
Chris Hoover wrote:
> Has anyone had problems with memory exhaustion and dblink? We were
> trying to use dblink to convert our databases to our new layout, and had
> our test server lock up several times when trying to copy a table that
> was significantly larger than our memory and swap.
> Basically, we were doing an insert into <table> select * from
> dblink('dbname=olddb','select * from large_table') as t_large_table(table
> column listing);
>
> Does anyone know of a way around this?
dblink just uses libpq, and libpq reads the entire result set into memory.
There is no direct way around that, as far as I'm aware. You could,
however, use a cursor and fetch/manipulate rows in more reasonably
sized batches.
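For example, here's a sketch using dblink's cursor functions
(dblink_open/dblink_fetch/dblink_close); the connection string, cursor
name, target table, batch size, and column definitions below are all
placeholders:

  -- open a connection and a cursor on the remote database
  SELECT dblink_connect('dbname=olddb');
  SELECT dblink_open('cur_large', 'SELECT * FROM large_table');

  -- pull one batch (here 10000 rows) and insert it locally;
  -- repeat this statement until it inserts zero rows
  INSERT INTO new_table
    SELECT *
    FROM dblink_fetch('cur_large', 10000)
         AS t_large_table(id integer, payload text);  -- placeholder columns

  -- clean up
  SELECT dblink_close('cur_large');
  SELECT dblink_disconnect();

Each dblink_fetch call only materializes one batch locally, so memory
use stays bounded by the batch size; you could wrap the INSERT in a
PL/pgSQL loop that exits once GET DIAGNOSTICS reports zero rows
inserted.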
HTH,
Joe