Large PG_DUMPs going into memory?

From: "Mike Rogers" <temp6453(at)hotmail(dot)com>
To: <pgsql-bugs(at)postgresql(dot)org>
Subject: Large PG_DUMPs going into memory?
Date: 2002-05-06 02:08:37
Message-ID: OE29d92bXCH5n00zepD00003ac2@hotmail.com
Lists: pgsql-bugs pgsql-patches

I am dumping a rather large PostgreSQL database with roughly 895,000 rows.
Running 'pg_dump -d database -u --table=tbl -f ./tbl.sql' produces a 425MB file.
The problem is that whether the database is remote or local [I have tried
running it on the DB server itself with the same result], pg_dump takes up a good
half gigabyte of RAM for the duration of the dump. Why does it load all of
this into memory on the client machine rather than write it out as it arrives?
On one server this has caused the machine to swap a good 250MB out to disk,
driving the system load very high. The database server seems to handle the dump
like any other request, but the client machine running pg_dump struggles.
My main question is not so much what pg_dump does as why it is done this way.
Wouldn't it be more efficient to write the rows out to the file, or to STDOUT if
-f isn't specified, as they arrive, rather than load the entire result set into
memory? It isn't sorting the data or anything.

Any help would be appreciated.
--
Mike
