
Large PG_DUMPs going into memory?

From: "Mike Rogers" <temp6453(at)hotmail(dot)com>
To: <pgsql-bugs(at)postgresql(dot)org>
Subject: Large PG_DUMPs going into memory?
Date: 2002-05-06 02:08:37
Message-ID: OE29d92bXCH5n00zepD00003ac2@hotmail.com
Lists: pgsql-bugs, pgsql-patches
    I am dumping a rather large PostgreSQL table with 895,000 rows or so;
running 'pg_dump -d database -u --table=tbl -f ./tbl.sql' produces a 425MB
file.  The problem is that whether the database is remote or local [I have
tried running it on the DB server itself with the same result], pg_dump takes
up a good half gigabyte of RAM for the duration of the dump.  Why does it
load all of this into memory on the client machine rather than write it out
as it arrives?  On one server this has caused the machine to swap a good
250MB out to disk, driving the system load very high.  The database server
seems to handle the dump like any other request, but the client machine
running pg_dump struggles.
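    For comparison (and this is only a guess at what is relevant), a plain
COPY of the same table streams rows to the client as the server sends them,
so I would not expect it to build up anything like this footprint:

    -- hypothetical comparison, run against the same database;
    -- COPY writes each row out as it arrives instead of collecting
    -- a whole result set on the client first
    COPY tbl TO STDOUT;

From the shell that would be something like
'psql -c "COPY tbl TO STDOUT" database > ./tbl.copy'.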
    My main question is not what pg_dump does, but why it is done this way.
Isn't it more efficient to stream the output to the file, or to STDOUT if -f
isn't specified, rather than load the entire result set into memory first?
It's not sorting the data or anything like that.
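    The kind of streaming I have in mind is what a cursor already gives you
at the SQL level; this is just a sketch against my 'tbl' table, not a claim
about how pg_dump is actually implemented:

    BEGIN;
    DECLARE tbl_cur CURSOR FOR SELECT * FROM tbl;
    -- fetch a bounded batch at a time, so the client never holds more
    -- than 1000 rows at once; repeat the FETCH until it returns no rows
    FETCH 1000 FROM tbl_cur;
    CLOSE tbl_cur;
    COMMIT;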

Any help would be appreciated.
--
Mike


