From: Martijn van Oosterhout <kleptog(at)svana(dot)org>
To: pgsql-patches(at)postgresql(dot)org
Subject: [PATCH] Prevent pg_dump running out of memory
Date: 2001-08-27 14:03:51
Message-ID: 20010828000351.C32309@svana.org
Lists: pgsql-patches
Here's a patch to clean up a bug in pg_dump whose sole effect is to confuse
users. Why should -d crash pg_dump just because you have a big table? I
couldn't find this listed anywhere, not even on the TODO list, so if some
library change has already fixed it, I apologise.
This patch replaces the simple SELECT * with a cursor that fetches 1,000 rows
at a time. I chose 1,000 because it was small enough to test with, but I
think realistically 10,000 wouldn't be too much.
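To illustrate the idea behind the patch (this is just a sketch of the batching concept, not the actual libpq/C code in the patch, and the function name here is made up): rather than materializing the whole result set at once, the dump loop repeatedly issues FETCH for a fixed number of rows, so peak memory is bounded by the batch size instead of the table size.

```python
def fetch_in_batches(rows, batch_size=1000):
    """Yield successive lists of at most batch_size rows from an iterable,
    mimicking repeated 'FETCH 1000 FROM cursor' calls against a table."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch       # one full FETCH worth of rows
            batch = []
    if batch:
        yield batch           # the final, possibly short, FETCH

# A "table" of 2,500 rows is processed as three batches: 1000, 1000, 500.
sizes = [len(b) for b in fetch_in_batches(range(2500))]
print(sizes)  # → [1000, 1000, 500]
```

In the patch itself the same shape is achieved with DECLARE ... CURSOR FOR SELECT * followed by a FETCH loop, ending when a fetch returns fewer rows than requested.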
Also, it seems there is no regression test for pg_dump. Is this intentional,
or has no one come up with a good way to test it?
http://svana.org/kleptog/pgsql/pgsql-pg_dump.patch (also attached)
Please CC any replies.
P.S. For those people waiting for the timing patch, I'm just dealing with a
little issue involving getting a flag from ExplainOneQuery to ExecInitNode.
I think I may have an answer but it needs testing.
--
Martijn van Oosterhout <kleptog(at)svana(dot)org>
http://svana.org/kleptog/
> It would be nice if someone came up with a certification system that
> actually separated those who can barely regurgitate what they crammed over
> the last few weeks from those who command secret ninja networking powers.
Attachment: pgsql-pg_dump.patch (text/plain, 5.1 KB)