Just a patch to clean up a bug in pg_dump whose sole effect is to confuse
users. Why should -d crash pg_dump just because you have a big table? I
couldn't find this listed anywhere, not even on the TODO list, so if some
change to the library has already fixed it, I apologise.
This patch replaces the simple SELECT * with a cursor that fetches 1,000 rows
at a time. I chose 1,000 because it was small enough to test, but I think
realistically 10,000 wouldn't be too much.
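The idea is the standard DECLARE/FETCH pattern: instead of pulling the whole table into client memory with one SELECT *, the dump loop opens a cursor and drains it in fixed-size chunks. A minimal sketch of the batching logic, independent of libpq (the cursor name and batch size here are illustrative, not taken from the patch):

```python
# Simulated version of the cursor-batching loop. In the real patch this is
# done in C via libpq, roughly:
#   DECLARE dump_cur CURSOR FOR SELECT * FROM some_table;
#   FETCH 1000 FROM dump_cur;   -- repeat until zero rows come back
#   CLOSE dump_cur;

def fetch_in_batches(rows, batch_size=1000):
    """Yield successive slices of `rows`, as repeated FETCH calls would."""
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

if __name__ == "__main__":
    table = list(range(2500))          # stand-in for a big table
    sizes = [len(b) for b in fetch_in_batches(table)]
    print(sizes)                       # [1000, 1000, 500]
```

The point is that peak client memory is bounded by the batch size rather than by the table size, which is what stops -d from blowing up on large tables.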
Also, it seems there is no regression test for pg_dump. Is this intentional,
or has no one come up with a good way to test it?
http://svana.org/kleptog/pgsql/pgsql-pg_dump.patch (also attached)
Please CC any replies.
P.S. For those people waiting for the timing patch, I'm just dealing with a
little issue involving getting a flag from ExplainOneQuery to ExecInitNode.
I think I may have an answer but it needs testing.
Martijn van Oosterhout <kleptog(at)svana(dot)org>
> It would be nice if someone came up with a certification system that
> actually separated those who can barely regurgitate what they crammed over
> the last few weeks from those who command secret ninja networking powers.