Fix O(N^2) behavior in pg_dump when many objects are in dependency loops.
Combining the loop workspace with the record of already-processed objects
might have been a cute trick, but it behaves horridly if there are many
dependency loops to repair: the time spent in the first step of findLoop()
grows as O(N^2). Instead use a separate flag array indexed by dump ID,
which we can check in constant time. The length of the workspace array
is now never more than the actual length of a dependency chain, which
should be reasonably short in all cases of practical interest. The code
is noticeably easier to understand this way, too.
Per gripe from Mike Roest. Since this is a longstanding performance bug,
backpatch to all supported versions.
src/bin/pg_dump/pg_dump_sort.c | 119 ++++++++++++++++++++--------------------
 1 file changed, 59 insertions(+), 60 deletions(-)
pgsql-committers by date