Bug report #53

From: "rob" <rob(at)cabrion(dot)com>
To: <pgsql-bugs(at)postgresql(dot)org>
Cc: <vev(at)hub(dot)org>
Subject: Bug report #53
Date: 2000-10-27 13:27:43
Message-ID: 003701c04019$b07b3b00$4100fd0a@cabrion.org
Lists: pgsql-bugs

I am confirming this bug, though it appears that the order of creation is
not the issue. If you split the dump file into schema statements and COPY
data statements, and run the two halves sequentially as separate jobs,
everything works just fine.

The issue appears to be that CREATE SEQUENCE (and maybe CREATE TABLE)
statements aren't getting "committed" in a timely manner (i.e. before the
COPY starts).

Here is a copy of #53's description:

Dumping a table and then reloading it into a new database fails if there
are sequences involved, because the sequence isn't created before the
COPY. After the COPY, the sequence's value isn't updated.
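
For reference, the relevant part of a dump with a serial column looks
something like this (the table and column names are made up, and the exact
layout depends on the pg_dump version):

CREATE SEQUENCE "things_id_seq";
CREATE TABLE "things" (
        "id" int4 DEFAULT nextval('things_id_seq') NOT NULL,
        "name" text
);
COPY "things" FROM stdin;
1	widget
2	gadget
\.
SELECT setval ('things_id_seq', 2);

That matches the symptoms described: fed back in as a single job, the COPY
seems to start before the CREATE SEQUENCE/CREATE TABLE have taken effect,
and the setval at the end doesn't leave the sequence with the right value.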

Here is the Perl code I use to split the dump for restoring:

die 'supply dump file as first param' if not $ARGV[0];

open IN,   "<$ARGV[0]"            or die "can't open $ARGV[0]: $!";
open OUT,  ">$ARGV[0].schema.sql" or die "can't open $ARGV[0].schema.sql: $!";
open OUT2, ">$ARGV[0].data.sql"   or die "can't open $ARGV[0].data.sql: $!";

$junk = <IN>;                 # drop first line with user/pass stuff
while (<IN>) {
    $flag = 1 if /^COPY/;     # everything from the first COPY onward is data
    if ($flag) {
        print OUT2 $_;
    } else {
        print OUT $_;
    }
}
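
(If the dump file is called dump.sql, that leaves dump.sql.schema.sql and
dump.sql.data.sql, which I then run as two separate psql jobs, schema first,
e.g. psql mydb < dump.sql.schema.sql followed by psql mydb < dump.sql.data.sql.)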

Hope this helps isolate the issue.

--rob
