Problem with pg_upgrade

From: Payal Singh <payals1(at)umbc(dot)edu>
To: pgsql-bugs(at)postgresql(dot)org
Subject: Problem with pg_upgrade
Date: 2012-07-05 15:20:44
Message-ID: CAK4ounw9CmG0_vUwwtsr2QXXhOJJ8E9g5mw-vuXAo57KSTQFDg@mail.gmail.com
Lists: pgsql-bugs

Hello,

I am trying to use pg_upgrade to upgrade data from 9.1.4 to 9.2beta2.
Although the upgrade completes successfully, vacuumdb --all --analyze-only
then fails with an error. I tried upgrading two binary backups of the same
production database, taken on different days, and both returned exactly the
same error. The output I got for both trials is as follows:

First trial:

postgres(at)sparedb1:/data/pg$ sh analyze_new_cluster.sh
This script will generate minimal optimizer statistics rapidly
so your system is usable, and then gather statistics twice more
with increasing accuracy. When it is done, your system will
have the default level of optimizer statistics.

If you have used ALTER TABLE to modify the statistics target for
any tables, you might want to remove them and restore them after
running this script because they will delay fast statistics generation.

If you would like default statistics as quickly as possible, cancel
this script and run:
vacuumdb --all --analyze-only

Generating minimal optimizer statistics (1 target)
--------------------------------------------------
vacuumdb: vacuuming database "functionx"
vacuumdb: vacuuming database "postgres"
vacuumdb: vacuuming of database "postgres" failed: ERROR: could not access status of transaction 46675125
DETAIL: Could not open file "pg_clog/002C": No such file or directory.

The server is now available with minimal optimizer statistics.
Query performance will be optimal once this script completes.

Generating medium optimizer statistics (10 targets)
---------------------------------------------------
vacuumdb: vacuuming database "functionx"
vacuumdb: vacuuming database "postgres"
vacuumdb: vacuuming of database "postgres" failed: ERROR: could not access status of transaction 46675125
DETAIL: Could not open file "pg_clog/002C": No such file or directory.

Generating default (full) optimizer statistics (100 targets?)
-------------------------------------------------------------
vacuumdb: vacuuming database "functionx"
vacuumdb: vacuuming database "postgres"
vacuumdb: vacuuming of database "postgres" failed: ERROR: could not access status of transaction 46675125
DETAIL: Could not open file "pg_clog/002C": No such file or directory.

Done
postgres(at)sparedb1:/data/pg$
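
If I have the clog layout right (I am assuming each pg_clog segment file
covers 32 pages of 8192 bytes with 4 transaction statuses per byte, i.e.
1,048,576 transactions per segment), transaction 46675125 does belong to
segment 002C, which is the file vacuumdb says it cannot open:

  printf '%04X\n' $(( 46675125 / 1048576 ))   # -> 002C, segment that should hold xid 46675125
  ls /data/pg/9.2/pg_clog                      # lists the segments the new cluster actually has

So the failure seems to come down to that one clog segment not being
present in the new cluster.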

Second trial:

postgres(at)sparedb1:/data/pg$ /opt/pgbrew/9.2beta2}/bin/pg_upgrade -d /data/pg/9.1 -D /data/pg/9.2 -b /opt/pgbrew/9.1.4/bin -B /opt/pgbrew/9.2beta2}/bin
Performing Consistency Checks
-----------------------------
Checking current, bin, and data directories ok
Checking cluster versions ok
Checking database user is a superuser ok
Checking for prepared transactions ok
Checking for reg* system OID user data types ok
Checking for contrib/isn with bigint-passing mismatch ok
Creating catalog dump ok
Checking for prepared transactions ok
Checking for presence of required libraries ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster ok
Freezing all rows on the new cluster ok
Deleting new commit clogs ok
Copying old commit clogs to new server ok
Setting next transaction ID for new cluster ok
Resetting WAL archives ok
Setting frozenxid counters in new cluster ok
Creating databases in the new cluster ok
Adding support functions to new cluster ok
Restoring database schema to new cluster ok
Removing support functions from new cluster ok
Copying user relation files ok
Setting next OID for new cluster ok
Creating script to analyze new cluster ok
Creating script to delete old cluster ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
delete_old_cluster.sh
postgres(at)sparedb1:/data/pg$ echo $PATH
/opt/pgbrew/9.2beta2}/bin:/home/postgres/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/opt/dell/srvadmin/bin
postgres(at)sparedb1:/data/pg$ ls
9.1  analyze_new_cluster.sh  delete_old_cluster.sh  walarchive
9.2  backups  loadable_libraries.txt
postgres(at)sparedb1:/data/pg$ /opt/pgbrew/9.2beta2}/bin/pg_ctl status
pg_ctl: server is running (PID: 17421)
/opt/pgbrew/9.2beta2}/bin/postgres "-D" "/data/pg/9.2"
postgres(at)sparedb1:/data/pg$ vacuumdb --all --analyze-only
vacuumdb: vacuuming database "functionx"
vacuumdb: vacuuming database "postgres"
vacuumdb: vacuuming of database "postgres" failed: ERROR: could not access status of transaction 46675125
DETAIL: Could not open file "pg_clog/002C": No such file or directory.
postgres(at)sparedb1:/data/pg$
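
If it would help with diagnosis, I can also report the frozen-xid counters
in the new cluster. A sketch of what I would run (assuming the standard
pg_database/pg_class catalog columns) is:

  psql -d postgres -c "SELECT datname, datfrozenxid, age(datfrozenxid) FROM pg_database ORDER BY datname;"
  psql -d postgres -c "SELECT relname, relfrozenxid, age(relfrozenxid) FROM pg_class WHERE relkind = 'r' ORDER BY age(relfrozenxid) DESC LIMIT 10;"

That should show whether anything in the upgraded cluster still references
transactions older than the missing pg_clog segment.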

Regards,
Payal Singh
