Steven Rosenstein <srosenst(at)us(dot)ibm(dot)com> writes:
> I did as instructed, and fired up the standalone backend. I then started
> VACUUM. About four days later, the standalone backend terminated with the
> WARNING: terminating connection because of crash of another server process
> DETAIL: The postmaster has commanded this server process to roll back the
> current transaction and exit, because another server process exited
> abnormally and possibly corrupted shared memory.
> HINT: In a moment you should be able to reconnect to the database and
> repeat your command.
> CONTEXT: writing block 465 of relation 1663/16384/863912
Ugh. Something sent the standalone backend a SIGQUIT signal. You need
to find out what did that.
> I used lsof to monitor which files the backend was actually working on. It
> took two of the four days for it to vacuum a single table with 43
> one-gigabyte extents. I have one table with over 300 extents. I'm looking
> at a vacuum process which can ultimately take weeks (if not months) to
> complete.
Yipes. You are just using plain VACUUM, right, not VACUUM FULL?
Have you checked that vacuum_cost_delay isn't enabled?
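(If cost-based vacuum delay is on, it throttles VACUUM's I/O and can stretch a big table out enormously. A quick sanity check from any session, including the standalone backend's prompt, might look like this; a value of 0 disables the delay:)

```sql
-- See whether cost-based vacuum delay is throttling VACUUM (0 = disabled)
SHOW vacuum_cost_delay;

-- Turn it off for this session before retrying the VACUUM
SET vacuum_cost_delay = 0;
VACUUM;
```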
> Bottom line. Is there *any* way of faking out the 1 million transaction
> limit which prevents the postmaster from running, long enough for me to use
> pg_dump to rescue the data?
In 8.1 those limits are all hard-wired; you'd need to modify
SetTransactionIdLimit() in src/backend/access/transam/varsup.c
and recompile. Might be worth doing, if you think these tables
have been bloated by a complete lack of vacuuming.
regards, tom lane