From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: andrea gelmini <andrea(dot)gelmini(at)linux(dot)it>
andrea gelmini <andrea(dot)gelmini(at)linux(dot)it> writes:
> vacuumdb -a -v -f -z
> NOTICE: Analyzing author
> FATAL 2: open of /home/postgres/db/pg_clog/0000 failed: No such file or directory
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> connection to server was lost
> vacuumdb: vacuum freecddb failed
> I've got errors like this using vacuum both with and without '-f', both
> with the db in use and with the db doing nothing but the vacuum.
> Sometimes I've gotten duplicate-key errors (an already-known problem), but
> that time I didn't cut & paste the error, believing it was my own fault
> through some mistake in my schema.
> Now, I can investigate more deeply, but I need your opinion on whether it
> is worth pursuing. As I said, it takes a long time to reproduce this (and
> maybe I'm doing something wrong).
Yup, it looks like a bug to me. Apparently a CLOG segment has been
recycled too soon. We just found a bug of that ilk in sequence
processing, but VACUUM doesn't touch sequences, so apparently you have
a different bug. Please submit details.
Since CLOG segments normally hold a million transactions each, it'll
necessarily take a long time to reproduce any problem of this kind.
If you don't mind doing an initdb, you could reduce the CLOG segment
size to make it easier to try to reproduce the problem. In the source
file where it is defined, change
#define CLOG_XACTS_PER_SEGMENT 0x100000
to 0x10000 (I think that's about as small as you can make it without
breaking anything). That gives you a shot at a problem every 64K
transactions instead of every million.
regards, tom lane