Re: Fwd: Postgres update

From: Denis Perchine <dyp(at)perchine(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Fwd: Postgres update
Date: 2000-07-29 05:03:54
Message-ID: 0007291209400V.18993@dyp.perchine.com
Lists: pgsql-hackers

Hello Tom,

> >>>> NOTICE: FlushRelationBuffers(pg_largeobject, 515): block 504 is referenced (private 0, global 1)
> >>>> FATAL 1: VACUUM (repair_frag): FlushRelationBuffers returned -2
>
> > I get this after the following:
>
> > NOTICE: !!! write error seems permanent !!!
> > NOTICE: !!! now kill all backends and reset postmaster !!!
> > ERROR: cannot write block 175 of ix_q_b_1 [webmailstation] blind
> > pqReadData() -- backend closed the channel unexpectedly.
>
> Oh, that's interesting. The NOTICEs are coming out of AbortBufferIO()
> which is invoked during error processing (in other words, I bet the
> ERROR actually happened first. It's a libpq artifact that the NOTICEs
> are presented first on the client side. If you are keeping the
> postmaster log output you could confirm the sequence of events by
> looking in the log). The backend shutdown is then forced by
> AbortBufferIO().
>
> AbortBufferIO() seems rather badly designed, but given that it forces
> a database-wide restart, I'm not sure how this could relate to the
> later FlushRelationBuffers problem. The restart should get rid of the
> old buffers anyway.
>
> > This was the command which should create unique index.
>
> Was the index on the same table that FlushRelationBuffers later had
> trouble with (ie, "pg_largeobject")?
>
> What version are you running, anyway? There is no "pg_largeobject"
> in either 6.5 or current AFAIK.

:-))) Sorry. Just to put the pieces together...
I am running a modified 7.0.2: I applied my large-object patch (the one that stores the files in
hashed directories). That is why you see pg_largeobject. But that is not the issue here; the patch
touches only large-object related code.

I got the vacuum error first on pg_largeobject. Later the index was automatically recreated (I have
a cron job for that) and everything was fine again.

And while I was replying to your mail I got an error on the queue table. It started when I noticed
that the postmaster was eating up memory. I shut it down and looked at the log: the last query was
an UPDATE on the queue table. I tried to vacuum the table and got the same error as last time.
Then I dropped the index, recreated it, and everything was fine again.
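
Just for reference, the recovery amounted to something like this (a sketch only; the real column
list behind ix_q_b_1 on the queue table does not appear in this thread, so the column name below
is made up):

    -- vacuum fails with the FlushRelationBuffers error
    VACUUM VERBOSE queue;

    -- drop the damaged index and build it again
    DROP INDEX ix_q_b_1;
    CREATE UNIQUE INDEX ix_q_b_1 ON queue (b);  -- column name is an assumption

    -- vacuum now completes cleanly
    VACUUM VERBOSE queue;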

Later, when I went through the cron reports (I drop and recreate the indices each day), I found
the error message from recreating the index on this table. That is all.
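
The nightly rebuild itself is just a cron entry feeding a script of such DROP/CREATE INDEX
statements to psql, roughly like this (the time, script path, and script contents are assumptions;
only the database name webmailstation appears in the log above):

    # crontab entry (illustrative)
    15 3 * * *  psql -d webmailstation -f /home/dyp/recreate_indices.sql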

--
Sincerely Yours,
Denis Perchine

----------------------------------
E-Mail: dyp(at)perchine(dot)com
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
----------------------------------
