Re: error_severity of brin work item

From: Justin Pryzby <pryzby(at)telsasoft(dot)com>
To: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: error_severity of brin work item
Date: 2020-11-25 17:23:56
Message-ID: 20201125172356.GF24052@telsasoft.com
Lists: pgsql-hackers

On Mon, Nov 23, 2020 at 04:39:57PM -0300, Alvaro Herrera wrote:
> I think this formulation (attached v3) has fewer moving parts.
>
> However, now that I did that, I wonder if this is really the best
> approach to solve this problem. Maybe instead of doing this at the BRIN
> level, it should be handled at the autovac level, by having the worker
> copy the work-item to local memory and remove it from the shared list as
> soon as it is in progress. That way, if *any* error occurs while trying
> to execute it, it will go away instead of being retried for all
> eternity.
>
> Preliminary patch for that attached as autovacuum-workitem.patch.
>
> I would propose to clean that up to apply instead of your proposed fix.
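
If I understand the autovacuum-level idea correctly, the work-item loop
in do_autovacuum() would do something like this (my rough sketch of the
concept, not the attached patch):

    LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
    for (i = 0; i < NUM_WORKITEMS; i++)
    {
        AutoVacuumWorkItem *workitem = &AutoVacuumShmem->av_workItems[i];
        AutoVacuumWorkItem workitem_copy;

        if (!workitem->avw_used || workitem->avw_active ||
            workitem->avw_database != MyDatabaseId)
            continue;

        /*
         * Copy the item to local memory and free the shared slot before
         * executing it, so that if execution fails the item is already
         * gone and cannot be picked up again.
         */
        workitem_copy = *workitem;
        workitem->avw_used = false;

        LWLockRelease(AutovacuumLock);
        perform_work_item(&workitem_copy);
        LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
    }
    LWLockRelease(AutovacuumLock);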

I don't understand why you said it would be retried for all eternity;
I think the PG_TRY/PG_CATCH block in perform_work_item() already avoids that.
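
For reference, the relevant part of perform_work_item() in autovacuum.c
looks roughly like this (simplified; the error-context and
memory-context handling is elided):

    PG_TRY();
    {
        /* have at it */
        switch (workitem->avw_type)
        {
            case AVW_BRINSummarizeRange:
                DirectFunctionCall2(brin_summarize_range,
                                    ObjectIdGetDatum(workitem->avw_relation),
                                    Int64GetDatum((int64) workitem->avw_blockNumber));
                break;
            default:
                elog(WARNING, "unrecognized work item found: type %d",
                     workitem->avw_type);
                break;
        }
    }
    PG_CATCH();
    {
        /* report the error, clean up, and return normally */
        HOLD_INTERRUPTS();
        EmitErrorReport();
        AbortOutOfAnyTransaction();
        FlushErrorState();
        RESUME_INTERRUPTS();
    }
    PG_END_TRY();

Since the CATCH block reports the error and returns control normally,
do_autovacuum() marks the slot unused afterwards, so a failing item is
dropped after one attempt rather than retried.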

Also, I think your idea doesn't solve my issue, which is that REINDEX
CONCURRENTLY causes vacuum to leave errors in my logs: discarding the
failed work item instead of retrying it would still log the error once.

I checked that the first patch avoids the issue and the second one does not.

--
Justin
