From: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
To: Justin Pryzby <pryzby(at)telsasoft(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: error_severity of brin work item
Date: 2020-11-23 19:39:57
Message-ID: 20201123193957.GA21810@alvherre.pgsql
Lists: pgsql-hackers
I think this formulation (attached v3) has fewer moving parts.
However, now that I did that, I wonder if this is really the best
approach to solve this problem. Maybe instead of doing this at the BRIN
level, it should be handled at the autovacuum level, by having the worker
copy the work item to local memory and remove it from the shared list as
soon as it starts processing it. That way, if *any* error occurs while
executing the item, it will go away instead of being retried for all
eternity.
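
For illustration only, here is a rough sketch of that idea (this is not
the attached patch; the struct and function names follow
src/backend/postmaster/autovacuum.c, but the loop is simplified). The
worker copies the claimed item into local memory and frees the shared
slot before executing it, so an error during execution cannot leave the
item behind to be retried forever:

```c
/* Sketch of the idea, simplified from autovacuum.c -- not the attached patch */
LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
for (i = 0; i < NUM_WORKITEMS; i++)
{
    AutoVacuumWorkItem *workitem = &AutoVacuumShmem->av_workItems[i];
    AutoVacuumWorkItem  localitem;

    if (!workitem->avw_used ||
        workitem->avw_active ||
        workitem->avw_database != MyDatabaseId)
        continue;

    /*
     * Copy the item to local memory and release the shared slot right
     * away; if perform_work_item() errors out, the item is already gone
     * from the shared list and will not be retried.
     */
    localitem = *workitem;
    workitem->avw_used = false;
    LWLockRelease(AutovacuumLock);

    perform_work_item(&localitem);

    LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
}
LWLockRelease(AutovacuumLock);
```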
A preliminary patch for that is attached as autovacuum-workitem.patch.
I propose to clean that up and apply it instead of your proposed fix.
| Attachment | Content-Type | Size |
|---|---|---|
| v3-0001-Avoid-errors-in-brin-summarization.patch | text/x-diff | 2.9 KB |
| autovacuum-workitem.patch | text/x-diff | 1.1 KB |