Re: POC: Cleaning up orphaned files using undo logs

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cleaning up orphaned files using undo logs
Date: 2019-07-05 14:09:42
Message-ID: CA+Tgmoavy8hyi0K=cnrvQF9rb09ya4XvxYss5uS4hx_xVPpcZg@mail.gmail.com
Lists: pgsql-hackers

On Tue, Jun 25, 2019 at 4:00 PM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> Fair enough. I have implemented it based on next_retry_at and use a
> constant delay of 10s for the next retry. I have used a define instead
> of a GUC, as all the other constants for similar things are defined
> that way as of now. One thing to note is that we want the linger time
> (defined as UNDO_WORKER_LINGER_MS) for an undo worker to be more than
> the failure retry time (defined as UNDO_FAILURE_RETRY_DELAY_MS);
> otherwise, the undo worker can exit before retrying the failed requests.

Uh, I think we want exactly the opposite: we want the workers to exit
before retrying, so that there's a chance for other databases to get
processed. Am I confused?
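
For illustration, a minimal standalone sketch of the timing relationship
(this is not the patch's code; the linger value, the UndoRequest shape, and
the helper names are made-up stand-ins, only the two constant names come
from the discussion above):

#include <stdbool.h>
#include <stdint.h>

#define UNDO_FAILURE_RETRY_DELAY_MS  10000   /* retry a failed request after 10s */
#define UNDO_WORKER_LINGER_MS        20000   /* hypothetical value: how long an idle worker lingers */

typedef struct UndoRequest
{
    int64_t     next_retry_at;   /* earliest time (ms) at which the request may be retried */
} UndoRequest;

/* On failure, push the request out by a constant delay. */
static void
schedule_retry(UndoRequest *req, int64_t now_ms)
{
    req->next_retry_at = now_ms + UNDO_FAILURE_RETRY_DELAY_MS;
}

/* An idle worker exits once it has lingered past UNDO_WORKER_LINGER_MS. */
static bool
worker_should_exit(int64_t idle_since_ms, int64_t now_ms)
{
    return (now_ms - idle_since_ms) >= UNDO_WORKER_LINGER_MS;
}

With linger > retry delay, the same worker is still around when
next_retry_at arrives and retries the failed request itself; with linger <
retry delay, the worker exits first and the retry has to wait for a worker
to be launched for that database again, which is the trade-off in question
here.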

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
