Re: wrong fds used for refilenodes after pg_upgrade relfilenode changes Reply-To:

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Subject: Re: wrong fds used for refilenodes after pg_upgrade relfilenode changes Reply-To:
Date: 2022-04-05 17:07:12
Message-ID: CA+TgmoaJqH_hBdY4hjNq=iRLpZfNg2fDKMStqQ7LZb12mA+pMw@mail.gmail.com
Lists: pgsql-hackers

On Mon, Apr 4, 2022 at 10:20 PM Thomas Munro <thomas(dot)munro(at)gmail(dot)com> wrote:
> > The checkpointer never takes heavyweight locks, so the opportunity
> > you're describing can't arise.
>
> <thinks harder> Hmm, oh, you probably meant the buffer interlocking
> in SyncOneBuffer(). It's true that my most recent patch throws away
> more requests than it could, by doing the level check at the end of
> the loop over all buffers instead of adding some kind of
> DropPendingWritebacks() in the barrier handler. I guess I could find
> a way to improve that, basically checking the level more often instead
> of at the end, but I don't know if it's worth it; we're still throwing
> out an arbitrary percentage of writeback requests.
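
To make that last idea concrete, the barrier-side piece could be something
as small as this (purely illustrative, not a patch; the field names are
from WritebackContext in buf_internals.h):

    /*
     * Hypothetical helper along the lines described above -- not in the
     * tree, just to make the idea concrete.  A ProcSignalBarrier handler
     * could call this to forget queued writeback hints instead of issuing
     * them against possibly-stale file descriptors.  The dirty data still
     * gets written out by the kernel eventually; we only skip the explicit
     * flush hint.
     */
    static void
    DropPendingWritebacks(WritebackContext *wb_context)
    {
        wb_context->nr_pending = 0;
    }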

Doesn't every backend have its own set of pending writebacks?
BufferAlloc() calls
ScheduleBufferTagForWriteback(&BackendWritebackContext, ...)?
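
Just to spell out the split I'm thinking of (paraphrasing the call sites as
I read them, not quoting them verbatim):

    /*
     * Regular backends queue writeback hints into a backend-local context
     * when BufferAlloc() evicts a dirty buffer:
     */
    ScheduleBufferTagForWriteback(&BackendWritebackContext, &buf->tag);

    /*
     * The checkpointer, by contrast, uses its own WritebackContext, local
     * to BufferSync(), which SyncOneBuffer() fills as it scans the buffer
     * pool:
     */
    WritebackContext wb_context;

    WritebackContextInit(&wb_context, &checkpoint_flush_after);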

--
Robert Haas
EDB: http://www.enterprisedb.com
