Re: pg15b2: large objects lost on upgrade

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Justin Pryzby <pryzby(at)telsasoft(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Bruce Momjian <bruce(at)momjian(dot)us>, Shruthi Gowda <gowdashru(at)gmail(dot)com>
Subject: Re: pg15b2: large objects lost on upgrade
Date: 2022-07-07 17:38:44
Message-ID: CA+TgmoZDg-osUEV5FkXiL7zuVT2=m_ABVP5W9uhaAdBRnhgJPQ@mail.gmail.com
Lists: pgsql-hackers

On Thu, Jul 7, 2022 at 1:10 PM Justin Pryzby <pryzby(at)telsasoft(dot)com> wrote:
> Maybe it's a good idea to check that the file is empty before unlinking...

If we want to verify that there are no large objects in the cluster,
we could do that in check_new_cluster_is_empty(). However, even if
there aren't any, the file's length could still be greater than 0: if
large objects existed previously and were later removed, deleting the
rows doesn't shrink the underlying relation file. So it's not entirely
obvious to me that we should refuse to remove a non-empty file.
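For illustration, the check Justin suggested might look something like the sketch below. This is a hypothetical helper, not pg_upgrade's actual code; the name unlink_if_empty and the return convention are made up for this example:

```c
/*
 * Hypothetical sketch: unlink a file only if it has zero length.
 * Returns 0 on successful unlink, 1 if the file is non-empty (refused),
 * -1 if the file cannot be stat'ed or unlinked.
 */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static int
unlink_if_empty(const char *path)
{
	struct stat st;

	if (stat(path, &st) != 0)
		return -1;				/* cannot stat the file */
	if (st.st_size != 0)
		return 1;				/* non-empty: refuse to unlink */
	return unlink(path);		/* empty: safe to remove */
}
```

As the message above notes, the catch is that a zero-row pg_largeobject can still back a non-zero-length file, so a size check like this would spuriously refuse in that case.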

--
Robert Haas
EDB: http://www.enterprisedb.com
