From: Fujii Masao <masao(dot)fujii(at)oss(dot)nttdata(dot)com>
To: Nathan Bossart <nathandbossart(at)gmail(dot)com>, Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: remove more archiving overhead
Date: 2022-07-07 14:03:43
Message-ID: 768e6cb3-256d-9c0b-1797-62420ffca7ae@oss.nttdata.com
Lists: pgsql-hackers
On 2022/04/08 7:23, Nathan Bossart wrote:
> On Thu, Feb 24, 2022 at 09:55:53AM -0800, Nathan Bossart wrote:
>> Yes. I found that a crash at an unfortunate moment can produce multiple
>> links to the same file in pg_wal, which seemed bad independent of archival.
>> By fixing that (i.e., switching from durable_rename_excl() to
>> durable_rename()), we not only avoid this problem, but we also avoid trying
>> to archive a file the server is concurrently writing. Then, after a crash,
>> the WAL file to archive should either not exist (which is handled by the
>> archiver) or contain the same contents as any preexisting archives.
>
> I moved the fix for this to a new thread [0] since I think it should be
> back-patched. I've attached a new patch that only contains the part
> related to reducing archiving overhead.
Thanks for updating the patch. It looks good to me.
Barring any objections, I'm planning to commit it.
Regards,
--
Fujii Masao
Advanced Computing Technology Center
Research and Development Headquarters
NTT DATA CORPORATION