Re: Strange decreasing value of pg_last_wal_receive_lsn()

From: Michael Paquier <michael(at)paquier(dot)xyz>
To: godjan • <g0dj4n(at)gmail(dot)com>
Cc: Sergei Kornilov <sk(at)zsrv(dot)org>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Strange decreasing value of pg_last_wal_receive_lsn()
Date: 2020-05-11 06:54:02
Message-ID: 20200511065402.GD88791@paquier.xyz
Lists: pgsql-hackers

On Sun, May 10, 2020 at 06:58:50PM +0500, godjan • wrote:
> synchronous_standby_names=ANY 1(host1, host2)
> synchronous_commit=on

Thanks for the details. I was not sure based on your previous
messages.

> So to understand which standby wrote the last data to disk, I should
> know its receive_lsn or write_lsn.

If you have only access to the standbys, using
pg_last_wal_replay_lsn() should be enough, no? One tricky point is to
make sure that each standby does not have more WAL to replay, though
you can do that by looking at the wait event called
RecoveryRetrieveRetryInterval for the startup process.
Note that when a standby starts and has primary_conninfo set, it would
request streaming to start again at the beginning of the segment as
mentioned, but it does not change the point up to which the startup
process replays the WAL available locally, as that is what takes
priority as a WAL source (the second choice is a WAL archive, and the
third is streaming, if all the options are set in the recovery
configuration).
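
For instance, a minimal sketch of that check, run on each standby,
assuming a server version recent enough to show the startup process
and its wait event in pg_stat_activity:

  -- Report the replay position and what the startup process waits on.
  SELECT pg_last_wal_replay_lsn() AS replay_lsn,
         (SELECT wait_event
            FROM pg_stat_activity
           WHERE backend_type = 'startup') AS startup_wait_event;

If startup_wait_event shows RecoveryRetrieveRetryInterval, the startup
process has run out of WAL from all sources and is only waiting before
retrying, so replay_lsn reflects all the WAL the node had available
locally.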

There are several HA solutions floating around in the community, and I
have been wondering as well whether some of them simply scan the local
pg_wal/ of each standby in this case, even if it is simpler to let the
nodes start and replay up to the latest point available locally.
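
If going the pg_wal/ scanning route, a rough sketch could rely on
pg_ls_waldir() (PostgreSQL 10 and newer, for superusers or roles
granted access to it) to see the latest segment each standby holds
locally:

  -- Most recent WAL segment file present in the local pg_wal/.
  SELECT name
    FROM pg_ls_waldir()
   WHERE name ~ '^[0-9A-F]{24}$'  -- skip .history, .partial and such
   ORDER BY name DESC
   LIMIT 1;

That only tells which segments are present, though, not up to which
record they have been written or replayed.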
--
Michael
