Re: Improve handling of parameter differences in physical replication

From: Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>
To: peter(dot)eisentraut(at)2ndquadrant(dot)com
Cc: masahiko(dot)sawada(at)2ndquadrant(dot)com, alvherre(at)2ndquadrant(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Improve handling of parameter differences in physical replication
Date: 2020-03-11 02:06:37
Message-ID: 20200311.110637.1319131984113864409.horikyota.ntt@gmail.com
Lists: pgsql-hackers

At Tue, 10 Mar 2020 14:47:47 +0100, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com> wrote in
> On 2020-03-10 09:57, Kyotaro Horiguchi wrote:
> >> Well, I meant to periodically send warning messages while waiting for
> >> a parameter change, that is, after exhausting resources and stopping
> >> recovery. In this situation the user needs to notice that as soon as
> >> possible.
> > If we lose the connection, the standby continues to complain about the
> > lost connection every 5 seconds. This is a situation of that kind.
>
> My argument is that it's not really the same. If a standby is
> disconnected for more than a few minutes, it's really not going to be
> a good standby anymore after a while. In this case, however, having
> certain parameter discrepancies is really harmless and you can run
> with it for a long time. I'm not strictly opposed to a periodic
> warning, but it's unclear to me how we would find a good interval.

I meant the behavior after recovery is paused. That situation can lead
to loss of WAL, or to the master running out of WAL storage. A 5-second
interval would indeed be too frequent, but maybe we need at least one
message per WAL segment's worth of received WAL?
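
As an illustration only (this is not from the patch), the throttling I
have in mind could look roughly like the sketch below inside the wait
loop. recovery_pause_requested() and the enclosing function are
hypothetical placeholders; GetWalRcvWriteRecPtr(), wal_segment_size,
ereport() and pg_usleep() are existing facilities:

    /* Sketch: re-emit the WARNING about once per received WAL segment
     * while recovery stays paused, instead of on a fixed short timer. */
    XLogRecPtr  last_warned = GetWalRcvWriteRecPtr(NULL, NULL);

    while (recovery_pause_requested())      /* hypothetical condition */
    {
        XLogRecPtr  received = GetWalRcvWriteRecPtr(NULL, NULL);

        if (received - last_warned >= wal_segment_size)
        {
            ereport(WARNING,
                    (errmsg("recovery remains paused because of an insufficient parameter setting"),
                     errhint("Change the parameter and restart the server to continue.")));
            last_warned = received;
        }

        pg_usleep(1000000L);                /* 1 second */
        CHECK_FOR_INTERRUPTS();
    }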

> > By the way, when I reduced max_connections only on the master and then
> > took exclusive locks until the standby complained about lock exhaustion,
> > I saw a WARNING mentioning max_locks_per_transaction instead of
> > max_connections.
...
> > WARNING: recovery paused because of insufficient setting of parameter
> > max_locks_per_transaction (currently 10)
> > DETAIL: The value must be at least as high as on the primary server.
> > HINT: Recovery cannot continue unless the parameter is changed and the
> > server restarted.
> > CONTEXT: WAL redo at 0/6004A80 for Standb
>
> This is all a web of half-truths. The lock tables are sized based on
> max_locks_per_xact * (MaxBackends + max_prepared_xacts). So if you
> run out of lock space, we currently recommend (in the single-server
> case), that you raise max_locks_per_xact, but you could also raise
> max_prepared_xacts or something else. So this is now the opposite
> case where the lock table on the master was bigger because of
> max_connections.

Yeah, I know. So I'm not sure whether the checks on individual GUC
variables (other than wal_level) make sense. We might not even need
the WARNING on parameter change.
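
Just to spell out the arithmetic behind the sizing you describe (the real
macro is NLOCKENTS() in src/backend/storage/lmgr/lock.c; the numbers below
are made up, and MaxBackends is simplified to max_connections, ignoring
autovacuum and background workers):

    #include <stdio.h>

    /* Rough model of the shared lock table size:
     * max_locks_per_xact * (MaxBackends + max_prepared_xacts) */
    static long
    lock_table_entries(long max_locks_per_xact, long max_backends,
                       long max_prepared_xacts)
    {
        return max_locks_per_xact * (max_backends + max_prepared_xacts);
    }

    int
    main(void)
    {
        /* primary: max_connections = 100, standby: max_connections = 10,
         * max_locks_per_transaction = 64 on both, no prepared xacts */
        printf("primary lock table: %ld entries\n",
               lock_table_entries(64, 100, 0));
        printf("standby lock table: %ld entries\n",
               lock_table_entries(64, 10, 0));
        return 0;
    }

So the standby's table is an order of magnitude smaller even though
max_locks_per_transaction matches, which is why the WARNING naming
max_locks_per_transaction looks misleading there.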

> We could make the advice less specific and just say, in essence, you
> need to make some parameter changes; see earlier for some hints.

In that sense, the direction mentioned above seems sensible.
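
For example (purely illustrative wording, not a concrete proposal;
param_name and currValue are placeholders), the message could drop the
parameter-specific advice:

    ereport(WARNING,
            (errmsg("recovery paused because of insufficient resource settings"),
             errdetail("Parameter \"%s\" is set to %d, which is lower than on the primary server.",
                       param_name, currValue),
             errhint("You might need to increase this or a related parameter and restart the server.")));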

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center
