Re: Failover and vacuum

From: Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
To: Pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org>
Subject: Re: Failover and vacuum
Date: 2025-03-29 11:08:01
Message-ID: CANzqJaAfW3iqA+Ohf2skF43_7nhnZYd6fxLuOugVJgcZDs11QQ@mail.gmail.com
Lists: pgsql-admin

On Thu, Mar 27, 2025 at 1:41 PM Raj <rajeshkumar(dot)dba09(at)gmail(dot)com> wrote:

> Hi
>
> We have 2 nodes (primary and standby, Postgres 15.6) in OpenShift
> Kubernetes.
>
> Patroni setup. 300 GB of data. No failover in the last six months.
> Suddenly, after a failover, there were a lot of issues such as too many
> connections and slowness.
>
> Is it because ANALYZE has not been run on the new node?
>

Is postgresql.conf configured the same on both nodes?

max_connections being lower on the replica node would certainly and
*immediately* cause "too many connections" errors.

diff -y --suppress-common-lines $PGDATA/postgresql.conf \
    <(ssh -q otherserver "cat $PGDATA/postgresql.conf")

Vacuuming and statistics *are* replicated: that data is in tables, so it
*must* be replicated. However, *when* each table was last vacuumed and
analyzed is apparently not on disk, so it doesn't survive the failover.
Thus, the new primary can't know how many tuples have been modified since
the last ANALYZE, or how many dead and inserted tuples have accumulated
since the last VACUUM.
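
To see what the new primary currently thinks, something like this (just a
sketch using the standard pg_stat_user_tables columns, sorted to surface
the tables with the most dead tuples):

psql -Atc "SELECT relname, last_autovacuum, last_autoanalyze, n_dead_tup,
                  n_mod_since_analyze
           FROM pg_stat_user_tables
           ORDER BY n_dead_tup DESC LIMIT 10;"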

So I'd run a vacuumdb --analyze-in-stages soon after the switchover.
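
Something along these lines (the --jobs count is only an example; tune it
to your hardware):

vacuumdb --all --analyze-in-stages --jobs=4

--analyze-in-stages builds minimal statistics first and refines them in
later passes, so the planner gets usable numbers quickly.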

--
Death to <Redacted>, and butter sauce.
Don't boil me, I'm still alive.
<Redacted> lobster!
