Re: Failover and vacuum

From: Raj <rajeshkumar(dot)dba09(at)gmail(dot)com>
To: Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
Cc: Pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org>
Subject: Re: Failover and vacuum
Date: 2025-03-29 13:10:47
Message-ID: CAJk5Atb42rROeOmqAwDPD9jCG-wH6eicUGmCAh_m==E8d2RsRQ@mail.gmail.com
Lists: pgsql-admin

Great, it's the same configuration on both nodes. We are not taking
connections from the standby anyway.

On Sat, 29 Mar 2025, 16:38 Ron Johnson, <ronljohnsonjr(at)gmail(dot)com> wrote:

> On Thu, Mar 27, 2025 at 1:41 PM Raj <rajeshkumar(dot)dba09(at)gmail(dot)com> wrote:
>
>> Hi
>>
>> We have 2 nodes (primary and standby, PostgreSQL 15.6) in OpenShift
>> Kubernetes.
>>
>> Patroni setup, 300 GB of data. No failover in the last six months. Suddenly,
>> after a failover, there were a lot of issues such as too many connections
>> and slowness.
>>
>> Is it due to ANALYZE not having been run on the new node?
>>
>
> Is postgresql.conf configured the same on both nodes?
>
> max_connections being lower on the replica node would certainly and
> *immediately* cause "too many connections" errors.
>
>
> diff -y --suppress-common-lines $PGDATA/postgresql.conf <(ssh -q otherserver "cat $PGDATA/postgresql.conf")
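>
> As a narrower, immediate check (a minimal sketch; "primary-node" and
> "standby-node" are placeholder host names for your actual Patroni pods),
> compare just that one setting on both nodes:
>
>   # assumed host names and user; substitute your own connection details
>   psql -h primary-node -U postgres -tAc "SHOW max_connections;"
>   psql -h standby-node -U postgres -tAc "SHOW max_connections;"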
>
> Vacuuming and statistics *are* replicated (that data is in tables, so it
> *must* be replicated). However, *when* tables were last vacuumed and
> analyzed is apparently not on disk. Thus, the new primary can't know the
> number of tuples modified since the last ANALYZE, or the number of dead
> and newly inserted records since the last VACUUM.
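>
> To see which of those counters the new primary is missing, something like
> this sketch works (any role that can read pg_stat_user_tables will do):
>
>   psql -c "SELECT relname, last_vacuum, last_autovacuum, last_analyze,
>                   last_autoanalyze, n_mod_since_analyze, n_dead_tup
>            FROM pg_stat_user_tables
>            ORDER BY n_mod_since_analyze DESC
>            LIMIT 20;"
>
> Right after promotion those timestamps and counters typically come back
> NULL and zero, which is why autovacuum/autoanalyze can't tell what needs
> attention yet.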
>
> Thus, I'd do a vacuumdb --analyze-in-stages soon after the switch-over.
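>
> A minimal sketch of that (the connection options are assumptions; adjust
> for your environment):
>
>   vacuumdb --all --analyze-in-stages -h new-primary -U postgres
>
> --analyze-in-stages first builds rough statistics quickly and then refines
> them in later passes, so the planner has something usable almost
> immediately after the switch-over.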
>
> --
> Death to <Redacted>, and butter sauce.
> Don't boil me, I'm still alive.
> <Redacted> lobster!
>
