Re: Replication slot stats misgivings

From: vignesh C <vignesh21(at)gmail(dot)com>
To: Bharath Rupireddy <bharath(dot)rupireddyforpostgres(at)gmail(dot)com>
Cc: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Subject: Re: Replication slot stats misgivings
Date: 2021-04-03 17:41:53
Message-ID: CALDaNm1SJEFxt4b1iYO1mTZ1Rqc58NJVp0OY9k2VXpM-Aea6Ew@mail.gmail.com
Lists: pgsql-hackers

On Fri, Apr 2, 2021 at 11:28 AM Bharath Rupireddy
<bharath(dot)rupireddyforpostgres(at)gmail(dot)com> wrote:
>
> On Fri, Apr 2, 2021 at 9:57 AM vignesh C <vignesh21(at)gmail(dot)com> wrote:
> > Thanks for the comments, I will fix the comments and provide a patch
> > for this soon.
>

Thanks for the comments.

> Here are some comments:
> 1) How about something like below
> + (errmsg("skipping \"%s\" replication slot
> statistics as the statistic collector process does not have enough
> statistic slots",
> instead of
> + (errmsg("skipping \"%s\" replication
> slot's statistic as the statistic collector process does not have
> enough statistic slots",
>

Modified.

> 2) Does it mean "pg_statistic slots" when we say "statistic slots" in
> the above warning? If yes, why can't we use "pg_statistic slots"
> instead of "statistic slots" as with another existing message
> "insufficient pg_statistic slots for array stats"?
>

Here it is pg_stat_replication_slots that will not have enough slots. I
changed the message to:
errmsg("skipping \"%s\" replication slot statistics as
pg_stat_replication_slots does not have enough slots"
Thoughts?

> 3) Should we change the if condition to max_replication_slots <=
> nReplSlotStats instead of max_replication_slots == nReplSlotStats? In
> the scenario, it is mentioned that "one of the replication slots is
> dropped", will this issue occur when multiple replication slots are
> dropped?
>

I felt it should be max_replication_slots == nReplSlotStats. If
max_replication_slots is 5, we can store statistics for 5 replication
slots at indexes 0 through 4; once nReplSlotStats reaches 5 there is no
space left for another entry. I think this need not be changed.
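
To illustrate the boundary reasoning, here is a minimal sketch (not the
actual patch; variable and type names such as replSlotStat, replSlotStats,
nReplSlotStats and PgStat_ReplSlotStats are taken from the thread, and the
slotname field is assumed to be a plain C string):

    /*
     * Sketch only: replSlotStats is assumed to hold max_replication_slots
     * entries, so valid indexes are 0 .. max_replication_slots - 1.  Once
     * nReplSlotStats == max_replication_slots the array is full and the
     * statistics entry just read from the file has to be skipped.
     */
    if (nReplSlotStats == max_replication_slots)
    {
        ereport(WARNING,
                (errmsg("skipping \"%s\" replication slot statistics as pg_stat_replication_slots does not have enough slots",
                        replSlotStat.slotname)));
    }
    else
    {
        /* store the entry and advance the count of stored slot stats */
        memcpy(&replSlotStats[nReplSlotStats], &replSlotStat,
               sizeof(PgStat_ReplSlotStats));
        nReplSlotStats++;
    }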

> 4) Let's end the statement after this and start a new one, something like below
> + * this. To avoid writing beyond the max_replication_slots
> instead of
> + * this, to avoid writing beyond the max_replication_slots
>

Changed it.

> 5) How about something like below
> + * this. To avoid writing beyond the max_replication_slots,
> + * this replication slot statistics information will
> be skipped.
> + */
> instead of
> + * this, to avoid writing beyond the max_replication_slots
> + * these replication slot statistic information will
> be skipped.
> + */
>

Changed it.

> 6) Any specific reason to use a new local variable replSlotStat and
> later memcpy into replSlotStats[nReplSlotStats]? Instead we could
> directly fread into &replSlotStats[nReplSlotStats] and do
> memset(&replSlotStats[nReplSlotStats], 0,
> sizeof(PgStat_ReplSlotStats)); before the warnings. As warning
> scenarios seem to be less frequent, we could avoid doing memcpy
> always.
> - if (fread(&replSlotStats[nReplSlotStats], 1,
> sizeof(PgStat_ReplSlotStats), fpin)
> + if (fread(&replSlotStat, 1, sizeof(PgStat_ReplSlotStats), fpin)
>
> + memcpy(&replSlotStats[nReplSlotStats], &replSlotStat,
> sizeof(PgStat_ReplSlotStats));
>

I had wanted to avoid adding the memcpy handling in multiple places, but
your explanation makes sense: keeping the extra cleanup only in the
failure path lets the common path be faster. Changed it; a rough sketch
of the resulting shape is below.
These comments are fixed in the v2 patch posted in my previous mail.
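
For illustration only, a minimal sketch of that shape (this is not the v2
patch; it assumes the bounds check from comment 3 has already guaranteed
nReplSlotStats < max_replication_slots, that statfile names the statistics
file being read, and that "done" is the function's cleanup label):

    /*
     * Sketch only: read the record straight into its destination entry and
     * clean up only in the rare error path, instead of reading into a local
     * variable and memcpy'ing it on every record.
     */
    if (fread(&replSlotStats[nReplSlotStats], 1,
              sizeof(PgStat_ReplSlotStats), fpin) != sizeof(PgStat_ReplSlotStats))
    {
        ereport(WARNING,
                (errmsg("corrupted statistics file \"%s\"", statfile)));
        /* wipe the partially read entry so it is not mistaken for valid data */
        memset(&replSlotStats[nReplSlotStats], 0, sizeof(PgStat_ReplSlotStats));
        goto done;
    }
    nReplSlotStats++;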

Regards,
Vignesh
