Re: Fix overflow in DecodeInterval

From: Joseph Koshakow <koshy44(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Andres Freund <andres(at)anarazel(dot)de>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Fix overflow in DecodeInterval
Date: 2022-04-02 18:22:58
Message-ID: CAAvxfHd730o24gGhu5YoL1Sxt0B=KfoL5aq5uttufjqLEiKitA@mail.gmail.com
Lists: pgsql-hackers

On Sat, Apr 2, 2022 at 1:29 PM Joseph Koshakow <koshy44(at)gmail(dot)com> wrote:
>
> On Fri, Apr 1, 2022 at 8:06 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> >
> > Joseph Koshakow <koshy44(at)gmail(dot)com> writes:
> > > * The existing code for rounding had a lot of int to double
> > > casting and vice versa. I *think* that doubles are able to completely
> > > represent the range of ints. However doubles are not able to represent
> > > the full range of int64. After making the change I started noticing
> > > a lot of lossy behavior. One thought I had was to change the doubles
> > > to long doubles, but I wasn't able to figure out if long doubles could
> > > completely represent the range of int64. Especially since their size
> > > varies depending on the architecture. Does anyone know the answer to
> > > this?
> >
> > I agree that relying on long double is not a great plan. However,
> > I'm not seeing where there's a problem. AFAICS the revised code
> > only uses doubles to represent fractions from the input, ie if you
> > write "123.456 hours" then the ".456" is carried around for awhile
> > as a float. This does not seem likely to pose any real-world
> > problem; do you have a counterexample?
>
> Yeah, you're correct, I don't think there is any problem with just
> using double. I don't exactly remember why I thought long double
> was necessary in the revised code. I probably just confused
> myself because it would have been necessary with the old
> rounding code, but not the revised code.

Ok, I actually remember now: the issue is with the rounding
code in AdjustFractMicroseconds.

> frac *= scale;
> usec = (int64) frac;
>
> /* Round off any fractional microsecond */
> frac -= usec;
> if (frac > 0.5)
>     usec++;
> else if (frac < -0.5)
>     usec--;

I believe it's possible for `frac -= usec;` to leave a value greater
than 1 or less than -1 due to the lossiness of int64 to double
conversions, in which case we'd round in the wrong direction. I don't
have a concrete counterexample, and at worst the result would only be
a couple of microseconds off, so it's probably not a huge deal.
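
(For what it's worth, here's a quick standalone illustration of that
lossiness, separate from the patch itself: doubles only have 53
mantissa bits, so int64 values beyond 2^53 can't all be represented
exactly.)

> #include <stdio.h>
> #include <stdint.h>
>
> int main(void)
> {
>     int64_t big = 9007199254740993;    /* 2^53 + 1 */
>     double  d = (double) big;          /* rounds to nearest double, 2^53 */
>
>     /* prints: 9007199254740993 -> 9007199254740992 */
>     printf("%lld -> %lld\n", (long long) big, (long long) (int64_t) d);
>     return 0;
> }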

If I'm right about the above, and we care enough to fix it, then I think
it can be fixed with the following:

> frac *= scale;
> usec = (int64) frac;
>
> /* Remove the non-fractional part from frac */
> frac -= (double) usec;
> /* Adjust for any lossy conversion from int64 to double */
> while (frac < -1)
>     frac++;
> while (frac > 1)
>     frac--;
>
> /* Round off any fractional microsecond */
> if (frac > 0.5)
>     usec++;
> else if (frac < -0.5)
>     usec--;

- Joe Koshakow
