From: Dean Rasheed <dean(dot)a(dot)rasheed(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Inaccurate results from numeric ln(), log(), exp() and pow()
Date: 2015-11-15 09:40:55
Message-ID: CAEZATCUJa4oLY8YEicaWem+cH8E2QULehS-2eS52G4u7OMES1w@mail.gmail.com
Lists: pgsql-hackers

On 14 November 2015 at 20:05, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> I committed this with that change and some other
> mostly-cosmetic revisions.

Thanks.

> Notable non-cosmetic changes:
>
> * I forced all the computed rscales to be at least 0, via code like
> local_rscale = Max(local_rscale, NUMERIC_MIN_DISPLAY_SCALE);
>
> You had done that in some places but not all, with the result that
> functions like mul_var could be invoked with negative rscale and hence
> produce outputs with negative dscale. This is not allowed according to
> the comments in the code, and while it might accidentally fail to fail,
> I do not trust the code to operate properly in that regime. It might be
> worth revisiting the whole question of negative dscale with an eye to not
> calculating digits we don't need, but I think that's material for a
> completely separate patch.
>

Hmm, I wondered about that when deciding on the rscale for the sqrt()s
in the range-reduction code for ln_var(). For very large inputs,
forcing the rscale to be non-negative can cause the sqrt() to compute
far more digits than are needed, but I take your point that it might
be playing with fire if the underlying functions aren't guaranteed to
handle negative rscales properly.
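
To illustrate the shape of that range reduction, here is a minimal
sketch in plain C (double stands in for NumericVar, and the tolerances
are made up; this is not the numeric.c code). The idea is
ln(x) = 2^k * ln(x^(1/2^k)): take sqrt() repeatedly until the argument
is close to 1, evaluate a short series there, then scale back up.

#include <math.h>
#include <stdio.h>

/* Sketch of ln() via sqrt()-based range reduction; assumes x > 0. */
static double
ln_by_range_reduction(double x)
{
    int     k = 0;
    double  z;
    double  term;
    double  sum = 0.0;
    int     n;

    /* Range reduction: ln(x) = 2^k * ln(x^(1/2^k)) */
    while (fabs(x - 1.0) > 0.1)
    {
        x = sqrt(x);
        k++;
    }

    /* Taylor series ln(1 + z) = z - z^2/2 + z^3/3 - ..., with z = x - 1 */
    z = x - 1.0;
    term = z;
    for (n = 1; fabs(term) > 1e-17; n++)
    {
        sum += term / n;
        term *= -z;
    }

    /* Undo the k halvings: result = sum * 2^k */
    return ldexp(sum, k);
}

int
main(void)
{
    printf("sketch: %.15g  libm: %.15g\n",
           ln_by_range_reduction(9.9e99), log(9.9e99));
    return 0;
}

For an input like 2.0^435411, ln(x) is around 300000, so it takes
roughly 22 halvings before the argument is close to 1; if each of
those intermediate sqrt() calls is forced to carry a non-negative
rscale, they compute far more digits than the final result needs,
which is where the worst-case slowdown below comes from.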

In most normal cases it makes little difference to performance. For
ln(9.9e99) it's only around 5% slower than my original code, and for
ln(9.9e999) it's around 5 times slower, but still pretty fast (520
microseconds vs. 128). Once you get to cases like ln(2.0^435411),
however, it is pretty bad (13 seconds vs. 165 ms). But that is a very
contrived worst-case example, and in fact that performance is no
different from what it was before the patch. I very much doubt anyone
will ever do such a computation, except out of academic interest.

> * I moved some of the new regression test cases into numeric_big.sql.
> While they don't add more than a couple hundred msec on my dev machine,
> they're probably going to cost a lot more than that on the slower
> buildfarm machines, and I'm not sure that they add enough benefit to be
> worth running all the time. This code really shouldn't suffer from many
> portability issues.
>
> (I am going to go run numeric_big manually on all my own buildfarm
> critters just to be sure, though.)
>
> regards, tom lane

OK, that seems fair enough.

Thanks.

Regards,
Dean
