Re: Numeric x^y for negative x

From: Dean Rasheed <dean(dot)a(dot)rasheed(at)gmail(dot)com>
To: Jaime Casanova <jcasanov(at)systemguards(dot)com(dot)ec>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Yugo NAGATA <nagata(at)sraoss(dot)co(dot)jp>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Dave Page <dpage(at)pgadmin(dot)org>
Subject: Re: Numeric x^y for negative x
Date: 2021-09-12 19:36:05
Message-ID: CAEZATCV-Ceu+HpRMf416yUe4KKFv=tdgXQAe5-7S9tD=5E-T1g@mail.gmail.com
Lists: pgsql-hackers

On Thu, Sep 02, 2021 at 07:27:09AM +0100, Dean Rasheed wrote:
>
> It's mostly done, but there is one more corner case where it loses
> precision. I'll post an update shortly.
>

I spent some more time looking at this, testing a variety of edge
cases, and the only case I could find that produces inaccurate results
was the one I noted previously -- computing x^y when x is very close
to 1 (less than around 1e-1000 away from it, so that ln_dweight is
less than around -1000). In this case, it loses precision due to the
way local_rscale is set for the initial low-precision calculation:

local_rscale = 8 - ln_dweight;
local_rscale = Max(local_rscale, NUMERIC_MIN_DISPLAY_SCALE);
local_rscale = Min(local_rscale, NUMERIC_MAX_DISPLAY_SCALE);

Here, local_rscale needs to be allowed to exceed NUMERIC_MAX_DISPLAY_SCALE
(1000); otherwise the approximate result loses all precision, leading to a
poor choice of scale for the full-precision calculation.

So the fix is just to remove the upper bound on this local_rscale, as
we do for the full-precision calculation. This doesn't impact
performance, because it's only computing the logarithm to 8
significant digits at this stage, and when x is very close to 1 like
this, ln_var() has very little work to do -- there is no argument
reduction to do, and the Taylor series terminates on the second term,
since 1-x is so small.
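
In code terms, the change amounts to something like the following in
power_var() -- a sketch only, with my own comment wording; the attached
patch is the authoritative version:

/*
 * Choose the rscale for the initial low-precision estimate of ln(x).
 * When x is very close to 1, ln_dweight can be far below -1000, so only
 * apply the lower bound here; clamping to NUMERIC_MAX_DISPLAY_SCALE
 * would throw away all the significant digits of the estimate.
 */
local_rscale = 8 - ln_dweight;
local_rscale = Max(local_rscale, NUMERIC_MIN_DISPLAY_SCALE);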

Coming up with a test case that doesn't have thousands of digits is a
bit fiddly, so I chose one where most of the significant digits of the
result are a long way after the decimal point and shifted them up,
which makes the loss of precision in HEAD more obvious. The expected
result can be verified using bc with a scale of 2000.

Regards,
Dean

Attachment: fix-numeric-power-precision-loss.patch (text/x-patch, 1.7 KB)
