Re: Inaccurate results from numeric ln(), log(), exp() and pow()

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Dean Rasheed <dean(dot)a(dot)rasheed(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Inaccurate results from numeric ln(), log(), exp() and pow()
Date: 2015-09-16 14:32:54
Message-ID: 14453.1442413974@sss.pgh.pa.us
Lists: pgsql-hackers

Dean Rasheed <dean(dot)a(dot)rasheed(at)gmail(dot)com> writes:
> ... For example, exp() works for inputs up to 6000. However, if you
> compute exp(5999.999) the answer is truly huge -- probably only of
> academic interest to anyone. With HEAD, exp(5999.999) produces a
> number with 2609 significant digits in just 1.5ms (on my ageing
> desktop box). However, only the first 9 digits returned are correct.
> The other 2600 digits are pure noise. With my patch, all 2609 digits
> are correct (confirmed using bc), but it takes 27ms to compute, making
> it 18x slower.
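
As a quick aside, one way to cross-check leading digits like these independently of bc is Python's decimal module, whose exp() is correctly rounded at whatever precision the context is set to. A minimal sketch, with the precision chosen only to comfortably cover the ~2609 digits mentioned above:

    from decimal import Decimal, getcontext

    # A few guard digits beyond the ~2609 significant digits expected,
    # so the prefix we compare against is itself reliable.
    getcontext().prec = 2620

    # Correctly rounded exp() at the configured precision.
    reference = Decimal("5999.999").exp()

    # Compare this prefix against the output of
    #   SELECT exp(5999.999::numeric);
    print(str(reference)[:40])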

> AFAICT, this kind of slowdown only happens in cases like this where a
> very large number of digits are being returned. It's not obvious what
> we should be doing in cases like this. Is a performance reduction like
> that acceptable to generate the correct answer? Or should we try to
> produce a more approximate result more quickly, and where do we draw
> the line?

FWIW, in that particular example I'd happily take the 27ms time to get
the more accurate answer. If it were 270ms, maybe not. I think my
initial reaction to this patch is "are there any cases where it makes
things 100x slower ... especially for non-outrageous inputs?" If not,
sure, let's go for more accuracy.

regards, tom lane
