From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Will Pugh <willpugh(at)gmail(dot)com>
Cc: pgsql-sql(at)postgresql(dot)org
Subject: Re: How does Numeric division determine precision?
Date: 2012-07-12 17:30:19
Message-ID: 29730.1342114219@sss.pgh.pa.us
Lists: pgsql-sql
Will Pugh <willpugh(at)gmail(dot)com> writes:
> It seems that in 9.1, numerics that don't have a specified precision
> and scale have arbitrary scale/precision.
> For many operations this is straightforward. However, when doing a
> division operation that does not terminate, I'm curious about how the
> number of digits is determined.
According to select_div_scale() in src/backend/utils/adt/numeric.c,
/*
* The result scale of a division isn't specified in any SQL standard. For
* PostgreSQL we select a result scale that will give at least
* NUMERIC_MIN_SIG_DIGITS significant digits, so that numeric gives a
* result no less accurate than float8; but use a scale not less than
* either input's display scale.
*/
I wouldn't necessarily claim that that couldn't be improved on,
but that's what it does now.
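As a rough illustration of the rule that comment describes, here is a Python sketch of the scale selection, not the actual implementation: the real select_div_scale() works in base-10000 NUMERIC digits and estimates the quotient's weight from the inputs, so actual result scales are often somewhat larger than this model predicts. The constant 16 (NUMERIC_MIN_SIG_DIGITS) and the 0/1000 display-scale bounds are taken from numeric.c; the simplified integer-digit-count parameter is my assumption.

```python
NUMERIC_MIN_SIG_DIGITS = 16    # minimum significant digits in a division result
NUMERIC_MIN_DISPLAY_SCALE = 0
NUMERIC_MAX_DISPLAY_SCALE = 1000

def select_div_scale(quotient_int_digits, dscale1, dscale2):
    """Simplified model of select_div_scale() in numeric.c.

    quotient_int_digits: estimated digits left of the decimal point
    in the quotient; dscale1/dscale2: the inputs' display scales.
    Returns the number of fractional digits kept in the result.
    """
    # Keep enough fractional digits for ~16 significant digits...
    rscale = NUMERIC_MIN_SIG_DIGITS - quotient_int_digits
    # ...but never less than either input's display scale,
    # and clamp to the allowed display-scale range.
    rscale = max(rscale, dscale1, dscale2, NUMERIC_MIN_DISPLAY_SCALE)
    return min(rscale, NUMERIC_MAX_DISPLAY_SCALE)

# 1 / 3: quotient has no integer digits, both inputs have dscale 0,
# so at least 16 fractional digits are produced.
print(select_div_scale(0, 0, 0))   # 16

# If an input already displays 20 fractional digits, the result keeps them.
print(select_div_scale(0, 20, 0))  # 20
```

In an actual 9.1 server, `SELECT 1::numeric / 3;` returns more than 16 threes because the internal arithmetic rounds the scale up to whole base-10000 digit groups, but the guarantee the comment states, at least float8-level accuracy and no less than either input's display scale, holds either way.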
regards, tom lane