Re: Re: floating point representation

From: Robert Schrem <Robert(dot)Schrem(at)WiredMinds(dot)de>
To: Philip Warner <pjw(at)rhyme(dot)com(dot)au>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Re: floating point representation
Date: 2001-02-21 11:34:04
Message-ID: 01022112545703.17287@pc-robert
Lists: pgsql-hackers

On Wed, 21 Feb 2001, you wrote:
> At 10:19 21/02/01 +0100, Robert Schrem wrote:
> >The advantage would be that we only generate as much ASCII data
> >as absolutely necessary to rebuild the original data exactly.
> >At least this is what I would expect from pg_dump.
>
> pg_dump is only one side of the problem, but the simplest solution might
> be to add an option to dump the hex mantissa, exponent & sign. This should
> be low-cost and an exact representation of the machine version of the number.

The hex dumps should be done in a machine-independent way - I think
that's what you meant when stating mantissa, exponent & sign separately,
right? I think this would be a good solution...
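
Something along these lines might do it - just a rough sketch, not actual
pg_dump code, and the function names are made up: split a float8 into
sign, mantissa and exponent with frexp() and print them in hex, so the
value can be rebuilt bit-for-bit on any box with IEEE-754 doubles:

#include <math.h>
#include <stdio.h>

/* Sketch only: v = sign * mantissa * 2^exponent, printed in hex. */
static void
dump_float8(double v)
{
    int exp;
    double frac = frexp(v, &exp);   /* v = frac * 2^exp, 0.5 <= |frac| < 1 (or 0) */
    unsigned long long mant =
        (unsigned long long) ldexp(fabs(frac), 53);  /* exact: 53 mantissa bits */

    printf("%c0x%llxp%+d\n", frac < 0.0 ? '-' : '+', mant, exp - 53);
}

/* Rebuilding is just the inverse. */
static double
restore_float8(int negative, unsigned long long mant, int exp)
{
    double v = ldexp((double) mant, exp);
    return negative ? -v : v;
}

For 1.0 this prints +0x10000000000000p-52, which reads back to exactly 1.0.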

> The other issues, like what is sent to psql & via interfaces like odbc
> (currently text) should be application/DBA based and setable on a
> per-attribute basis.

You're thinking of an additional clause in the CREATE TABLE statement, something like

CREATE TABLE temperature (
    id serial,
    measure_time timestamp default now() formatted as "hhmmmdd",
    value float formatted as "%3.2f"
);

or maybe

CREATE TABLE temperature (
    id serial,
    measure_time("hhmmmdd") timestamp default now(),
    value("%3.2f") float
);

or is there already something for this in SQL99?
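
Whatever the syntax, the display format and the dump format would have
to stay two different things: something like "%3.2f" is nice to read but
throws bits away, while a round-trip format keeps them all. A tiny
illustration (plain C, just to make the point; nothing PostgreSQL-specific):

#include <stdio.h>

int
main(void)
{
    double v = 1.0 / 3.0;

    printf("%3.2f\n", v);    /* 0.33                - readable, but lossy         */
    printf("%.17g\n", v);    /* 0.33333333333333331 - enough digits to round-trip */
    return 0;
}

So a per-attribute format could drive what psql or ODBC shows, but pg_dump
would still have to use an exact representation underneath.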

> eg. some applications want 1.0000 because the data
> came from a piece of hardware with a known error, and 1.0000 means 1.0000+/-
> 0.00005 etc. Maybe this is just an argument for a new 'number with error'
> type...

I think a float value in a database column has no known error
range, and therefore we should not care about the 'physical' error
of a value in this context. Just think of a computed column in a
VIEW - how can we know for sure how precise such a result is if
we don't have any additional information about the measurement
errors of all operands (or constants) involved?

If you introduced a new type - 'number with error' - that
would be a totally different matter and a big contribution to solving
this. Then you could also handle errors like 1.0000+/-0.00002
precisely - which you can't do just by formatting 1.0000.
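
If such a type got built, the arithmetic could carry the uncertainty
along with the value. Just to sketch the idea (in C, assuming independent
errors that add in quadrature; none of this exists in PostgreSQL today):

#include <math.h>

typedef struct
{
    double value;
    double error;    /* one standard deviation */
} numerr;

/* (a +/- ea) + (b +/- eb): absolute errors of independent values add in quadrature */
static numerr
numerr_add(numerr a, numerr b)
{
    numerr r;

    r.value = a.value + b.value;
    r.error = sqrt(a.error * a.error + b.error * b.error);
    return r;
}

/* (a +/- ea) * (b +/- eb): relative errors add in quadrature (values assumed nonzero) */
static numerr
numerr_mul(numerr a, numerr b)
{
    numerr r;

    r.value = a.value * b.value;
    r.error = fabs(r.value) *
        sqrt((a.error / a.value) * (a.error / a.value) +
             (b.error / b.value) * (b.error / b.value));
    return r;
}

Comparisons, aggregates and so on would then have to decide what to do
with the error term, which is exactly the information a plain float
column doesn't carry.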

robert schrem
