From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Floating-point timestamps versus Range Types
Date: 2010-10-18 19:24:56
Message-ID: 1287429896.15261.10.camel@jdavis-ux.asterdata.local
Lists: pgsql-hackers
On Mon, 2010-10-18 at 14:49 -0400, Tom Lane wrote:
> whereas an int-timestamp build sees these inputs as all the same.
> Thus, to get into trouble you'd need to have a unique index on data that
> conflicts at the microsecond scale but not at the tenth-of-a-microsecond
> scale. This seems implausible. In particular, you didn't get any such
> data from now(), which relies on Unix APIs that don't go below
> microsecond precision. You might conceivably have entered such data
> externally, as I did above, but you'd have to not notice/care that it
> wasn't coming back out at the same precision.
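The collapse Tom describes can be sketched outside PostgreSQL. Float-timestamp builds store seconds since 2000-01-01 as a C double, while integer-timestamp builds store whole microseconds in a 64-bit integer; the example values below are hypothetical, chosen only to differ at the tenth-of-a-microsecond scale:

```python
# Illustrative sketch, not PostgreSQL source code.
# Three inputs that differ only in the seventh decimal place of the
# seconds value stay distinct as doubles, but collapse to one value
# once rounded to integer microseconds.
inputs = [1000000.6547970, 1000000.6547971, 1000000.6547972]

as_float = inputs                                 # float-timestamp storage
as_int = [round(t * 1_000_000) for t in inputs]   # integer-microsecond storage

print(len(set(as_float)))  # 3 distinct values in a float build
print(len(set(as_int)))    # 1 -- an int build sees them as identical
```

This is exactly the condition a unique index would need in order to behave differently across the two builds.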
You can also get there via interval math, like multiplying by a numeric.
That seems slightly more plausible.
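A rough sketch of that route (assumed arithmetic, not PostgreSQL's implementation): multiplying an interval by a numeric like 1/3 naturally yields a sub-microsecond result, which a float build keeps and an int build rounds away:

```python
# Hypothetical models of interval * numeric under the two builds.
def mul_float(interval_secs: float, factor: float) -> float:
    # float build: interval is a double, residue below 1 microsecond survives
    return interval_secs * factor

def mul_int(interval_usecs: int, factor: float) -> int:
    # int build: interval is whole microseconds, result is rounded
    return round(interval_usecs * factor)

f = mul_float(1.0, 1/3)        # ~0.3333333333333333 seconds
i = mul_int(1_000_000, 1/3)    # 333333 microseconds

print(f * 1_000_000 - i)       # sub-microsecond residue kept only by the float build
```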
> So the argument seems academic to me ...
With UNIQUE indexes I agree completely. If nothing else, who puts a
UNIQUE index on high-precision timestamps? And the problem has existed
for a long time already; it's nothing new.
With Exclusion Constraints it's slightly less academic, and they are a new
addition. Still pretty far-fetched, but at least plausible, which is why
I brought it up.
However, I won't argue with the "don't do anything" approach to
float-timestamps.
Regards,
Jeff Davis