Re: Should we throw error when converting a nonexistent/ambiguous timestamp?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: Should we throw error when converting a nonexistent/ambiguous timestamp?
Date: 2010-03-16 01:12:53
Message-ID: 27156.1268701973@sss.pgh.pa.us
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Mon, Mar 15, 2010 at 7:50 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> I'm starting to think that maybe we should throw error in these cases
>> instead of silently doing something that's got a 50-50 chance of being
>> wrong. I'm not sure if the "assume standard time" rule is standardized,
>> but I think it might be better if we dropped it. Thoughts?

> That seems overly picky and fairly pointless to me. Generally I'm a
> big fan of the idea that obvious breakage is better than silent
> breakage, but in this case it seems highly likely that you'll still
> have silent breakage until such time as a time change rolls around.

Yes, that's true: the failure will only be apparent when a DST
transition is sufficiently close by. However, the problem with the
current behavior is that the failure isn't obvious even then ---
you might not notice the data inconsistency until much later when
it's not possible to sort things out.
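
For concreteness, here's roughly what the two cases look like at the
SQL level, using America/New_York purely as an illustrative zone (the
zone and dates here are my examples, not anything from the code):

    -- Illustration only: pick a zone with a US-style DST transition.
    SET timezone = 'America/New_York';

    -- Nonexistent local time: on 2010-03-14 the clock jumps from
    -- 02:00 to 03:00, so 02:30 never occurs on a local clock.
    -- Currently this is accepted silently instead of throwing.
    SELECT '2010-03-14 02:30:00'::timestamptz;

    -- Ambiguous local time: on 2010-11-07 the clock falls back from
    -- 02:00 to 01:00, so 01:30 occurs twice.  The "assume standard
    -- time" rule silently resolves it to the EST (later) instant.
    SELECT '2010-11-07 01:30:00'::timestamptz;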

The current code behavior seems to me to be on par with, for example,
trying to intuit MM-DD versus DD-MM field orders. We used to try to
do that, too, and gave it up as a bad idea.

regards, tom lane
