From: Thom Brown <thom(at)linux(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: PG Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Why is time with timezone 12 bytes?
Date: 2010-09-22 21:54:53
Message-ID: AANLkTimwGa6H80kvKOW17bcG8tswjXExn+vEn0cTq0Um@mail.gmail.com
Lists: pgsql-hackers
On 22 September 2010 22:01, Josh Berkus <josh(at)agliodbs(dot)com> wrote:
> All,
>
> I was just checking on our year-2027 compliance, and happened to notice
> that time with time zone takes up 12 bytes. This seems peculiar, given
> that timestamp with time zone is only 8 bytes, and at my count we only
> need 5 for the time with microsecond precision. What's up with that?
>
> Also, what is the real range of our 8-byte *integer* timestamp?
The time itself is 8 bytes: 1,000,000 microseconds * 60 seconds * 60
minutes * 24 hours = 86,400,000,000 microseconds per day, which needs
37 bits and is therefore stored as an 8-byte integer. The timezone
displacement would fit in 12 bits (1460 + 1459 + 1 = 2920 values), but
it's kept as a 4-byte integer holding the offset in seconds. So that's
8 + 4 = 12 bytes.
--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935
Next message: Tom Lane, 2010-09-22 21:57:03, "Re: Why is time with timezone 12 bytes?"
Previous message: Andrew Dunstan, 2010-09-22 21:46:48, "Re: Git conversion status"