time-nuts@lists.febo.com

Discussion of precise time and frequency measurement

Could computers represent the timestamp differently?

Peter Vince
Fri, Dec 23, 2016 1:10 PM

It seems to me that the major problem with the leap second is the
inability of current computer operating systems to represent it, and
this is due to their using a count of seconds since 1970 rather than
writing the date out as we would by hand.  While it doubtless made
sense in the days of floppy discs to squeeze the date and time into a
single 4-byte number, with modern communication speeds and storage
capacities that no longer seems to be a requirement.  The (numerical)
date and time could be packed into 24 ASCII characters, or 12 if BCD
were used.  Would it not make sense now for the next generation of
operating systems to do that?  Yes, those who need to find the elapsed
time between two timestamps would still have a problem, but isn't the
overwhelming requirement just to represent the date/time, and to be
able to show easily whether one timestamp is before or after another?
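
As a rough sketch of the idea in C (using the ISO 8601 text form for
illustration rather than committing to any particular 24-character
packing), a fixed-width ASCII timestamp can represent a leap second
directly, and a before/after check reduces to a plain string comparison:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* 23:59:60 is directly representable here, unlike in a
       pure seconds-since-1970 count. */
    const char *leap  = "2016-12-31T23:59:60Z";
    const char *after = "2017-01-01T00:00:00Z";

    /* Fixed-width, zero-padded, most-significant field first:
       lexicographic order equals chronological order. */
    if (strcmp(leap, after) < 0)
        printf("%s sorts before %s\n", leap, after);
    return 0;
}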

Peter

Chris Albertson
Fri, Dec 23, 2016 7:15 PM

Seconds from 1970 always works.  The decision to add a leap second or
not only affects the written-out time and date.

If you want to know the number of seconds between two written-out dates
then you need to know whether leap seconds were introduced.  This is
REALLY hard for distant future dates.  (Quick, someone tell me how many
seconds there are between now and January 2nd, 2367.)  But it is
trivial to say how long it will take the seconds count to increase by
one billion.

Or put another way: a seconds counter represents something about
nature, how many times some cyclic event will occur, or whatever
defines your standard "second".  But a written-out date/time is a
purely human invention, and the conversion to it from a seconds count
is entirely a convention whose details do change.  Not only is the
conversion to date/time just a convention, it is also dependent on
geophysical events we cannot predict.

So the OS counts the number of cycles emitted by some maser, or
whatever is used at the time as a standard.
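
A small C illustration of that asymmetry (assuming a 64-bit time_t so
the addition does not overflow): the counter arithmetic is exact, while
the calendar rendering is only valid "by current rules":

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);          /* needs a 64-bit time_t */
    time_t later = now + 1000000000;  /* exact on the counter */

    char buf[32];
    /* gmtime() applies today's conversion rules; the true UTC date
       this far ahead depends on leap seconds not yet decided. */
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&later));
    printf("one billion seconds from now: %s (by current rules)\n", buf);
    return 0;
}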

On Fri, Dec 23, 2016 at 5:10 AM, Peter Vince petervince1952@gmail.com
wrote:

It seems to me that the major problem with the leap-second is the inability
of current computer operating systems to represent it, and this is due to
their using a second count since 1970 rather than writing it out as we
would by hand.  While it doubtless made sense in the days of floppy discs
to squeeze the date and time into a single 4-byte number, with modern
communication speeds and storage media capacities, that no longer seems to
be a requirement.  The (numerical) date and time could be packed into 24
ASCII characters, 12 if BCD was used.  Would it not make sense now for the
next generation of operating systems to do that?  Yes, those who need to
find the elapsed time between two time-stamps would still have a problem,
but isn't the overwhelming major requirement just to represent the
date/time, and be able to easily show if one timestamp is before or after
another?

Peter



--

Chris Albertson
Redondo Beach, California

Magnus Danielson
Fri, Dec 23, 2016 7:19 PM

Peter,

There are so many ways to solve this, and most of them have already
been discussed, yet nothing has really happened.

Letting the timestamp represent time in TAI and doing UTC as
presentation is a less intrusive way of achieving the same thing.  Yet
it has not happened.  Essentially, the POSIX standard will have to be
amended, but they have been trying to avoid leap-second handling for
decades.  There is even an email list for discussing this, see the
leapsecond email list.
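
A minimal sketch of that scheme in C (the table entries below are
illustrative placeholders, not real leap-second data): the system
counts uniformly in TAI, and only the presentation layer consults a
leap-second table:

#include <stdio.h>

/* Hypothetical leap-second table: the TAI-based count at which
   TAI-UTC changed, and the new offset.  Placeholder values only. */
struct leap { long long tai; int tai_minus_utc; };
static const struct leap table[] = {
    { 1435708836LL, 36 },   /* ~July 2015 step (illustrative) */
    { 1483228837LL, 37 },   /* ~January 2017 step (illustrative) */
};

/* Display-side conversion only: stored timestamps stay uniform TAI,
   and no stored value needs rewriting when a new leap is announced. */
static long long tai_to_utc(long long tai)
{
    int off = 35;   /* assumed offset before the first table entry */
    for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++)
        if (tai >= table[i].tai)
            off = table[i].tai_minus_utc;
    return tai - off;
}

int main(void)
{
    long long tai_now = 1482000000LL;   /* made-up TAI-based count */
    printf("UTC count for display: %lld\n", tai_to_utc(tai_now));
    return 0;
}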

God Jul, Gott Nytt År och Glad Skott-sekund!
Merry Christmas, Happy New Year and Happy Leap-second!

Cheers,
Magnus

On 12/23/2016 02:10 PM, Peter Vince wrote:

It seems to me that the major problem with the leap-second is the inability
of current computer operating systems to represent it, and this is due to
their using a second count since 1970 rather than writing it out as we
would by hand.  While it doubtless made sense in the days of floppy discs
to squeeze the date and time into a single 4-byte number, with modern
communication speeds and storage media capacities, that no longer seems to
be a requirement.  The (numerical) date and time could be packed into 24
ASCII characters, 12 if BCD was used.  Would it not make sense now for the
next generation of operating systems to do that?  Yes, those who need to
find the elapsed time between two time-stamps would still have a problem,
but isn't the overwhelming major requirement just to represent the
date/time, and be able to easily show if one timestamp is before or after
another?

Peter



Attila Kinali
Fri, Dec 23, 2016 9:02 PM

On Fri, 23 Dec 2016 13:10:30 +0000
Peter Vince petervince1952@gmail.com wrote:

Would it not make sense now for the
next generation of operating systems to do that?  Yes, those who need to
find the elapsed time between two time-stamps would still have a problem,
but isn't the overwhelming major requirement just to represent the
date/time, and be able to easily show if one timestamp is before or after
another?

You don't need to wait for the next generation of operating systems
for this.  With the 2004 revision of POSIX.1 [1], we got an additional
clock source called CLOCK_MONOTONIC, which has to be monotonically
increasing, may not be set using a system call (and thus cannot be
given discontinuities that way), and may never jump backwards.  Its
starting point (epoch) may be arbitrary, though.  All current
POSIX-compliant systems (Linux, all the BSDs, Mac OS X) have had
CLOCK_MONOTONIC implemented since virtually forever.
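
For the elapsed-time problem that is the whole trick; for example (on
older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    sleep(1);                       /* the interval being measured */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* Differencing two monotonic readings is immune to clock steps,
       including leap-second steps applied to the wall clock. */
    double elapsed = (double)(t1.tv_sec - t0.tv_sec)
                     + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("elapsed: %.6f s\n", elapsed);
    return 0;
}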

On Linux (kernel 3.10 and newer) you have some additional clock
sources [2], one of which is defined as CLOCK_TAI and is exactly what
the name says.  Unfortunately, it is only set correctly if the OS
knows what TAI actually is.  By default the offset between CLOCK_TAI
and CLOCK_REALTIME (which may or may not be the same as UTC) is set to
0, but a modern ntpd can set it if it knows the offset.  I am not sure
whether any of the BSDs offers something similar.  Though, if I am not
mistaken, MirBSD chose to use TAI instead of UTC as the basis of its
internal clock, and thus should also have a way to access the time as
TAI.
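
Reading it looks like this (a sketch; it assumes a 3.10+ kernel and a
libc that defines CLOCK_TAI, e.g. glibc 2.21 or later):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec tai, rt;

    if (clock_gettime(CLOCK_TAI, &tai) != 0) {
        perror("CLOCK_TAI");
        return 1;
    }
    clock_gettime(CLOCK_REALTIME, &rt);

    /* With an unset kernel TAI offset this prints 0; once ntpd has
       told the kernel, it prints the real TAI-UTC difference
       (36 s now, 37 s after the upcoming leap second). */
    printf("TAI - REALTIME = %ld s\n", (long)(tai.tv_sec - rt.tv_sec));
    return 0;
}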

		Attila Kinali

[1] IEEE 1003.1-2004 System Interfaces, Section 2.8.5 Clocks and Timers

[2] clock_gettime(2)

--
Malek's Law:
Any simple idea will be worded in the most complicated way.
