time-nuts@lists.febo.com

Discussion of precise time and frequency measurement


GPS message jitter (was GPS for Nixie Clock)

Mark Sims
Sat, Jul 16, 2016 7:13 PM

I just added some code to Lady Heather to record and plot the time that the timing message arrived from the receiver (well, actually the time that the screen update routine was called,  maybe a few microseconds difference).    I am using my existing GetMsec() routine which on Windoze actually has around a 16 msec granularity.  The Linux version uses the Linux nanosecond clock (divided down to msec resolution).  I just started testing it on a Ublox 8M in NMEA and binary message mode...  really surprising results to come shortly...
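For reference, a minimal sketch of what such a GetMsec()-style routine might look like on Linux, assuming it wraps clock_gettime() and divides the nanosecond clock down to milliseconds (the name GetMsec comes from the message above; the body and the choice of CLOCK_REALTIME are illustrative assumptions, not Lady Heather's actual source):

    #include <stdint.h>
    #include <time.h>

    uint64_t GetMsec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);        /* nanosecond-resolution kernel clock */
        return (uint64_t)ts.tv_sec * 1000u
             + (uint64_t)ts.tv_nsec / 1000000u;    /* divide down to millisecond resolution */
    }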

Tom Van Baak
Sat, Jul 16, 2016 8:03 PM

Hi Mark,

As one example of what you'll see, scroll down to the NMEA Latency/Jitter plot at:
http://leapsecond.com/pages/MG1613S/

In that 900 sample (15 minutes) run, the mean latency was 350.2 ms with a standard deviation (jitter) of 10.7 ms. I'll dig out some other data I may have. It will be quite different depending on receiver make/model.

For this plot I made TIC measurements between the leading edge of the 1PPS and the leading edge of the start bit of the first byte of the first NMEA sentence.

BTW, on Windows use QueryPerformanceCounter if you want granularity-free millisecond or even microsecond time interval resolution.
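For example, a minimal interval measurement with QueryPerformanceCounter might look like this (an illustration of the API only, not code from Lady Heather):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);      /* counter ticks per second */
        QueryPerformanceCounter(&t0);
        Sleep(100);                            /* stand-in for the interval being measured */
        QueryPerformanceCounter(&t1);
        double us = (double)(t1.QuadPart - t0.QuadPart) * 1e6 / (double)freq.QuadPart;
        printf("elapsed: %.1f us\n", us);      /* microsecond-level resolution, no 16 ms granularity */
        return 0;
    }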

/tvb

----- Original Message -----
From: "Mark Sims" holrum@hotmail.com
To: time-nuts@febo.com
Sent: Saturday, July 16, 2016 12:13 PM
Subject: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

I just added some code to Lady Heather to record and plot the time that the timing message arrived from the receiver (well, actually the time that the screen update routine was called,  maybe a few microseconds difference).    I am using my existing GetMsec() routine which on Windoze actually has around a 16 msec granularity.  The Linux version uses the Linux nanosecond clock (divided down to msec resolution).  I just started testing it on a Ublox 8M in NMEA and binary message mode...  really surprising results to come shortly...

David J Taylor
Sun, Jul 17, 2016 6:31 AM

From: Mark Sims

I just added some code to Lady Heather to record and plot the time that the
timing message arrived from the receiver (well, actually the time that the
screen update routine was called,  maybe a few microseconds difference).
I am using my existing GetMsec() routine which on Windoze actually has
around a 16 msec granularity.  The Linux version uses the Linux nanosecond
clock (divided down to msec resolution).  I just started testing it on a
Ublox 8M in NMEA and binary message mode...  really surprising results to
come shortly...


Mark,

Thanks for those updates.

For Windows lower than 8, turn on the high-resolution timer and you can get
millisecond level (0.977 ms IIRC).  You can also use the
QueryPerformanceCounter and related functions for better precision.

https://msdn.microsoft.com/en-us/library/windows/desktop/ms644904(v=vs.85).aspx

However, for current Windows (8, 8.1, 10) the situation is much better, as you
can get 100 ns precision using the new GetSystemTimePreciseAsFileTime call:

https://msdn.microsoft.com/en-us/library/windows/desktop/hh706895(v=vs.85).aspx
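A minimal sketch of that call, assuming Windows 8 or later; it fills a FILETIME in 100 ns units since 1601-01-01 UTC (illustration only):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        FILETIME ft;
        ULARGE_INTEGER t;
        GetSystemTimePreciseAsFileTime(&ft);   /* highest-resolution system time, 100 ns units */
        t.LowPart  = ft.dwLowDateTime;
        t.HighPart = ft.dwHighDateTime;
        printf("100 ns ticks since 1601-01-01 UTC: %llu\n",
               (unsigned long long)t.QuadPart);
        return 0;
    }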

I wrote up a little more here:

http://www.satsignal.eu/ntp/TSCtime.html

based on:

http://www.lochan.org/2005/keith-cl/useful/win32time.html

I look forward to your results.

Cheers,
David

SatSignal Software - Quality software written to your requirements
Web: http://www.satsignal.eu
Email: david-taylor@blueyonder.co.uk
Twitter: @gm8arv

Scott Stobbe
Mon, Jul 18, 2016 3:51 PM

I suppose it is one of those cases where the GPS designers decided you
shouldn't ever use the serial data for sub-second timing, and consequently
spent no effort on serial latency and jitter.

Most UARTs I have come across have been synthesized with a 16x baud clock
and include flow control. It would not have been too much effort to spec
latency as some mean μ ± 100 ns and jitter of ±1/(16*baud).

For 9600 baud, the jitter on the start bit would be ±6.5 us.

If CTS were resampled at 1 full bit time (9600 baud), the jitter would
be ±104 us.
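A quick check of those bounds (assuming a jitter of ±1/(16·baud) for the 16x sample clock and ±1 bit time when CTS is resampled at the baud rate):

    #include <stdio.h>

    int main(void)
    {
        double baud = 9600.0;
        printf("16x sample clock: +/- %.1f us\n", 1e6 / (16.0 * baud));  /* ~6.5 us */
        printf("1 full bit time:  +/- %.1f us\n", 1e6 / baud);           /* ~104 us */
        return 0;
    }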

On Sat, Jul 16, 2016 at 3:13 PM, Mark Sims holrum@hotmail.com wrote:

I just added some code to Lady Heather to record and plot the time that
the timing message arrived from the receiver (well, actually the time that
the screen update routine was called,  maybe a few microseconds
difference).    I am using my existing GetMsec() routine which on Windoze
actually has around a 16 msec granularity.  The Linux version uses the
Linux nanosecond clock (divided down to msec resolution).  I just started
testing it on a Ublox 8M in NMEA and binary message mode...  really
surprising results to come shortly...



David J Taylor
Mon, Jul 18, 2016 4:19 PM

I suppose it is one of those cases where, the GPS designers decided you
shouldn't ever use the serial data for sub-second timing, and consequently
spent no effort on serial latency and jitter.

Most UARTs I have come across have been synthesized with a 16x baud clock
and included flow control. It would not have been too much effort to spec
latency as some mu ±100 ns and jitter of ±1/(16*baud).

For 9600 baud, the jitter on the start bit would be ±6.5 us.

If CTS was resampled a 1 full bit time (9600 baud), the jitter would
be ±104 us.

Scott,

You're right about the design priorities (we have had to take Garmin to
task on this, though they did fix the problem), but the UART is not the
major problem; it is that the tiny CPU inside takes a variable amount
of time to have the serial data ready.  We're talking tens, possibly
hundreds of milliseconds of peak-to-peak jitter.

Cheers,
David

SatSignal Software - Quality software written to your requirements
Web: http://www.satsignal.eu
Email: david-taylor@blueyonder.co.uk
Twitter: @gm8arv

Bob Camp
Mon, Jul 18, 2016 5:40 PM

Hi

On Jul 18, 2016, at 12:19 PM, David J Taylor david-taylor@blueyonder.co.uk wrote:

I suppose it is one of those cases where, the GPS designers decided you
shouldn't ever use the serial data for sub-second timing, and consequently
spent no effort on serial latency and jitter.

Most UARTs I have come across have been synthesized with a 16x baud clock
and included flow control. It would not have been too much effort to spec
latency as some mu ±100 ns and jitter of ±1/(16*baud).

For 9600 baud, the jitter on the start bit would be ±6.5 us.

If CTS was resampled a 1 full bit time (9600 baud), the jitter would
be ±104 us.

Scott,

You're right about the design priorities (and we have had to take Garmin to task on this, but they did fix the problem), but it's not the UART which is the major problem, but that the tiny CPU inside is taking a variable amount of time to have the serial data ready.  We're talking tens, possibly hundreds of milliseconds peak-to-peak jitter.

….. but ….

It’s been a long time since 9600 baud was a fast baud rate. It is pretty common these
days to run at 115K baud on something like this. Indeed a number of GPS modules will only
run at that speed or faster if you want the full feature set to work. Most modern modules
will run much faster than 115K if you want them to. The simple fact that they need the higher
baud rate to get all the data out forces a better serial i/o approach in a modern module.

In order for sawtooth correction to work, the relation of the serial message to the pps
needs to be pretty well defined. It is either talking about the next pps or about the
prior pps edge. If it is ambiguous relative to the pps, you cannot be sure what it
is relating to.
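As a simple illustration of why that matters: applying the correction is just an addition, but only if you know which PPS edge the reported value belongs to. The function and field names below are assumptions for illustration, not any specific receiver's protocol:

    /* Illustrative only: correct a timestamped PPS edge with the quantization
     * error reported in the serial message.  Whether corr_ns applies to the
     * prior or the next PPS edge depends on the receiver -- check the manual. */
    double corrected_pps_ns(double measured_edge_ns, double corr_ns)
    {
        return measured_edge_ns + corr_ns;
    }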

If the module has a pps out and has sawtooth correction (or uses the same code base
as one that does), the serial timing string is not going to be all over the place. They no
longer are running itty bitty CPUs in these things. ARMs running at >= 400 MHz
are the typical approach these days.  Running out of clock cycles to get it all
done went away at least 5 years ago, and more like 10 years for the “usual suspects” that
you see in timing applications.

Can you still find a 20 or 30 year old module on eBay that has issues? Sure you can. It’s
not what I would call a modern part, even if it is being sold as “new in box”. Can you find
modules that simply do not keep time at all? Sure you can. That’s not the serial port’s fault.
It’s the fact that that specific module is broke. Don’t use that one, move on.

Bob

Cheers,
David

SatSignal Software - Quality software written to your requirements
Web: http://www.satsignal.eu
Email: david-taylor@blueyonder.co.uk
Twitter: @gm8arv



Scott Stobbe
Mon, Jul 18, 2016 5:44 PM

I am happy to hear your issue was resolved. What I meant to say is that the
problem could also be mitigated using the UART's flow control; this could
be done by the original GPS designers, or by an end user if the CTS line is
pinned out. Gate the UART with a conservative delay, say 500 ms from the
time mark or PPS signal; the serial string would just sit in the transmit
buffer until the fixed delay expires and the UART starts transmitting.
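A rough sketch of that gating scheme, assuming a small external timer or MCU sits between the 1PPS line and the receiver's CTS pin; the function names are hypothetical hardware-abstraction calls, not any vendor's API:

    /* Hypothetical hold-off controller: keep the GPS receiver's CTS de-asserted
     * for a fixed delay after each 1PPS edge, so the queued sentence starts
     * transmitting on the timer edge rather than whenever the receiver CPU gets
     * around to it. */

    #define HOLDOFF_MS 500           /* conservative delay from 1PPS to "clear to send" */

    extern void assert_cts(void);    /* allow the receiver to transmit */
    extern void deassert_cts(void);  /* hold the receiver's output in its TX buffer */
    extern void start_oneshot_ms(unsigned ms);

    void pps_edge_isr(void)          /* fires on each 1PPS leading edge */
    {
        deassert_cts();              /* block transmission; sentence waits in the buffer */
        start_oneshot_ms(HOLDOFF_MS);
    }

    void oneshot_expired_isr(void)
    {
        assert_cts();                /* start of the serial burst is now timer-defined */
    }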

On Mon, Jul 18, 2016 at 12:19 PM, David J Taylor <
david-taylor@blueyonder.co.uk> wrote:

I suppose it is one of those cases where, the GPS designers decided you
shouldn't ever use the serial data for sub-second timing, and consequently
spent no effort on serial latency and jitter.

Most UARTs I have come across have been synthesized with a 16x baud clock
and included flow control. It would not have been too much effort to spec
latency as some mu ±100 ns and jitter of ±1/(16*baud).

For 9600 baud, the jitter on the start bit would be ±6.5 us.

If CTS was resampled a 1 full bit time (9600 baud), the jitter would
be ±104 us.

Scott,

You're right about the design priorities (and we have had to take Garmin
to task on this, but they did fix the problem), but it's not the UART which
is the major problem, but that the tiny CPU inside is taking a variable
amount of time to have the serial data ready.  We're talking tens, possibly
hundreds of milliseconds peak-to-peak jitter.

Cheers,
David

SatSignal Software - Quality software written to your requirements
Web: http://www.satsignal.eu
Email: david-taylor@blueyonder.co.uk
Twitter: @gm8arv



jimlux
Mon, Jul 18, 2016 6:43 PM

On 7/18/16 8:51 AM, Scott Stobbe wrote:

I suppose it is one of those cases where, the GPS designers decided you
shouldn't ever use the serial data for sub-second timing, and consequently
spent no effort on serial latency and jitter.

Most UARTs I have come across have been synthesized with a 16x baud clock
and included flow control. It would not have been too much effort to spec
latency as some mu ±100 ns and jitter of ±1/(16*baud).

For 9600 baud, the jitter on the start bit would be ±6.5 us.

If CTS was resampled a 1 full bit time (9600 baud), the jitter would
be ±104 us.

except that virtually every UART in use today has some sort of buffering
(whether a FIFO or double buffering) between the CPU interface and the
bits on the wire, which completely desynchronizes the bits on the wire
from the CPU interface.

Determinism in UART timing between the CPU bus interface and the "bits
on the wire" has never been something that is specified.  You can go
back to venerable parts like the 8251, and there's no spec in the data
sheet.
(There's a tCR specified as 16 tCY for the read setup time from CTS*,
DSR* to READ* assert, tSRX (2 usec min) and tHRX (2 usec min) for
the setup and hold of the internal sampling pulse relative to RxD, and
20 tCY as a max from center of stop bit to RxRDY, plus whatever the
delay is from the internal RxRDY to the bus read.)

There's "what we observed in a running circuit" or "what we inferred
from knowing the internal design".

Since a huge number of serial ports these days are implemented with a
USB interface, the timing uncertainty is even greater, because you're
dealing with the 8kHz frame timing on USB.

This is why PTP compatible interfaces added time tagging to the PHY layer.

David
Mon, Jul 18, 2016 7:35 PM

On Mon, 18 Jul 2016 11:43:32 -0700, you wrote:

except that virtually every UART in use today has some sort of buffering
(whether a FIFO or double buffering) between the CPU interface and the
bits on the wire, which completely desynchronizes the bits on the wire
from the CPU interface.

Determinism in UART timing between the CPU bus interface and the "bits
on the wire" has never been something that is specified.  You can go
back to venerable parts like the 8251, and there's no spec in the data
sheet.
( there's a tCR specified as 16 tCY for the read setup time from CTS*,
DSR* to READ* assert.  And tSRX (2 usec min) and tHRX (2 usec min) for
the setup and hold of the internal sampling pulse relative to RxD. And
20 tCY as a max from center of stop bit to RxRDY, and then whatever the
delay is from the internal RxRDY to the bus read)

Long ago I remember seeing a circuit design or application note using
an 8250 or similar where the UART start bit was gated so that the
leading edge could be used for precision synchronization.

Scott Stobbe
Mon, Jul 18, 2016 8:44 PM

Well, I suppose in the case of USB, the host hardware (a consumer PC) is not
going to have any special hardware. But if a GPS receiver implements a USB
interface, then in addition to standard NMEA data it could also report the
phase and frequency error of your USB clock (since it has to recover it
anyway to get the USB data).

I don't have the answer here, but USB-audio ICs suffer similar problems in
clock distribution. The gist of it seems to be locking onto the host's SOF
packets. See "Programmable Clock Generation and Synchronization for USB Audio
Systems": http://www.cypress.com/file/122521/download

On Mon, Jul 18, 2016 at 2:43 PM, jimlux jimlux@earthlink.net wrote:

On 7/18/16 8:51 AM, Scott Stobbe wrote:

I suppose it is one of those cases where, the GPS designers decided you
shouldn't ever use the serial data for sub-second timing, and consequently
spent no effort on serial latency and jitter.

Most UARTs I have come across have been synthesized with a 16x baud clock
and included flow control. It would not have been too much effort to spec
latency as some mu ±100 ns and jitter of ±1/(16*baud).

For 9600 baud, the jitter on the start bit would be ±6.5 us.

If CTS was resampled a 1 full bit time (9600 baud), the jitter would
be ±104 us.

except that virtually every UART in use today has some sort of buffering
(whether a FIFO or double buffering) between the CPU interface and the bits
on the wire, which completely desynchronizes the bits on the wire from the
CPU interface.

Determinism in UART timing between the CPU bus interface and the "bits on
the wire" has never been something that is specified.  You can go back to
venerable parts like the 8251, and there's no spec in the data sheet.
( there's a tCR specified as 16 tCY for the read setup time from CTS*,
DSR* to READ* assert.  And tSRX (2 usec min) and tHRX (2 usec min) for the
setup and hold of the internal sampling pulse relative to RxD. And 20 tCY
as a max from center of stop bit to RxRDY, and then whatever the delay is
from the internal RxRDY to the bus read)

There's "what we observed in a running circuit" or "what we inferred from
knowing the internal design".

Since a huge number of serial ports these days are implemented with a USB
interface, the timing uncertainty is even greater, because you're dealing
with the 8kHz frame timing on USB.

This is why PTP compatible interfaces added time tagging to the PHY layer.


