time-nuts@lists.febo.com

Discussion of precise time and frequency measurement


Re: [time-nuts] WWV receivers?

Hal Murray
Sat, Oct 29, 2016 6:01 AM

nsayer@kfu.com said:
> That single-chip version is going to have a *LOT* less (and less variable)
> latency than an SDR.

Latency isn't an issue as long as it is known so that you can correct for it.

Has anybody measured the jitter through SDR and/or tried to reduce it?  I'd
expect that even if you counted cycles and such there would still be jitter
from not being able to reproduce cache misses and interrupts.

--
These are my opinions.  I hate spam.

Attila Kinali
Sat, Oct 29, 2016 11:49 AM

On Fri, 28 Oct 2016 23:01:52 -0700
Hal Murray <hmurray@megapathdsl.net> wrote:

> nsayer@kfu.com said:
> > That single-chip version is going to have a *LOT* less (and less variable)
> > latency than an SDR.
>
> Latency isn't an issue as long as it is known so that you can correct for it.
>
> Has anybody measured the jitter through SDR and/or tried to reduce it?  I'd
> expect that even if you counted cycles and such there would still be jitter
> from not being able to reproduce cache misses and interrupts.

Should not be too high. If Jeff Sherman and Robert Jördens' paper[1]
is any indication, then the jitter should be dominated by the jitter
of the ADC and its reference oscillator, so sub-ps (order of 100 fs)
jitter should be possible with proper design. Long-term drift is
another issue, and I have not completely figured out what the
contributors are there. Temperature stabilization certainly helps,
but it doesn't seem to be the only effect.
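
As a back-of-the-envelope sketch of where numbers like that come from
(the SNR, input frequency, and averaging depth below are illustrative
assumptions, not figures from the paper): the single-sample timing
jitter implied by an ADC's SNR on a sine input, and how averaging many
samples beats it down toward the 100 fs regime:

    import math

    def adc_timing_jitter(snr_db, f_in_hz):
        # Slope-limited sine input: sigma_t = 10**(-SNR/20) / (2*pi*f_in)
        return 10 ** (-snr_db / 20.0) / (2 * math.pi * f_in_hz)

    sigma_1 = adc_timing_jitter(70.0, 10e6)   # ~5 ps per raw sample
    n_avg = 10_000                            # samples averaged per phase estimate
    sigma_n = sigma_1 / math.sqrt(n_avg)      # white-noise limit: ~50 fs
    print(f"per-sample: {sigma_1 * 1e12:.1f} ps, "
          f"averaged: {sigma_n * 1e15:.0f} fs")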

		Attila Kinali

[1] "Oscillator metrology with software defined radio",
by Jeff Sherman and Robert Jörden, 2016
http://dx.doi.org/10.1063/1.4950898
http://arxiv.org/abs/1605.03505

--
Malek's Law:
Any simple idea will be worded in the most complicated way.

Poul-Henning Kamp
Sat, Oct 29, 2016 11:59 AM

In message <20161029134952.e60a2182e1f53844ec50bb0b@kinali.ch>, Attila Kinali writes:

>> nsayer@kfu.com said:
>> > That single-chip version is going to have a *LOT* less (and less variable)
>> > latency than an SDR.
>>
>> Latency isn't an issue as long as it is known so that you can correct for it.
>>
>> Has anybody measured the jitter through SDR and/or tried to reduce it?  I'd
>> expect that even if you counted cycles and such there would still be jitter
>> from not being able to reproduce cache misses and interrupts.
>
> Should not be too high.

It should be nonexistent.

The sensible way to do SDR timing is to capture a signal from the
disciplined oscillator along with the ADC samples, so that their precise
timing relationship is firmly bolted down.
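
A minimal sketch of that idea, assuming a 1 MHz tone from the
disciplined oscillator is summed into (or fed to a spare channel of) the
same ADC capture; measuring the tone's phase in the captured samples
pins the sample grid to the oscillator:

    import numpy as np

    fs = 10e6               # ADC sample rate (assumption)
    f_ref = 1e6             # disciplined-oscillator tone (assumption)
    n = 100_000             # exactly 10,000 tone periods in the capture
    t = np.arange(n) / fs
    offset_true = 47.7e-9   # unknown grid-vs-oscillator offset to recover
    ref = np.cos(2 * np.pi * f_ref * (t + offset_true)) \
          + 0.01 * np.random.randn(n)

    # IQ correlation against the nominal reference recovers its phase,
    # which ties the ADC sample grid to the disciplined oscillator.
    i = np.dot(ref, np.cos(2 * np.pi * f_ref * t))
    q = np.dot(ref, -np.sin(2 * np.pi * f_ref * t))
    offset_est = np.arctan2(q, i) / (2 * np.pi * f_ref)
    print(f"recovered offset: {offset_est * 1e9:.2f} ns "
          f"(true {offset_true * 1e9:.2f} ns)")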

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

jimlux
Sat, Oct 29, 2016 4:32 PM

On 10/28/16 11:01 PM, Hal Murray wrote:

> nsayer@kfu.com said:
>> That single-chip version is going to have a *LOT* less (and less variable)
>> latency than an SDR.
>
> Latency isn't an issue as long as it is known so that you can correct for it.
>
> Has anybody measured the jitter through SDR and/or tried to reduce it?  I'd
> expect that even if you counted cycles and such there would still be jitter
> from not being able to reproduce cache misses and interrupts.

Since I make my living doing SDRs with very stringent output-vs-input
timing requirements --

This is a little bit like discussing "what is hard real-time software?"

First, distinguish between an SDR where the signal processing chain is
implemented in a reprogrammable FPGA (hence, "soft") and one where the
signal processing chain is implemented in a general-purpose processor
(e.g. GNU Radio).

Having low jitter in an FPGA implementation isn't hard, down to the
basic timing resolution of the FPGA: the clock rate, etc.  That gets you
into questions of jitter relative to what: if I feed in a highly
accurate 1 PPS and use that as the sampling clock, then the jitter
relative to the 1 PPS will be quite small (if not zero); if I register
that 1 PPS with a 10 MHz clock that is asynchronous to the 1 PPS, then
I've bumped the jitter to 100 ns.  There has been plenty of discussion
on this list about hardware-level jitter (which is what FPGA
implementations really are).
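
A quick Monte Carlo of that registration error (10 MHz clock as stated
above; uniform arrival within a cycle is the usual model for an
asynchronous edge):

    import numpy as np

    f_clk = 10e6
    period = 1.0 / f_clk                     # 100 ns clock period
    # Asynchronous 1 PPS edge lands anywhere within a clock cycle...
    arrivals = np.random.uniform(0.0, period, 1_000_000)
    # ...and is recognized on the next rising edge.
    err = period - arrivals
    print(f"p-p {np.ptp(err) * 1e9:.1f} ns, rms {err.std() * 1e9:.1f} ns")
    # ~100 ns peak-to-peak, ~28.9 ns rms (period / sqrt(12))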

In the more traditional case of software running on a CPU, it depends a
lot on your design architecture: if your hardware time-stamps the
samples, and the "output" of the system has a hardware controller that
can read time stamps, then you can have a system with large latency but
small jitter.  You put some FIFOs in the system to accommodate the
processing-time variation, and the latency is determined from hardware
tick in to hardware tick out.
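
A toy simulation of that architecture (all numbers are made up for
illustration): as long as the variable processing time never exceeds the
chosen pipeline delay, the FIFO absorbs the variation and the
input-to-output delay is exactly constant:

    import numpy as np

    fixed_delay = 5e-3                                 # chosen pipeline latency
    t_in = np.arange(1000) * 1e-3                      # hardware input ticks, 1 kHz
    proc = np.random.uniform(0.1e-3, 3e-3, t_in.size)  # variable processing time
    ready = t_in + proc                                # result lands in the FIFO
    t_out = np.maximum(ready, t_in + fixed_delay)      # released on scheduled tick
    underruns = np.count_nonzero(ready > t_in + fixed_delay)
    print(f"underruns: {underruns}, "
          f"output jitter: {np.std(t_out - t_in):.1e} s")   # jitter -> 0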

If you're talking about something like GNU Radio, it has almost no
provision for latency control: it's a non-deterministic processing
chain where data is passed from block to block, like a signal flow
diagram.  What control there is is limited by the host OS being able to
"keep up".  Implementing closed-loop systems in GNU Radio is very
tricky as a result; likewise, implementing something like a coherent
transponder (where the output signal has to have a precise and
consistent phase relationship with the input signal) with GNU Radio
would be tough.

Somewhere in between are various "real time" implementation approaches:
if your processor has a periodic interrupt capability, and the CPU
clock is driven from the same clock as everything else (e.g. it's a soft
core in an FPGA), then you're back in the "hardware controlled" area: if
I get an interrupt every millisecond, counted down from my 100 MHz
master clock, then I can make a pretty low-jitter system with 1 kHz
sample rates.  Most processors have fairly deterministic interrupt
handling time if the interrupt is high enough priority.  And you keep
your interrupt code very simple and short:

my_isr:
    load register from ValueFromMemory   ; value precomputed outside the ISR
    output register to I/O address       ; one deterministic store to the port
    return from interrupt

You might also need to do some things like keep the value in a register
all the time (so you don't get a variable time on the fetch).

Would this work on a modern x86 with pipelines, speculative execution,
multithreaded CPUs, etc.?  I don't know enough about how the interrupt
logic works: is code executing in an ISR running in a mode where the CPU
has all the fancy bells and whistles turned off?

jimlux
Sat, Oct 29, 2016 4:35 PM

On 10/29/16 4:49 AM, Attila Kinali wrote:

> On Fri, 28 Oct 2016 23:01:52 -0700
> Hal Murray <hmurray@megapathdsl.net> wrote:
>
>> nsayer@kfu.com said:
>>> That single-chip version is going to have a *LOT* less (and less variable)
>>> latency than an SDR.
>>
>> Latency isn't an issue as long as it is known so that you can correct for it.
>>
>> Has anybody measured the jitter through SDR and/or tried to reduce it?  I'd
>> expect that even if you counted cycles and such there would still be jitter
>> from not being able to reproduce cache misses and interrupts.
>
> Should not be too high. If Jeff Sherman and Robert Jördens' paper[1]
> is any indication, then the jitter should be dominated by the jitter
> of the ADC and its reference oscillator, so sub-ps (order of 100 fs)
> jitter should be possible with proper design. Long-term drift is
> another issue, and I have not completely figured out what the
> contributors are there. Temperature stabilization certainly helps,
> but it doesn't seem to be the only effect.

Well, that's "jitter in the original samples", which can be very low, as
you describe.  But I would interpret the original question as "jitter
*through* an SDR", which implies that we're looking at the timing of
output vs input.

Consider an SDR which receives an RF signal that's BPSK modulated and
puts out a stream of data bits on a wire (as opposed to dumping into a
file or network connection), and you want to look at an eye diagram of
the output.
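
A small sketch of measuring that, eye-diagram style (the bit rate and
jitter figures below are arbitrary): fold the output transition times
onto one bit period and look at their spread around the nominal
crossing:

    import numpy as np

    bit_period = 1e-6                           # 1 Mbit/s (assumption)
    nominal = np.arange(10_000) * bit_period
    # Toy output edges with 20 ns rms timing jitter:
    edges = nominal + np.random.normal(0.0, 20e-9, nominal.size)

    folded = np.mod(edges, bit_period)          # fold onto one bit period
    folded[folded > bit_period / 2] -= bit_period   # center the crossing at 0
    print(f"eye-crossing jitter: {folded.std() * 1e9:.1f} ns rms")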

> 		Attila Kinali
>
> [1] "Oscillator metrology with software defined radio",
> by Jeff A. Sherman and Robert Jördens,
> Review of Scientific Instruments 87, 054711 (2016).
> http://dx.doi.org/10.1063/1.4950898
> http://arxiv.org/abs/1605.03505

Attila Kinali
Sat, Oct 29, 2016 7:07 PM

On Sat, 29 Oct 2016 09:35:25 -0700
jimlux <jimlux@earthlink.net> wrote:

> > Should not be too high. If Jeff Sherman and Robert Jördens' paper[1]
> > is any indication, then the jitter should be dominated by the jitter
> > of the ADC and its reference oscillator, so sub-ps (order of 100 fs)
> > jitter should be possible with proper design. Long-term drift is
> > another issue, and I have not completely figured out what the
> > contributors are there. Temperature stabilization certainly helps,
> > but it doesn't seem to be the only effect.
>
> Well, that's "jitter in the original samples", which can be very low, as
> you describe.  But I would interpret the original question as "jitter
> *through* an SDR", which implies that we're looking at the timing of
> output vs input.

Oh, yes... The whole latency into the PC is a whole different game.
I don't know the numbers for SDRs, but for soundcards that delay jitter
is usually in the couple-hundred-microsecond range, i.e. way lower than
most people would notice.  But this is only true if the OS reports the
buffer sizes correctly.  On Linux that means no PulseAudio, as it is
known to mess up the buffer reporting completely, to the point where it
was off by tens of milliseconds.

I don't know what the numbers under Windows are, but as I have never
heard of any problems there, it might just work correctly out of the
box.

Those I know who do precision timing with SDR usually use the
timestamping facilities on the SDR hardware and process those
timestamps within GNU Radio.
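
For reference, a minimal sketch of that approach as a GNU Radio Python
block, assuming gr-uhd's documented 'rx_time' stream-tag convention (a
pmt tuple of integer seconds and fractional seconds attached by the
USRP); it passes samples through and prints each hardware timestamp:

    import numpy as np
    import pmt
    from gnuradio import gr

    class print_rx_time(gr.sync_block):
        def __init__(self):
            gr.sync_block.__init__(self, name="print_rx_time",
                                   in_sig=[np.complex64],
                                   out_sig=[np.complex64])

        def work(self, input_items, output_items):
            n = len(input_items[0])
            # Scan the tags that arrived with this window of samples.
            for tag in self.get_tags_in_window(0, 0, n):
                if pmt.symbol_to_string(tag.key) == "rx_time":
                    secs = pmt.to_uint64(pmt.tuple_ref(tag.value, 0))
                    frac = pmt.to_double(pmt.tuple_ref(tag.value, 1))
                    print(f"sample {tag.offset}: "
                          f"hardware time {secs + frac:.9f} s")
            output_items[0][:] = input_items[0]   # pass-through
            return n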

			Attila Kinali

--
Malek's Law:
Any simple idea will be worded in the most complicated way.

Chris Albertson
Sat, Oct 29, 2016 8:56 PM

There is zero jitter through the SDR software because you can always
buffer the output and then reclock it on output; all you have to deal
with is a known fixed delay.  If the samples are clocked in accurately,
that is all you need.

Some audio interfaces can have very good timing and run off an external
reference oscillator, but those are typically found in professional
studios.  (Some studios have coax or fiber frequency distribution.)
The typical home-studio audio interface that sells for under $200 uses
a four-pin oscillator.

The bigger question is propagation.

On Sat, Oct 29, 2016 at 12:07 PM, Attila Kinali <attila@kinali.ch> wrote:

> On Sat, 29 Oct 2016 09:35:25 -0700
> jimlux <jimlux@earthlink.net> wrote:
>
> > > Should not be too high. If Jeff Sherman and Robert Jördens' paper[1]
> > > is any indication, then the jitter should be dominated by the jitter
> > > of the ADC and its reference oscillator, so sub-ps (order of 100 fs)
> > > jitter should be possible with proper design. Long-term drift is
> > > another issue, and I have not completely figured out what the
> > > contributors are there. Temperature stabilization certainly helps,
> > > but it doesn't seem to be the only effect.
> >
> > Well, that's "jitter in the original samples", which can be very low, as
> > you describe.  But I would interpret the original question as "jitter
> > *through* an SDR", which implies that we're looking at the timing of
> > output vs input.
>
> Oh, yes... The whole latency into the PC is a whole different game.
> I don't know the numbers for SDRs, but for soundcards that delay jitter
> is usually in the couple-hundred-microsecond range, i.e. way lower than
> most people would notice.  But this is only true if the OS reports the
> buffer sizes correctly.  On Linux that means no PulseAudio, as it is
> known to mess up the buffer reporting completely, to the point where it
> was off by tens of milliseconds.
>
> I don't know what the numbers under Windows are, but as I have never
> heard of any problems there, it might just work correctly out of the
> box.
>
> Those I know who do precision timing with SDR usually use the
> timestamping facilities on the SDR hardware and process those
> timestamps within GNU Radio.
>
> 			Attila Kinali
> --
> Malek's Law:
> Any simple idea will be worded in the most complicated way.
> _______________________________________________
> time-nuts mailing list -- time-nuts@febo.com
> To unsubscribe, go to
> https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
> and follow the instructions there.

--

Chris Albertson
Redondo Beach, California
