SC
Stewart Cobb
Sat, Oct 29, 2016 11:38 PM
What's the expected value of ADEV at tau = 1 s for time-interval
measurements quantized at 1 ns?
This question can probably be answered from pure theory (by someone more
mathematical than me), but it arises from a very practical situation. I
have several HP5334B counters comparing PPS pulses from various devices.
The HP5334B readout is quantized at 1 ns, and the spec sheet (IIRC) also
gives the instrument accuracy as 1 ns.
The devices under test are relatively stable. Their PPS pulses are all
within a few microseconds of each other but uncorrelated. They are stable
enough that the dominant error source on the ADEV plot out to several
hundred seconds is the 1 ns quantization of the counter. The plots all
start near 1 ns and follow a -1 slope down to the point where the
individual device characteristics start to dominate the counter
quantization error.
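A minimal numpy sketch of this behavior (my own illustration, assuming, as
discussed later in the thread, that the 1 ns quantization error acts as
uniform white phase noise over +/-0.5 ns):

import numpy as np

def adev(x, m, tau0=1.0):
    # overlapping Allan deviation at tau = m*tau0 from phase data x (seconds)
    d = x[2*m:] - 2*x[m:-m] + x[:-2*m]
    return np.sqrt(0.5 * np.mean(d * d)) / (m * tau0)

rng = np.random.default_rng(0)
x = rng.uniform(-0.5e-9, 0.5e-9, 200_000)  # quantization error as uniform white PM
for m in (1, 10, 100):
    print(m, adev(x, m))                   # ~0.5e-9/m: the -1 slope described above

Under this idealized assumption the floor starts at 0.5 ns rather than 1 ns;
which value one should actually expect is exactly the question below.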
One might expect that the actual ADEV value in this situation would be
exactly 1 ns at tau = 1 second. Values of 0.5 ns or sqrt(2)/2 ns might not
be surprising. My actual measured value is about 0.65 ns, which does not
seem to have an obvious explanation. This brings to mind various questions:
What is the theoretical ADEV value of a perfect time-interval measurement
quantized at 1 ns? What's the effect of an imperfect measurement
(instrument errors)? Can one use this technique in reverse to sort
instruments by their error contributions, or to tune up an instrument
calibration?
I'd be grateful for answers to any of these questions.
BTW, thanks to whichever time-nuts recommended the HP5334B, back in the
archives; they're perfect for what I'm doing. And thanks to fellow time-nut
Rick Karlquist for his part in designing them.
Cheers!
--Stu
BS
Bob Stewart
Sun, Oct 30, 2016 12:20 AM
Perfect? I can't tell you that. But I can tell you that the 1 s ADEV I can measure is limited by the stability of the reference oscillator, and by the resolution and stability of the measuring device. For example, I have an HP 5370A TIC. It's good to about +/- 20 ps, so that's the lower limit on the 1 s ADEVs that I can measure.
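A hedged back-of-envelope on that floor (my sketch, using the sqrt(3) x sigma
white-PM relation that comes up later in the thread):

import numpy as np
sigma = 20e-12                 # the 5370A's ~20 ps, assumed to act as white PM
print(np.sqrt(3) * sigma)      # ~3.5e-11: the implied ADEV(1 s) measurement floor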
Bob -----------------------------------------------------------------
AE6RV.com
GFS GPSDO list:
groups.yahoo.com/neo/groups/GFS-GPSDOs/info
MW
Michael Wouters
Sun, Oct 30, 2016 1:59 AM
Dear Stewart
In a perfect world, your TI measurements would have a uniform
probability distribution extending +/-0.5 ns about a mean value of
0 ns. At least, this is the kind of PDF you would assume for
"resolution error". For this distribution, ADEV at 1 s is 0.5 ns.
I don't know the HP5334B; perhaps its effective resolution is a bit
poorer than 0.5 ns, which I assume is the displayed resolution?
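A quick numerical check of the 0.5 ns figure (a sketch; the uniform
+/-0.5 ns PDF is the assumption above, and the sqrt(3) x sigma white-PM
relation is derived later in the thread):

import numpy as np
sigma = 1e-9 / np.sqrt(12)            # std dev of a uniform distribution over a 1 ns bin
print(np.sqrt(3) * sigma)             # exactly 0.5e-9
rng = np.random.default_rng(0)
x = rng.uniform(-0.5e-9, 0.5e-9, 100_000)
d = x[2:] - 2 * x[1:-1] + x[:-2]      # second differences at tau = 1 s
print(np.sqrt(0.5 * np.mean(d * d)))  # ~5.0e-10 by simulation as well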
BTW, one way to remove instability of the DUTs from this kind of test
is to reference the counter to 5/10 MHz and then do your TI
measurement with the same 5/10 MHz input to each channel of the
counter.
Cheers
Michael
BC
Bob Camp
Sun, Oct 30, 2016 2:17 AM
Hi
Well, situation one:
You have two perfect sources.
Your measuring device is noiseless
If your devices are in perfect sync, you get a series of zeros
Your ADEV is zero
Situation two:
Same sources
Noisy measuring device
You get the standard deviation of the difference in measurements
Your ADEV is simply a measure of the noise of the measuring device
Situation three:
Your sources are much worse than 1x10^-9 at 1 second
Your ADEV is the proper number for your sources (or close to it)
Situation four:
In the real world, you have a bit of each, and you really don't know
what is what.
Lots of possibilities and no single answer.
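One hedged way to put numbers on situation four (a numpy sketch with assumed
noise levels): if source noise and instrument noise are independent white PM,
their contributions to the measured ADEV add roughly in quadrature.

import numpy as np

def adev1(x):
    # overlapping Allan deviation at tau = 1 s from 1 Hz phase data (seconds)
    d = x[2:] - 2 * x[1:-1] + x[:-2]
    return np.sqrt(0.5 * np.mean(d * d))

rng = np.random.default_rng(0)
n = 200_000
source = rng.normal(0, 0.3e-9, n)     # hypothetical source white PM
counter = rng.normal(0, 0.5e-9, n)    # hypothetical counter white PM
print(adev1(source + counter))                   # what you actually measure
print(np.hypot(adev1(source), adev1(counter)))   # quadrature sum: nearly the same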
Bob
MW
Michael Wouters
Sun, Oct 30, 2016 4:04 AM
Dear Stewart
FWIW, if you do the experiment I suggested (a TI measurement with the
same 10 MHz on both channels) on an HP53132A counter, you get
ADEV = 2.0x10^-10 at 1 second.
The manual says the LSD is 150 ps; when you include trigger errors, it
specifies a resolution of 300 ps. The 200 ps implied by the
measurement above falls somewhere in between. The specs may be
conservative, though.
As Bob Camp said, in Situation 2, you see the noise of the measurement system.
Cheers
Michael
TV
Tom Van Baak
Sun, Oct 30, 2016 5:14 AM
> One might expect that the actual ADEV value in this situation would be
> exactly 1 ns at tau = 1 second. Values of 0.5 ns or sqrt(2)/2 ns might not
> be surprising. My actual measured value is about 0.65 ns, which does not
> seem to have an obvious explanation. This brings to mind various questions:
>
> What is the theoretical ADEV value of a perfect time-interval measurement
> quantized at 1 ns? What's the effect of an imperfect measurement
> (instrument errors)? Can one use this technique in reverse to sort
> instruments by their error contributions, or to tune up an instrument
> calibration?
Hi Stu,
If you have white phase noise with standard deviation of 1 then the ADEV will be sqrt(3). This is because each term in the ADEV formula is based on the second difference of 3 phase samples, x(i+2) - 2x(i+1) + x(i), and the variance of a sum of independent normally distributed random variables is the sum of the variances: (1 + 4 + 1) sigma^2 = 6 sigma^2, which the factor of 1/2 in the AVAR definition reduces to 3 sigma^2. So if your standard deviation is 0.5 ns, then the AVAR should be 0.75 ns^2 and the ADEV should be 0.87 ns, which is sqrt(3)/2 ns. You can check this with a quick simulation [1].
Note this assumes that the 1 ns quantization error has a normal distribution with a standard deviation of 0.5 ns. Someone who's actually measured the hp 5334B quantization noise can correct this assumption.
/tvb
[1] Simulation:
C:\tvb> rand 100000 0.5e-9 0 | adev4 /at 1
rand 100000(count) 5e-010(sdev) 0(mean)
** tau from 1 to 1 step 1
1 a 8.676237e-010 99998 t 5.009227e-010 99998
In this 100k sample simulation we see the ADEV is close to sqrt(3)/2 ns. The TDEV is 0.5 ns. This is because TDEV is defined as tau * MDEV / sqrt(3); in other words, the sqrt(3) is eliminated in the definition of TDEV.
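For readers without the rand/adev4 tools, a rough numpy equivalent of the same
check (a sketch, not tvb's code):

import numpy as np
rng = np.random.default_rng(0)
x = rng.normal(0, 0.5e-9, 100_000)     # white phase noise, sdev 0.5 ns, mean 0
d = x[2:] - 2 * x[1:-1] + x[:-2]       # second differences at tau = 1 s
adev = np.sqrt(0.5 * np.mean(d * d))
print(adev)                            # ~8.7e-10 = sqrt(3)/2 ns, matching the run above
print(adev / np.sqrt(3))               # TDEV(1 s) ~ 5.0e-10, also matching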
AW
Anders Wallin
Sun, Oct 30, 2016 8:04 AM
FWIW, this example in AllanTools generates white PM and compares it to the
IEEE 1139 ADEV formula:
https://github.com/aewallin/allantools/blob/master/examples/ieee1139_white_pm.py
two comments:
1. Theoretical white noise has infinite power, so limiting the RMS to some
value already implies you have limited the measurement bandwidth (f_h in
the formulas)
2. Interestingly, my understanding of phase meters (TimePod/3120A and/or
DIY USRP SDR) is that they don't measure the raw phase of the signal much
better than a 20 ps counter. The trick is to measure the phase between
REF/DUT at a high sample rate, and then dramatically reduce the bandwidth
by low-pass filtering numerically. So an ADEV(1 s) < 1e-13 (100-fold better
than a typical counter) is not achieved via some "magic" 100 fs
phase measurement, but instead by doing plain old ~20 ps phase measurements
at a high rate and averaging lots of them together, as sketched below.
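A toy illustration of comment 2 (a sketch with assumed round numbers, not
TimePod internals): averaging N independent 20 ps readings within each second
cuts the white-PM contribution by sqrt(N).

import numpy as np
rng = np.random.default_rng(0)
fs, sigma = 10_000, 20e-12               # assume 10 kSa/s of 20 ps white phase noise
raw = rng.normal(0, sigma, (1_000, fs))  # 1000 one-second blocks of raw samples
x = raw.mean(axis=1)                     # one narrow-bandwidth phase point per second
print(x.std())                           # ~0.2 ps = 20 ps / sqrt(10^4)
d = x[2:] - 2 * x[1:-1] + x[:-2]
print(np.sqrt(0.5 * np.mean(d * d)))     # ADEV(1 s) ~ 3.5e-13; real instruments
                                         # filter more cleverly, same sqrt(N) idea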
I've been playing with a USRP phase meter and hope to release some
software/results soonish.
Anders
(ITSF2016/Prague next week)
J
jimlux
Sun, Oct 30, 2016 12:32 PM
On 10/29/16 10:14 PM, Tom Van Baak wrote:
> Hi Stu,
>
> If you have white phase noise with standard deviation of 1 then the ADEV will be sqrt(3). This is because each term in the ADEV formula is based on the addition/subtraction of 3 phase samples. And the variance of normally distributed random variables is the sum of the variances. So if your standard deviation is 0.5 ns, then the AVAR should be 1.5 ns and the ADEV should be 0.87 ns, which is sqrt(3)/2 ns. You can check this with a quick simulation [1].
>
> Note this assumes that 1 ns quantization error has a normal distribution with standard deviation of +/- 0.5 ns. Someone who's actually measured the hp 5334B quantization noise can correct this assumption.
>
Isn't the distribution of quantization error more like a rectangular
distribution (e.g. like an ADC), so a variance of 1/12 (of the step size squared)?
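Side by side, the two assumptions give noticeably different floors (a sketch;
rectangular over a 1 ns bin has variance Q^2/12):

import numpy as np

def adev1(x):
    d = x[2:] - 2 * x[1:-1] + x[:-2]
    return np.sqrt(0.5 * np.mean(d * d))

rng = np.random.default_rng(0)
n = 100_000
print(adev1(rng.uniform(-0.5e-9, 0.5e-9, n)))  # rectangular: ~0.50 ns
print(adev1(rng.normal(0, 0.5e-9, n)))         # normal, sigma 0.5 ns: ~0.87 ns

Stu's measured 0.65 ns falls between the two, though whether that reflects the
actual quantizer PDF or extra instrument noise is exactly the open question.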
BC
Bob Camp
Sun, Oct 30, 2016 1:55 PM
Hi
There are entire (big and heavy) books written on quantization errors...
In a counter, there are a number of different sub-systems contributing
to the error. Depending on the design, each may (or may not) be a bit better than
absolutely needed. Toss in things like 10 MHz reference feedthrough
(which is decidedly weird statistically) and you have a real mess.
The simple answer is that there is no single answer for a full-blown counter.
For an ADC sampling system, there may indeed be a somewhat more
tractable answer to the question (unless feedthrough is an issue with
your setup...).
Bob
MD
Magnus Danielson
Sun, Oct 30, 2016 10:43 PM
Hi Stu,
On 10/30/2016 01:38 AM, Stewart Cobb wrote:
> What's the expected value of ADEV at tau = 1 s for time-interval
> measurements quantized at 1 ns?
Reasonable question.
> This question can probably be answered from pure theory (by someone more
> mathematical than me), but it arises from a very practical situation. I
> have several HP5334B counters comparing PPS pulses from various devices.
> The HP5334B readout is quantized at 1 ns, and the spec sheet (IIRC) also
> gives the instrument accuracy as 1 ns.
Experience shows that you have a measurement limit typically around the
resolution times 1/tau, unless naturally you have a noisier signal.
Proving this limit turns out to be much harder than the first attempt of
using its slope as an indication of the white phase modulation noise
which we assume to be there.
> The devices under test are relatively stable. Their PPS pulses are all
> within a few microseconds of each other but uncorrelated. They are stable
> enough that the dominant error source on the ADEV plot out to several
> hundred seconds is the 1 ns quantization of the counter. The plots all
> start near 1 ns and follow a -1 slope down to the point where the
> individual device characteristics start to dominate the counter
> quantization error.
Which matches the typical empirical behavior.
> One might expect that the actual ADEV value in this situation would be
> exactly 1 ns at tau = 1 second. Values of 0.5 ns or sqrt(2)/2 ns might not
> be surprising. My actual measured value is about 0.65 ns, which does not
> seem to have an obvious explanation. This brings to mind various questions:
>
> What is the theoretical ADEV value of a perfect time-interval measurement
> quantized at 1 ns? What's the effect of an imperfect measurement
> (instrument errors)? Can one use this technique in reverse to sort
> instruments by their error contributions, or to tune up an instrument
> calibration?
It turns out to be a bit complicated, as you probably gather.
The exact value depends on several aspects.
Let's start off by assuming that you have a long run, in which case the
confidence interval of the ADEV for tau = 1 s has become tight; that is,
it is not spread around much from an infinitely long average.
The resolution of a counter and its behavior for different phase
relationships may not be perfect, so that can create some noise, but
let's assume there is no such effect.
The interpolator may also have biases and variations due to interaction
with another signal; let's rule that out, too.
What remains is interpolator steps and noise. The average value and the
variance of the value depend on the quantization step size and the noise,
in an interesting fashion. Rather than having a constant value as you
assume, they interact, so that your expectation value for ADEV varies with
several parameters. I've so far not seen a paper that nails this
properly. I have ideas for a paper, but nothing ready yet.
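To see the kind of interaction described above, here is a sketch (my own,
with assumed numbers): sweep the ratio of Gaussian input noise to an ideal
1 ns quantizer step and watch the ADEV floor move around rather than sit at
one constant value.

import numpy as np

def adev1(x):
    d = x[2:] - 2 * x[1:-1] + x[:-2]
    return np.sqrt(0.5 * np.mean(d * d))

Q = 1e-9                                   # quantizer step
rng = np.random.default_rng(0)
for sigma in (0.05e-9, 0.2e-9, 0.5e-9, 2e-9):
    x = rng.normal(0, sigma, 200_000)      # input white PM
    xq = np.round(x / Q) * Q               # ideal quantizer, no interpolator bias
    print(sigma / Q, adev1(xq))            # from ~0 (noise hidden below the step)
                                           # up past sqrt(3)*sigma (step adds ~Q^2/12)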
So, for the moment, the rule of thumb is what you can expect the counter
to do, roughly. Shifting this line downward is what motivates you to buy
better counters or do tricks like DMTD.
Cheers.
Magnus