MD
Magnus Danielson
Fri, May 11, 2018 8:15 PM
Hi,
On 05/11/2018 05:35 PM, Bob kb8tq wrote:
> Hi
>
> If you do the weighted average as indicated in the paper *and* compare it to a “single sample” computation,
> the results are different for that time interval. To me that’s a problem. To the authors, the fact that the rest of
> the curve is the same is proof that it works. I certainly agree that once you get to longer tau, the process
> has no detrimental impact. There is still the problem that the first post on the graph is different depending
> on the technique.
Check what I did in my paper. I made sure to check that my estimator of
phase and frequency is bias-free, that is, when exposed to a stable phase
or a stable frequency, the corresponding phase or frequency estimate comes out
unbiased, and the other comes out as 0 when you switch them, as a good estimator should do.
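As a minimal sketch of that kind of bias check (illustrative Python, not the estimator from the paper): feed a least-squares phase/frequency fit a pure phase offset and a pure frequency offset, and verify that each estimate comes back exact while the other stays at zero.

```python
# Illustrative bias check, not the estimator from the paper: a least-squares
# phase/frequency fit should return the phase or frequency exactly and report
# zero for the one that is absent.
import numpy as np

def ls_phase_freq(x, tau0):
    """Fit x[n] ~ phase + freq * n * tau0; return (phase, freq)."""
    t = np.arange(len(x)) * tau0
    freq, phase = np.polyfit(t, x, 1)
    return phase, freq

tau0 = 1e-3                                   # sample spacing, s (assumed)
n = np.arange(1000)

x_static = np.full(1000, 5e-9)                # stable phase offset only
print(ls_phase_freq(x_static, tau0))          # ~(5e-9, 0.0)

x_drift = 1e-9 * n * tau0                     # stable frequency offset only
print(ls_phase_freq(x_drift, tau0))           # ~(0.0, 1e-9)
```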
> The other side of all this is that ADEV is really not a very good way to test a counter. It has its quirks and its
> issues. They are impacted by what is in a counter, but that’s a side effect. If one is after a general test of
> counter hardware, one probably should look at other approaches.
Well, you can tell a few things from the ADEV; it gives you a hint about
what you can expect from that counter when you do ADEV and frequency
measurements. The 1/tau limit is that of the counter. It's a complex
issue of single-shot resolution and noise, but it is a hint.
> If you are trying specifically just to measure ADEV, then there are a lot of ways to do that by itself. It’s not
> clear that re-inventing the hardware is required to do this. Going with an “average down” approach ultimately
> *will* have problems for certain signals and noise profiles.
The filtering needs to be understood and handled correctly, for sure,
and it's not doing anything good for true ADEV measures at lower tau. Filtering
helps to improve the frequency reading, as the measured deviation
shifts from ADEV to MDEV or PDEV, but let's not confuse that with
improving the ADEV; it's a completely different thing. Improving the
ADEV takes single-shot resolution, stable hardware and a stable reference
source.
Cheers,
Magnus
OS
Oleg Skydan
Sat, May 12, 2018 5:20 PM
Hi!
From: "Bob kb8tq" <kb8tq@n1k.org>
> There is still the problem that the first post on the graph is different
> depending
> on the technique.
The leftmost tau values are skipped and they "stay" inside the counter. If I
set up the counter to generate, let's say, 1 s stamps (ADEV starts at 1 s), it will
internally generate 1/8 s averaged measurements, but export combined data
for 1 s stamps. The result will be, strictly speaking, different, but the
difference should be insignificant.
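As a rough sketch of what I mean (illustrative only - the block size and the simple mean are placeholders, not the actual firmware): the 1/8 s averaged phase readings stay internal and only their combination is exported at the 1 s stamp rate.

```python
# Illustrative only: combine eight internal 1/8 s averaged phase readings into
# one exported 1 s stamp. The real firmware may combine them differently.
import numpy as np

def export_1s_stamps(avg_phase_1_8s):
    """avg_phase_1_8s: 1/8 s block-averaged phase values (seconds)."""
    usable = len(avg_phase_1_8s) - len(avg_phase_1_8s) % 8
    blocks = avg_phase_1_8s[:usable].reshape(-1, 8)
    return blocks.mean(axis=1)              # one combined value per second

internal = np.random.default_rng(0).normal(0.0, 1e-9, 800)   # 100 s of internal data
print(len(export_1s_stamps(internal)))                        # 100 exported 1 s stamps
```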
> The other side of all this is that ADEV is really not a very good way to
> test a counter.
Counter testing was not the main reason to dig into the statistics details these last
days. Initially I used ADEV when I tried to test the idea of making the
counter with very simple HW and good resolution (BTW, it turned out later it
was not really ADEV :). Then I saw it worked, so I decided to make a
"normal" useful counter (I liked the HW/SW concept). The HW has enough power
to compute various statistics onboard in real time, and while it is not a
requisite feature of the project now, I think it will be good if the counter
is able to do it (or at least if it exports data suitable for doing it
in post-processing). The rest of the story you know :)
> If you are trying specifically just to measure ADEV, then there are a lot
> of ways to do that by it’s self.
Yes, but if it can be done with only some additional code - why not have
such an ability? Even if it has some known limitations it is still a useful
addition. Of course it should be done as well as it can be within the HW
limitations. Also, it was/is a good educational experience.
Now it is a period of tests/experiments to see the features/limitations of the
technology used (of course, only those experiments that can be done with the
current "ugly style" HW). I have already got a lot of useful information; it
should help me in the following HW/FW development. The next steps are the analog
front end and GPS frequency correction (I should get the GPS module next
week). I have already tested the 6 GHz prescaler and am now waiting for some parts
to finish it. I hope this project will have a "happy end" :).
All the best!
Oleg
OS
Oleg Skydan
Sat, May 12, 2018 6:38 PM
Hi!
From: "Magnus Danielson" <magnus@rubidium.dyndns.org>
> ADEV assumes brick-wall filtering up to the Nyquist frequency as result
> of the sample-rate. When you filter the data as you do a Linear
> Regression / Least Square estimation, the actual bandwidth will be much
> less, so the ADEV measures will be biased for lower taus, but for higher
> taus less of the kernel of the ADEV will be affected by the filter and
> thus the bias will reduce.
Thanks for the clarification. Bob already pointed me to the problem, and after some
reading the *DEV theme seems clearer.
>> Do the ADEV plots I got look reasonable for the "mid range"
>> OCXOs used (see the second plot for the long run test)?
>
> You probably want to find the source of the wavy response as the orange
> and red trace.
I have already found the problem. It is a HW problem related to poor isolation
between the reference OCXO signal and the counter input signal clock line (it is
also possible there are some grounding or power supply decoupling problems -
the HW is made in "ugly construction" style). When the input clock frequency
is very close (0.3..0.4 Hz difference) to the OCXO subharmonic this problem
becomes visible (it is not the FW problem discussed before, because the counter
reference is not a harmonic of the OCXO anymore). It looks like some
commercial counters suffer from that problem too. After I connected the OCXO
and input feed lines with short pieces of coax this effect greatly
decreased, but did not disappear. The "large N" plots were measured with the
input signal 1.4 Hz (0.3 ppm) higher than the 1/2 subharmonic of the OCXO
frequency; with such a frequency difference the problem completely
disappears. I will check for this problem again when I move the HW to
a normal PCB.
> In fact, you can do an Omega-style counter you can use for PDEV; you just
> need to use the right approach to be able to decimate the data. Oh,
> there's a draft paper on that:
>
> https://arxiv.org/abs/1604.01004
Thanks for the document. It will need some time to study, and maybe I will add
features to the counter to calculate correct PDEV.
>> If ADEV is needed, the averaging
>> interval can be reduced and several measurements (more than eight) can
>> be combined into one point (creating the new weighting function which
>> resembles the usual Pi one, as shown in the [1] p.54), it should be
>> possible to calculate usual ADEV using such data. As far as I
>> understand, the filter which is formed by the resulting weighting
>> function will have wider bandwidth, so the impact on ADEV will be
>> smaller and it can be computed correctly. Am I missing something?
>
> Well, you can reduce averaging interval to 1 and then you compute the
> ADEV, but it does not behave as the MDEV any longer.
With no averaging it will be a simple reciprocal counter with a time
resolution of only 2.5 ns. The idea was to use trapezoidal weighting, so the
counter will sit somewhere "between" the Pi and Delta counters. When the
upper base of the weighting-function trapezium has zero length (triangular
weighting) it is the usual Delta counter; if it is infinitely long the result
should converge to the usual Pi counter. Prof. Rubiola claims that if the ratio of
the upper to the lower base is more than 8/9, the ADEV plots made from such data
should be sufficiently close to the usual ADEV. Of course the gain from the
averaging will be at least 3 times less than from the usual Delta averaging.
Maybe I need to find or make a "not so good" signal source, measure its
ADEV using the above method, and compare with the traditional one. It should be
an interesting experiment.
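For illustration, a sketch of such a trapezoidal weighting (my own reading of the slides; only the 8/9 ratio is taken from the presentation): the frequency estimate is a weighted mean of per-interval frequency samples, and the upper/lower base ratio moves the window between the flat Pi case and the triangular Delta case.

```python
# Sketch of a trapezoidal weighting window between Pi and Delta counters.
# r = upper/lower base ratio: r -> 1 approaches the flat (Pi-like) window,
# r -> 0 gives the triangular (Delta) window.
import numpy as np

def trapezoid_weights(n, r):
    """n normalized weights, with a sloped edge covering (1 - r)/2 of the window on each side."""
    ramp = int(round(n * (1.0 - r) / 2.0))
    w = np.ones(n)
    if ramp > 0:
        edge = np.linspace(1.0 / ramp, 1.0, ramp)
        w[:ramp], w[-ramp:] = edge, edge[::-1]
    return w / w.sum()

y = np.random.default_rng(1).normal(10e6, 0.05, 1000)   # fake per-interval frequencies
w = trapezoid_weights(len(y), r=8/9)                     # ratio suggested in the slides
print(np.dot(w, y))                                      # weighted frequency estimate
```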
> What you can do is that you can calculate MDEV or PDEV, and then apply
> the suitable bias function to convert the level to that of ADEV.
That can be done if the statistics are calculated inside the counter, but it
will not make the exported data suitable for post-processing with
TimeLab or other software that is not aware of what is going on inside the
counter.
> Yes, they give relatively close values of deviation, where PDEV goes
> somewhat lower, indicating that there is a slight advantage of the LR/LS
> frequency estimation measure over that of the Delta counter, as given by
> it's MDEV.
Here is another question - how do I correctly calculate the averaging length in
a Delta counter? I have 5e6 timestamps in one second, so the Pi and Omega counters
process 5e6 samples in total and one measurement also has 5e6 samples, but
the Delta one processes 10e6 in total, with each of the averaged measurements
having 5e6 samples. The Delta counter actually uses two times more data. What
should be equal when comparing different counter types - the number of
samples in one measurement (gating time) or the total number of samples
processed?
Thanks!
Oleg
MD
Magnus Danielson
Sat, May 12, 2018 6:52 PM
Hi Oleg,
On 05/12/2018 07:20 PM, Oleg Skydan wrote:
> Hi!
>
> From: "Bob kb8tq" <kb8tq@n1k.org>
>> There is still the problem that the first post on the graph is
>> different depending
>> on the technique.
>
> The leftmost tau values are skipped and they "stay" inside the counter.
> If I setup counter to generate lets say 1s stamps (ADEV starts at 1s) it
> will generate internally 1/8sec averaged measurements, but export
> combined data for 1s stamps. The result will be strictly speaking
> different, but the difference should be insignificant.
What is your motivation for doing this?
I'm not saying you are necessarily incorrect, but it would be
interesting to hear your motivation.
Cheers,
Magnus
BK
Bob kb8tq
Sat, May 12, 2018 7:41 PM
Hi
> On May 12, 2018, at 1:20 PM, Oleg Skydan <olegskydan@gmail.com> wrote:
>
> Hi!
>
> From: "Bob kb8tq" <kb8tq@n1k.org>
>> There is still the problem that the first post on the graph is different depending
>> on the technique.
>
> The leftmost tau values are skipped and they "stay" inside the counter. If I setup counter to generate lets say 1s stamps (ADEV starts at 1s) it will generate internally 1/8sec averaged measurements, but export combined data for 1s stamps. The result will be strictly speaking different, but the difference should be insignificant.
Except there are a *lot* of papers where they demonstrate that the difference may be *very* significant. I would
suggest that the “is significant” group is actually larger than the “is not” group.
>
>> The other side of all this is that ADEV is really not a very good way to test a counter.
>
> Counter testing was not a main reason to dig into statistics details last days. Initially I used ADEV when tried to test the idea of making the counter with very simple HW and good resolution (BTW, it appeared later it was not ADEV in reality :). Then I saw it worked, so I decided to make a "normal" useful counter (I liked the HW/SW concept). The HW has enough power to compute various statistics onboard in real time, and while it is not requisite feature of the project now, I think it will be good if the counter will be able to do it (or at least if it will export data suitable to do it in post process). The rest of the story you know :)
Again, ADEV is tricky and sensitive to various odd things. This whole debate about it being sensitive goes
back to the original papers in the late 1960’s and 1970’s. At every paper I attended the issue of averaging
and bandwidth came up in the questions after the paper. The conversation has been going on for a *long*
time.
>
>> If you are trying specifically just to measure ADEV, then there are a lot of ways to do that by it’s self.
>
> Yes, but if it can be done with only some additional code - why not to have such ability? Even if it has some known limitations it is still a useful addition. Of cause it should be done as good as it can be with the HW limitations. Also it was/is a good educational moment.
It’s only useful if it is accurate. Since you can “do code” that gives you results that are better than reality,
simply coming up with a number is not the full answer. To be useful as ADEV, it needs to be correct.
>
> Now it is period of tests/experiments to see the used technology features/limitations(of cause if those experiments can be done with the current "ugly style HW"). I have already got a lot of useful information, it should help me in the following HW/FW development. The next steps are analog front end and GPS frequency correction (I should get the GPS module next week). I have already tested the 6GHz prescaler and now wait for some parts to finish it. Hope this project will have the "happy end" :).
I’m sure it will come out to be a very cool counter. My *only* concern here is creating inaccurate results
by stretching too far with what you are trying to do. Keep it to the stuff that is accurate.
Bob
>
> All the best!
> Oleg
MD
Magnus Danielson
Sat, May 12, 2018 9:20 PM
Hi,
On 05/12/2018 08:38 PM, Oleg Skydan wrote:
> Hi!
>
> From: "Magnus Danielson" <magnus@rubidium.dyndns.org>
>> ADEV assumes brick-wall filtering up to the Nyquist frequency as result
>> of the sample-rate. When you filter the data as you do a Linear
>> Regression / Least Square estimation, the actual bandwidth will be much
>> less, so the ADEV measures will be biased for lower taus, but for higher
>> taus less of the kernel of the ADEV will be affected by the filter and
>> thus the bias will reduce.
>
> Thanks for clarification. Bob already pointed me to problem and after
> some reading *DEV theme seems to be clearer.
The mistake is easy to make. Back in the day, it was a given that you
should always state the system bandwidth alongside an ADEV plot, a
practice that later got lost. Many people do not know what bandwidth
they have, or the effect it has on the plot. I've even heard a
distinguished and knowledgeable person in the field admit to doing it
incorrectly.
>>> Do the ADEV plots I got look reasonable for the "mid range"
>>> OCXOs used (see the second plot for the long run test)?
>>
>> You probably want to find the source of the wavy response as the orange
>> and red trace.
>
> I have already found the problem. It is HW problem related to poor
> isolation between reference OCXO signal and counter input signal clock
> line (it is also possible there are some grounding or power supply
> decoupling problems - the HW is made in "ugly construction" style). When
> the input clock frequency is very close (0.3..0.4Hz difference) to the
> OCXO subharmonic this problem become visible (it is not FW problem
> discussed before, cause counter reference is not a harmonic of the OCXO
> anymore).
Makes sense. Cross-talk has been the performance limit of several counters,
and care should be taken to reduce it.
> It looks like some commercial counters suffers from that
> problem too. After I connected OCXO and input feed lines with short
> pieces of the coax this effect greatly decreased, but not disappeared.
Cross talk exists for sure, but there is a similar effect too which is
not due to cross-talk but due to how the counter is able to interpolate
certain frequencies.
> The "large N" plots were measured with the input signal 1.4Hz (0.3ppm)
> higher then 1/2 subharmonic of the OCXO frequency, with such frequency
> difference that problem completely disappears. I will check for this
> problem again when I will move the HW to the normal PCB.
Yes.
>> In fact, you can do an Omega-style counter you can use for PDEV; you just
>> need to use the right approach to be able to decimate the data. Oh,
>> there's a draft paper on that:
>>
>> https://arxiv.org/abs/1604.01004
>
> Thanks for the document. It needs some time to study and maybe I will
> add the features to the counter to calculate correct PDEV.
It suggests a very practical method for FPGA-based counters, so that you
can make use of the high rate of samples that you have and reduce it in
HW before handing off to SW. As you want to decimate data, you do not
want to lose the least-squares property, and this is a practical method
of achieving it.
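As a flavour of the basic operation (an illustrative sketch only - the paper's actual contribution, merging such blocks without losing the least-squares property, is not reproduced here), a per-block least-squares frequency estimate on raw phase samples looks like this:

```python
# Block-wise least-squares (Omega-style) frequency estimates from raw phase
# samples. Only the per-block fit is shown; see the arXiv draft for how to
# combine blocks while preserving the least-squares property.
import numpy as np

def omega_freq_per_block(x, tau0, block):
    """x: phase samples (s), tau0: sample spacing (s); returns one LS slope per block."""
    t = np.arange(block) * tau0
    nblocks = len(x) // block
    return np.array([np.polyfit(t, x[k * block:(k + 1) * block], 1)[0]
                     for k in range(nblocks)])

rng = np.random.default_rng(2)
x = 1e-7 * np.arange(80_000) * 2e-7 + rng.normal(0.0, 1e-10, 80_000)  # 1e-7 offset + white PM
print(omega_freq_per_block(x, tau0=2e-7, block=10_000)[:3])            # each ~1e-7, the injected offset
```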
>>> If ADEV is needed, the averaging
>>> interval can be reduced and several measurements (more than eight) can
>>> be combined into one point (creating the new weighting function which
>>> resembles the usual Pi one, as shown in the [1] p.54), it should be
>>> possible to calculate usual ADEV using such data. As far as I
>>> understand, the filter which is formed by the resulting weighting
>>> function will have wider bandwidth, so the impact on ADEV will be
>>> smaller and it can be computed correctly. Am I missing something?
>>
>> Well, you can reduce averaging interval to 1 and then you compute the
>> ADEV, but it does not behave as the MDEV any longer.
>
> With no averaging it will be a simple reciprocal counter with time
> resolution of only 2.5ns. The idea was to use trapezoidal weighting, so
> the counter will become somewhere "between" Pi and Delta counters. When
> the upper base of the weighting function trapezium is 0 length
> (triangular weighting) it is usual Delta counter, if it is infinitely
> long the result should converge to usual Pi counter. Prof. Rubiola
> claims if the ratio of upper to lower base is more than 8/9 the ADEV
> plots made from such data should be sufficiently close to usual ADEV. Of
> cause the gain from the averaging will be at least 3 times less than
> from the usual Delta averaging.
You do not want to mix pre-filtering and ADEV that way. We can do things
better.
> Maybe I need to find or make "not so good" signal source and measure its
> ADEV using above method and compare with the traditional. It should be
> interesting experiment.
It is always good to experiment and learn from both not so stable stuff,
stuff with significant drift and very stable stuff.
>> What you can do is that you can calculate MDEV or PDEV, and then apply
>> the suitable bias function to convert the level to that of ADEV.
>
> That can be done if the statistics is calculated inside the counter, but
> it will not make the exported data suitable for post processing with the
> TimeLab or other software that is not aware of what is going on inside
> the counter.
Exactly. You need to continue the processing in the counter for the
post-processing to produce good, unbiased values.
There are many ways to mess it up.
>> Yes, they give relatively close values of deviation, where PDEV goes
>> somewhat lower, indicating that there is a slight advantage of the LR/LS
>> frequency estimation measure over that of the Delta counter, as given by
>> it's MDEV.
>
> Here is another question - how to correctly calculate averaging length
> in Delta counter? I have 5e6 timestamps in one second, so Pi and Omega
> counters process 5e6 samples totally and one measurement have also 5e6
> samples, but the Delta one processes 10e6 totally with each of the
> averaged measurement having 5e6 samples. Delta counter actually used two
> times more data. What should be equal when comparing different counter
> types - the number of samples in one measurement (gating time) or the
> total number of samples processed?
How do you get such different event rates?
If you have 5 MHz, the rising edges give you 5E6 events, and the type
of processing you do, Pi (none), Delta or Omega, is just a different type
of post-processing on the raw phase data-set.
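To make that concrete, a small illustrative sketch computing a Pi, a Delta and an Omega style frequency estimate from the very same block of raw phase samples; it also shows why the averaging modes read so much quieter:

```python
# Pi, Delta and Omega frequency estimates from the same raw phase record x[n]:
# all three are just different post-processing of identical event data.
import numpy as np

def pi_freq(x, tau0):
    """End-point phase difference over the whole block."""
    return (x[-1] - x[0]) / ((len(x) - 1) * tau0)

def delta_freq(x, tau0):
    """Mean of n overlapping Pi measurements (gate = half the block), i.e. triangular phase weighting."""
    n = len(x) // 2
    return (x[n:2 * n].mean() - x[:n].mean()) / (n * tau0)

def omega_freq(x, tau0):
    """Least-squares slope of phase versus time."""
    return np.polyfit(np.arange(len(x)) * tau0, x, 1)[0]

rng = np.random.default_rng(3)
tau0 = 1e-6
x = 1e-9 * np.arange(1_000_000) * tau0 + rng.normal(0.0, 1e-9, 1_000_000)  # 1e-9 offset + white PM
# Pi still shows the single-shot noise; Delta and Omega come out close to 1e-9.
print(pi_freq(x, tau0), delta_freq(x, tau0), omega_freq(x, tau0))
```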
Cheers,
Magnus
MD
Magnus Danielson
Sat, May 12, 2018 9:44 PM
On 05/12/2018 09:41 PM, Bob kb8tq wrote:
> Hi
>
>
>> On May 12, 2018, at 1:20 PM, Oleg Skydan <olegskydan@gmail.com> wrote:
>>
>> Hi!
>>
>> From: "Bob kb8tq" <kb8tq@n1k.org>
>>> There is still the problem that the first post on the graph is different depending
>>> on the technique.
>>
>> The leftmost tau values are skipped and they "stay" inside the counter. If I setup counter to generate lets say 1s stamps (ADEV starts at 1s) it will generate internally 1/8sec averaged measurements, but export combined data for 1s stamps. The result will be strictly speaking different, but the difference should be insignificant.
>
> Except there are a *lot* of papers where they demonstrate that the difference may be *very* significant. I would
> suggest that the “is significant” group is actually larger than the “is not” group.
There is no reason to treat it light-handedly as being about the same, as they
become different measures, and there is a measurement bias between them. Depending
on what you do, there might be a bias function to compensate the bias
with... or not. Even when there is, most people forget to apply it.
Stay clear of it and do it properly.
Averaging prior to ADEV does nothing really useful unless it is
well-founded, and then we call it MDEV or PDEV, and then you have to be
careful about the details to do it properly. Otherwise you just waste your
time getting "improved numbers" which do not actually help you to give
proper measures.
>>
>>> The other side of all this is that ADEV is really not a very good way to test a counter.
>>
>> Counter testing was not a main reason to dig into statistics details last days. Initially I used ADEV when tried to test the idea of making the counter with very simple HW and good resolution (BTW, it appeared later it was not ADEV in reality :). Then I saw it worked, so I decided to make a "normal" useful counter (I liked the HW/SW concept). The HW has enough power to compute various statistics onboard in real time, and while it is not requisite feature of the project now, I think it will be good if the counter will be able to do it (or at least if it will export data suitable to do it in post process). The rest of the story you know :)
>
> Again, ADEV is tricky and sensitive to various odd things. This whole debate about it being sensitive goes
> back to the original papers in the late 1960’s and 1970’s. At every paper I attended the issue of averaging
> and bandwidth came up in the questions after the paper. The conversation has been going on for a *long*
> time.
If you go back to Dr. David Allan's Feb 1966 paper, you clearly see how
white and flicker phase modulation noise depend on the bandwidth, which is
there assumed to be a brick-wall filter. Your ability to reflect the
amplitude of those noises properly thus depends on the bandwidth.
Any filtering reduces the bandwidth, and hence artificially reduces the
ADEV value for the same amount of actual noise; it is then not
representing the underlying noise properly. However, if you use this for
improving your frequency measurements, it's fine, and the processed ADEV
will represent the counter's performance with that filter. Thus, the aim
will govern whether you should or should not do the pre-filtering.
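For reference, the standard white-PM relation behind that statement (the textbook/IEEE 1139-style formula, quoted here only as a sketch): with a brick-wall measurement bandwidth f_h, sigma_y(tau) scales as sqrt(f_h), so narrowing the bandwidth lowers the reported ADEV even though the underlying noise is unchanged.

```python
# White PM noise, S_y(f) = h2 * f^2 up to a brick-wall bandwidth f_h:
# sigma_y^2(tau) ~= 3 * f_h * h2 / (4 * pi^2 * tau^2)   (for 2*pi*f_h*tau >> 1),
# so the reported ADEV scales with sqrt(f_h) for the same underlying noise.
import math

def adev_white_pm(h2, f_h, tau):
    return math.sqrt(3.0 * f_h * h2 / (4.0 * math.pi ** 2 * tau ** 2))

h2, tau = 1e-26, 1.0                      # illustrative noise level
for f_h in (5e6, 1.25e6):                 # full vs. quartered measurement bandwidth
    print(f_h, adev_white_pm(h2, f_h, tau))
# quartering f_h halves the reported ADEV without any change in the noise itself
```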
>>> If you are trying specifically just to measure ADEV, then there are a lot of ways to do that by it’s self.
>>
>> Yes, but if it can be done with only some additional code - why not to have such ability? Even if it has some known limitations it is still a useful addition. Of cause it should be done as good as it can be with the HW limitations. Also it was/is a good educational moment.
>
> It’s only useful if it is accurate. Since you can “do code” that gives you results that are better than reality,
> simply coming up with a number is not the full answer. To be useful as ADEV, it needs to be correct.
Exactly.
>>
>> Now it is period of tests/experiments to see the used technology features/limitations(of cause if those experiments can be done with the current "ugly style HW"). I have already got a lot of useful information, it should help me in the following HW/FW development. The next steps are analog front end and GPS frequency correction (I should get the GPS module next week). I have already tested the 6GHz prescaler and now wait for some parts to finish it. Hope this project will have the "happy end" :).
>
> I’m sure it will come out to be a very cool counter. My *only* concern here is creating inaccurate results
> by stretching too far with what you are trying to do. Keep it to the stuff that is accurate.
Bob and I are picky, and for a reason. When we want our ADEV plots, we
want them done properly, or else we could "improve" the specs of the
oscillators just by changing how fancy the post-processing we do on the
counter data is. Yes, we see this at professional conferences too.
Mumble... BAD SCIENCE!
Metrologically correct measurement takes care of the details.
Measurements need to be repeatable and of correct value.
Cheers,
Magnus
OS
Oleg Skydan
Sun, May 13, 2018 7:07 AM
Hi Bob!
From: "Bob kb8tq" <kb8tq@n1k.org>
> It’s only useful if it is accurate. Since you can “do code” that gives you
> results that are better than reality,
> simply coming up with a number is not the full answer. To be useful as
> ADEV, it needs to be correct.
I understand that, so I am trying to investigate the problem to understand what can
be done (if anything :).
> I’m sure it will come out to be a very cool counter. My *only* concern
> here is creating inaccurate results
> by stretching too far with what you are trying to do. Keep it to the stuff
> that is accurate.
I am interested in accurate results, or at least results with well-defined
limitations, for a few specific measurements/modes. So I will try to make the
results as accurate as they can be while keeping the hardware simple.
Thanks!
Oleg
OS
Oleg Skydan
Sun, May 13, 2018 7:31 AM
Hi Magnus!
From: "Magnus Danielson" <magnus@rubidium.dyndns.org>
>> The leftmost tau values are skipped and they "stay" inside the counter.
>> If I set up the counter to generate, let's say, 1 s stamps (ADEV starts at 1 s),
>> it will internally generate 1/8 s averaged measurements, but export
>> combined data for 1 s stamps. The result will be, strictly speaking,
>> different, but the difference should be insignificant.
>
> What is your motivation for doing this?
My counter can operate in the usual Pi mode - I get 2.5 ns resolution. I am
primarily interested in high frequency signals (not one-shot events), and the HW
is able to collect and process some millions of timestamps continuously. So
in Delta or Omega mode I can, in theory, improve the resolution down to
several ps (for a 1 s measurement interval). In reality the limit will be
somewhat higher.
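A back-of-the-envelope check of that figure (my own estimate, assuming the 2.5 ns quantization error behaves like independent white noise from timestamp to timestamp, which is the optimistic case):

```python
# Rough resolution estimate after averaging N timestamps, assuming the 2.5 ns
# single-shot quantization acts as independent white noise (optimistic case).
import math

q = 2.5e-9            # single-shot time resolution, s
N = 5_000_000         # timestamps collected in a 1 s measurement interval
print(q / math.sqrt(N))   # ~1.1e-12 s, i.e. averaging can reach the ps level
```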
So I can compute the classical ADEV (using Pi mode) with a lot of counter
noise at low tau (it will probably not be very useful due to the counter
noise dominance in the leftmost part of the ADEV plot), or MDEV (using Delta
mode) with much lower counter noise.
I would like to try to use the excess data I have to increase the counter
resolution in a manner such that ADEV calculation with such preprocessing is
still possible with acceptable accuracy. After Bob's explanations and some
additional reading I was almost sure it is not possible (and that is so in the
general case), but then I saw the presentation
http://www.rubiola.org/pdf-slides/2012T-IFCS-Counters.pdf (E. Rubiola, High
resolution time and frequency counters, updated version) and saw the inferences
on p.54. They look reasonable, and that is just what I wanted to do.
> The mistake is easy to make. Back in the day, it was a given that you
> should always state the system bandwidth alongside an ADEV plot, a
> practice that later got lost. Many people do not know what bandwidth
> they have, or the effect it has on the plot. I've even heard a
> distinguished and knowledgeable person in the field admit to doing it
> incorrectly.
That makes sense.
We can view the problem in the frequency domain. We have DUT, reference
and instrument (counter) noise. In most cases we are interested in
suppressing the instrument and reference noise and leaving the DUT noise. The
reference and the DUT have more or less the same nature of noise, so it should
not be possible to filter out the reference noise without affecting the DUT noise
(with simple HW). The counter noise (in my case) will look like white
noise (at least the noise associated with the absence of a HW
interpolator). When we process the timestamps with Omega or Delta data processing
we apply a filter, so the correctness of the resulting data will depend on the
DUT noise characteristics and the filter shape. The ADEV calculation at tau >
tau0 will also apply some sort of filter during decimation; it should also
be accounted for (because we actually decimate the high-rate timestamp stream
to make the point data for the following postprocessing). Am I right?
Here is a good illustration of how averaging affects ADEV:
http://www.leapsecond.com/pages/adev-avg/ . If we drop the leftmost part of
the ADEV affected by averaging, the remaining averaging effects on the ADEV
are minimized. They can also be minimized by an optimal averaging strategy.
The question is the optimal averaging strategy and well-defined restrictions on
when such preprocessing can be applied.
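A small numerical version of that illustration (my own sketch: synthetic white PM plus white FM noise and a plain overlapping ADEV): pre-averaging blocks of phase samples pulls the lowest-tau points down, while points at tau well above the averaging length are nearly unchanged.

```python
# ADEV of raw vs. block-averaged phase data (synthetic white PM + white FM).
# Pre-averaging suppresses the lowest-tau points; well above the averaging
# length the two curves nearly coincide.
import numpy as np

def adev(x, tau0, m):
    """Overlapping Allan deviation at tau = m * tau0 from phase samples x (s)."""
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.sqrt(0.5 * np.mean(d ** 2)) / (m * tau0)

rng = np.random.default_rng(4)
tau0, navg, n = 1e-3, 8, 200_000
x = rng.normal(0.0, 5e-9, n) + np.cumsum(rng.normal(0.0, 1e-9, n))   # white PM + white FM
x_avg = x[: n // navg * navg].reshape(-1, navg).mean(axis=1)         # 8-sample pre-average

for m in (8, 64, 512):    # same tau evaluated on the raw and the pre-averaged data
    print(m * tau0, adev(x, tau0, m), adev(x_avg, tau0 * navg, m // navg))
```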
If it works, I would like to add such a mode for compatibility with the
widely used post-processing SW (TimeLab is a good example). Of course I can
do calculations inside the counter without such limitations, but that will
be another data processing option (which might not always be suitable).
> I'm not saying you are necessarily incorrect, but it would be
> interesting to hear your motivation.
The end goal is to have a counter mode where the counter produces data
suitable for post-processing for ADEV and other similar statistics with
better resolution (or lower counter noise) than the one-shot one (Pi counter).
I understand that, if it is possible, the counter resolution will be
degraded compared to the usual Omega or Delta mode, and there will also be some
limitations on the DUT noise for which such processing can be applied.
> Cross talk exists for sure, but there is a similar effect too which is
> not due to cross-talk but due to how the counter is able to interpolate
> certain frequencies.
I have no HW interpolator. A similar problem in the firmware was discussed
earlier and is now fixed.
>>> In fact, you can do an Omega-style counter you can use for PDEV; you just
>>> need to use the right approach to be able to decimate the data. Oh,
>>> there's a draft paper on that:
>>>
>>> https://arxiv.org/abs/1604.01004
>>
>> Thanks for the document. It will need some time to study, and maybe I will
>> add features to the counter to calculate correct PDEV.
>
> It suggests a very practical method for FPGA-based counters, so that you
> can make use of the high rate of samples that you have and reduce it in
> HW before handing off to SW. As you want to decimate data, you do not
> want to lose the least-squares property, and this is a practical method
> of achieving it.
I have no FPGA either :) All processing is in the FW; I will see how it fits
the HW architecture used.
Doing it all in an FPGA has many benefits, but the HW will be more complicated
and pricier, with minimal benefit for my main goals.
> You do not want to mix pre-filtering and ADEV that way. We can do things
> better.
Are you talking about PDEV?
>> Here is another question - how do I correctly calculate the averaging length
>> in a Delta counter? I have 5e6 timestamps in one second, so the Pi and Omega
>> counters process 5e6 samples in total and one measurement also has 5e6
>> samples, but the Delta one processes 10e6 in total with each of the
>> averaged measurements having 5e6 samples. The Delta counter actually uses two
>> times more data. What should be equal when comparing different counter
>> types - the number of samples in one measurement (gating time) or the
>> total number of samples processed?
>
> How do you get such different event rates?
> If you have 5 MHz, the rising edges give you 5E6 events, and the type
> of processing you do, Pi (none), Delta or Omega, is just a different type
> of post-processing on the raw phase data-set.
So, if I want to compare "apples to apples" (comparing Delta and Omega/Pi
processing), a single measurement of the Delta counter should use half of
the events (2.5e6) and the same number (2.5e6) of measurements should be
averaged, is that right? The counter in Delta mode currently calculates
results with 50% overlap; that gives 10e6 stamps for the 1 s output data
rate (the counter averages 2 seconds of data).
All the best!
Oleg
MD
Magnus Danielson
Sun, May 13, 2018 1:36 PM
Hi Oleg,
On 05/13/2018 09:31 AM, Oleg Skydan wrote:
The leftmost tau values are skipped and they "stay" inside the counter.
If I set up the counter to generate, let's say, 1 s stamps (ADEV starts at
1 s), it will internally generate 1/8 s averaged measurements but export
combined data for the 1 s stamps. The result will be, strictly speaking,
different, but the difference should be insignificant.
What is your motivation for doing this?
My counter can operate in the usual Pi mode - I get 2.5 ns resolution. I am
primarily interested in high frequency signals (not single-shot events), and
the HW is able to collect and process some millions of timestamps
continuously. So in Delta or Omega mode I can, in theory, improve the
resolution down to several ps (for a 1 s measurement interval). In reality
the limit will be somewhat higher.
So I can compute the classical ADEV (using Pi mode) with a lot of
counter noise at low tau (it will probably not be very useful due to the
counter noise dominating the leftmost part of the ADEV plot), or MDEV
(using Delta mode) with much lower counter noise.
Yes, it helps you to suppress noise. As you extend the measures, you
need to do it properly to maintain the MDEV property.
I would like to try to use the excess data I have to increase the counter
resolution in such a manner that an ADEV calculation with such preprocessing
is still possible with acceptable accuracy. After Bob's explanations and
some additional reading I was almost sure it is not possible (and that is
so in the general case), but then I saw the presentation
http://www.rubiola.org/pdf-slides/2012T-IFCS-Counters.pdf (E. Rubiola,
High resolution time and frequency counters, updated version) and saw the
inferences on p. 54. They look reasonable and it is just what I wanted
to do.
OK, when you do this you really want to filter out the first, lower taus,
but as you get out of the filtered part, or rather, when the dominant
part of the ADEV processing is within the filter bandwidth, the biasing
becomes smaller.
I would be inclined to just continue the MDEV-compliant processing
instead. If you want the matching ADEV, rescale it using the
bias function, which can be derived from p. 51 of that presentation.
You just need to figure out the dominant noise type for each range of
tau, something which is much simpler in MDEV since white PM and flicker
PM separate more clearly there than they do in ADEV, where the separation
is weak.
Also, on this page you can see how the system bandwidth f_H affects white
PM and flicker PM for Allan, but not for modified Allan.
The mistake is easy to make. Back in the day, it was a given that you
should always state the system bandwidth alongside an ADEV plot, a
practice that later got lost. Many people do not know what bandwidth
they have, or the effect it has on the plot. I've even heard
distinguished and knowledgeable people in the field admit to doing it
incorrectly.
That makes sense.
We can look at the problem in the frequency domain. We have DUT,
reference and instrument (counter) noise. In most cases we are
interested in suppressing the instrument and reference noise and leaving
the DUT noise. The reference and DUT have more or less the same nature of
noise, so it should not be possible to filter out reference noise
without affecting DUT noise (with simple HW). The counter noise (in
my case) will look like white noise (at least the noise associated with
the absence of a HW interpolator). When we process timestamps with Omega
or Delta data processing we apply a filter, so the correctness of the
resulting data will depend on the DUT noise characteristics and the filter
shape. The ADEV calculation at tau > tau0 will also apply some sort of
filter during decimation, and it should also be accounted for (since we
actually decimate the high-rate timestamp stream to make the data points
for the following post-processing). Am I right?
As you measure a DUT, the noise of the DUT, the noise of the counter and
the systematics of the counter add up, and we cannot distinguish them in
that measurement. There are measurement setups, such as
cross-correlation, which make multiple measurements in parallel and can
start to combat the noise-separation issue.
For short taus, the systematic noise of quantization will create a 1/tau
limit in ADEV. Reality is in fact more complex than this simple model, but
let's just assume it for the moment; it is sufficient for the time
being and is what most assume anyway.
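As a hedged back-of-the-envelope only: if the quantization is treated as white
PM with an RMS timestamp error sigma_x, the usual approximation is
sigma_y(tau) ~ sqrt(3) * sigma_x / tau, so a 2.5 ns single-shot resolution
would give roughly:

import math
q = 2.5e-9                      # single-shot resolution (s)
sigma_x = q / math.sqrt(12)     # RMS quantization error of one timestamp
for tau in (1e-3, 1e-2, 1e-1, 1.0):
    print(f"tau = {tau:5g} s  ->  ADEV floor ~ {math.sqrt(3) * sigma_x / tau:.2e}")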
ADEV, however, does not really do decimation. It combines measurements
to form longer observation times for the frequency estimates, and subtracts
these before squaring, to form the two-point deviation, which we call
Allan's deviation or the Allan deviation.
ADEV is designed to match what a simple counter's deviation would be.
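For reference, the standard overlapping estimator of that two-point deviation,
computed straight from the phase samples (a minimal sketch, with no averaging
or filtering of the data beforehand):

import numpy as np

def adev(x, tau0, m):
    # sigma_y(m*tau0) from phase data x[k] sampled every tau0 seconds.
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]        # x[i+2m] - 2*x[i+m] + x[i]
    return np.sqrt(0.5 * np.mean(d ** 2)) / (m * tau0)

# quick check against the white-PM rule of thumb sqrt(3)*sigma_x/tau
x = 1e-10 * np.random.default_rng(0).standard_normal(100_000)
print(adev(x, 1e-3, 10), np.sqrt(3) * 1e-10 / (10 * 1e-3))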
Here is a good illustration of how averaging affects ADEV:
http://www.leapsecond.com/pages/adev-avg/ . If we drop the leftmost part
of the ADEV affected by averaging, the remaining averaging effects on
the ADEV are minimized. They can also be minimized by an optimal
averaging strategy. The question is what the optimal averaging strategy is,
and what well-defined restrictions govern when such preprocessing can be
applied.
Ehm no. The optimal averaging strategy for ADEV is to do no averaging.
This is the hard lesson to learn. You can't really cheat if you aim to
get proper ADEV.
You can use averaging, and it will cause biased values, so you might use
the part with less bias, but there are safer ways of doing that: going
full MDEV or PDEV instead.
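The connection is visible in the estimator itself: the modified Allan deviation
is the same second difference, only taken on phase that has first been averaged
over m samples, which is exactly the kind of pre-filtering discussed here. A
minimal sketch:

import numpy as np

def mdev(x, tau0, m):
    # Mod sigma_y(m*tau0): average the phase over m samples, then second-difference.
    xa = np.convolve(x, np.ones(m) / m, mode="valid")
    d = xa[2 * m:] - 2 * xa[m:-m] + xa[:-2 * m]
    return np.sqrt(0.5 * np.mean(d ** 2)) / (m * tau0)

x = 1e-10 * np.random.default_rng(0).standard_normal(100_000)
print(mdev(x, 1e-3, 10))     # drops faster with tau than ADEV for white PM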
With biases, you have something similar to, but not actually, the ADEV.
The ITU-T has standardized TDEV measurements where they put
requirements on these things, and a similar strategy was used there:
putting requirements on the number of samples per second and on the
bandwidth, such that once tau becomes high enough the bias is tolerably
low over the range of taus being measured.
If it works, I would like to add such a mode for compatibility with the
widely used post-processing SW (TimeLab is a good example). Of course I
can do the calculations inside the counter without such limitations, but
that will be another data processing option (which might not always be
suitable).
I'm not saying you are necessarily incorrect, but it would be
interesting to hear your motivation.
The end goal is to have a counter mode where the counter produces data
suitable for post-processing for ADEV and other similar statistics, with
resolution better (or counter noise lower) than the single-shot one (Pi
counter). I understand that, if it is possible at all, the counter
resolution will be degraded compared to the usual Omega or Delta modes, and
there will also be some restrictions on the DUT noise under which such
processing can be applied.
Like in the ITU-T case, sure you can use filtering, but one needs to
drop the lower tau part to approximate the ADEV.
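A small hedged simulation of that point (illustrative noise levels only): with a
white-PM "counter" floor plus a white-FM "DUT", pre-averaging the phase pulls the
low-tau "ADEV" points below the true ADEV, while at taus where the DUT noise
dominates the two curves converge, which is why the lower-tau part has to be
dropped:

import numpy as np

def adev(x, tau0, m):
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.sqrt(0.5 * np.mean(d ** 2)) / (m * tau0)

rng = np.random.default_rng(2)
tau0, n_avg, n = 1e-3, 10, 1_000_000
x_dut = np.cumsum(1e-8 * rng.standard_normal(n)) * tau0    # white FM from the "DUT"
x_cnt = 1e-10 * rng.standard_normal(n)                      # white PM counter noise
x = x_dut + x_cnt
xa = x.reshape(-1, n_avg).mean(axis=1)                      # pre-averaged, decimated phase
for m in (1, 2, 5, 10, 100, 1000):
    tau = m * n_avg * tau0
    print(f"tau={tau:6.2f}s  raw={adev(x, tau0, m * n_avg):.2e}  averaged={adev(xa, n_avg * tau0, m):.2e}")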
Cross-talk exists for sure, but there is a similar effect too, one which is
not due to cross-talk but to how the counter is able to interpolate
certain frequencies.
I have no HW interpolator. A similar problem in the firmware was
discussed earlier and it is now fixed.
From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.
In fact, you can do an Omega-style counter you can use for PDEV, you just
need to use the right approach to be able to decimate the data. Oh,
there's a draft paper on that:
https://arxiv.org/abs/1604.01004
Thanks for the document. It will take some time to study, and maybe I will
add features to the counter to calculate a correct PDEV.
It suggests a very practical method for FPGA-based counters, so that you
can make use of the high rate of samples that you have and reduce it in
HW before handing off to SW. As you want to decimate data, you do not
want to lose the least-squares property, and this is a practical method
of achieving that.
I have no FPGA either :) All processing is in the FW; I will see how it
fits the HW architecture in use.
Doing it all in an FPGA has many benefits, but the HW would be more
complicated and pricier, with minimal benefit for my main goals.
Exactly what you mean by FW I don't get; for me, that is FPGA code.
You do not want to mix pre-filtering and ADEV that way. We can do things
better.
Are you talking about PDEV?
MDEV and PDEV are better approaches. They continue the filtering action
and allow for decimation of the data with known filtering properties.
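One reason the decimation works for MDEV: the average of m*k raw phase samples
equals the average of m block averages of k samples each, so block-averaging the
phase keeps the Lambda/MDEV filtering intact at the taus that survive decimation.
A small sketch (white-noise test data, names invented) checking that numerically:

import numpy as np

def mdev(x, tau0, m):
    xa = np.convolve(x, np.ones(m) / m, mode="valid")
    d = xa[2 * m:] - 2 * xa[m:-m] + xa[:-2 * m]
    return np.sqrt(0.5 * np.mean(d ** 2)) / (m * tau0)

rng = np.random.default_rng(3)
tau0, k = 1e-3, 10
x = 1e-10 * rng.standard_normal(200_000)
xd = x.reshape(-1, k).mean(axis=1)                 # decimated by block averaging
for m in (1, 10, 100):
    print(mdev(x, tau0, m * k), mdev(xd, k * tau0, m))   # agree within estimator noise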
Here is another question: how do I correctly calculate the averaging length
in a Delta counter? I have 5e6 timestamps in one second, so the Pi and Omega
counters process 5e6 samples in total and one measurement also has 5e6
samples, but the Delta one processes 10e6 in total, with each of the
averaged measurements having 5e6 samples. The Delta counter actually uses
twice as much data. What should be kept equal when comparing different counter
types: the number of samples in one measurement (gating time) or the
total number of samples processed?
How do you get such different event rates?
If you have 5 MHz, the rising edges give you 5E6 events per second, and
whichever type of processing you do, Pi (none), Delta or Omega, is just a
different type of post-processing on the raw phase data-set.
So, if I want to compare "apples to apples" (comparing Delta and
Omega/Pi processing), a single measurement of the Delta counter should
use half of the events (2.5E6) and the same number (2.5e6) of
measurements should be averaged, is that right? The counter in Delta
mode currently calculates results with 50% overlap, which gives 10e6
stamps for the 1 s output data rate (the counter averages 2 seconds of
data).
Do not report overlapping like that. Report it separately from the event
rate you have. So, OK, if you have a basic rate of 2.5e6 events per
second and overlapping processing for the Delta, the overlap doubles the
Delta processing report rate.
Cheers,
Magnus