time-nuts@lists.febo.com

Discussion of precise time and frequency measurement


Re: [time-nuts] Question about frequency counter testing

BK
Bob kb8tq
Sun, May 13, 2018 1:49 PM

Hi

I guess it is time to ask:

Is this a commercial product you are designing?

If so, that raises a whole added layer to this discussion in terms of “does it do
what it says it does?”.

Bob

On May 13, 2018, at 3:07 AM, Oleg Skydan olegskydan@gmail.com wrote:

Hi Bob!

From: "Bob kb8tq" kb8tq@n1k.org

It’s only useful if it is accurate. Since you can “do code” that gives you results that are better than reality,
simply coming up with a number is not the full answer. To be useful as ADEV, it needs to be correct.

I understand that, so I am trying to investigate the problem to see what can be done (if anything :).

I’m sure it will come out to be a very cool counter. My only concern here is creating inaccurate results
by stretching too far with what you are trying to do. Keep it to the stuff that is accurate.

I am interested in accurate results, or at least results with well-defined limitations, for a few specific measurements/modes. So I will try to make the results as accurate as they can be while keeping the hardware simple.

Thanks!
Oleg



OS
Oleg Skydan
Sun, May 13, 2018 5:31 PM

Hi Bob!

From: "Bob kb8tq" kb8tq@n1k.org

I guess it is time to ask:

Is this a commercial product you are designing?

No. I have no ability to produce it commercially, and I see no market for
such a product. I will build one unit for myself, and I may build several more
units for friends or for anybody who likes it. I will show the HW details
when it is ready.

What do I gain by doing it?

  1. I will have a new counter that suits my current needs
  2. I will study something new

If so, that raises a whole added layer to this discussion in terms of
“does it do
what it says it does?”.

This question is also important for amateur/hobby measurement equipment. I
do not need equipment that "does not do what it says it does" even if it is
built for hobby use.

The topic of *DEV calculations has many important details I want to
understand correctly. Sorry if I asked too many questions (some of them were
probably naive), and thank you for the help, it is very much appreciated! I hope
our discussion is useful not only for me.

Thanks!
Oleg

BK
Bob kb8tq
Sun, May 13, 2018 6:09 PM

Hi

On May 13, 2018, at 1:31 PM, Oleg Skydan olegskydan@gmail.com wrote:

Hi Bob!

From: "Bob kb8tq" kb8tq@n1k.org

I guess it is time to ask:

Is this a commercial product you are designing?

No. I have no ability to produce it commercially, and I see no market for such a product. I will build one unit for myself, and I may build several more units for friends or for anybody who likes it. I will show the HW details when it is ready.

What do I gain by doing it?

  1. I will have a new counter that suits my current needs
  2. I will study something new

If so, that raises a whole added layer to this discussion in terms of “does it do
what it says it does?”.

This question is also important for amateur/hobby measurement equipment. I do not need equipment that "does not do what it says it does" even if it is built for hobby use.

The topic of *DEV calculations has many important details I want to understand correctly. Sorry if I asked too many questions (some of them were probably naive), and thank you for the help, it is very much appreciated! I hope our discussion is useful not only for me.

You are very much not the first person to run into these issues. They date back to the very early use of things like
ADEV. The debate has been active ever since. There are a few other sub-debates that also come up. The proper
definition of ADEV allows “drift correction” to be used. Just how you do drift correction is up to you. As with filtering,
drift elimination impacts the results. It also needs to be defined (if used).

Bob

Thanks!
Oleg



MD
Magnus Danielson
Sun, May 13, 2018 6:40 PM

Hi,

On 05/13/2018 08:09 PM, Bob kb8tq wrote:

If so, that raises a whole added layer to this discussion in terms of “does it do
what it says it does?”.

This question is also important for amateur/hobby measurement equipment. I do not need equipment that "does not do what it says it does" even if it is built for hobby use.

The topic of *DEV calculations has many important details I want to understand correctly. Sorry if I asked too many questions (some of them were probably naive), and thank you for the help, it is very much appreciated! I hope our discussion is useful not only for me.

You are very much not the first person to run into these issues. They date back to the very early use of things like
ADEV. The debate has been active ever since. There are a few other sub-debates that also come up. The proper
definition of ADEV allows “drift correction” to be used. Just how you do drift correction is up to you. As with filtering,
drift elimination impacts the results. It also needs to be defined (if used).

There are actually two uses of ADEV: one is to represent the amplitude of
the various noise types, and the other is to represent the behavior of
the frequency measure. The classical use is the former, and you do not
want to fool those estimates, but for the latter pre-filtering is not
only allowed but encouraged!

Cheers,
Magnus

OS
Oleg Skydan
Sun, May 13, 2018 9:13 PM

Hi Magnus,

From: "Magnus Danielson" magnus@rubidium.dyndns.org

I would be inclined to just continue the MDEV compliant processing
instead. If you want the matching ADEV, rescale it using the
bias-function, which can be derived out of p.51 of that presentation.
You just need to figure out the dominant noise-type of each range of
tau, something which is much simpler in MDEV since White PM and Flicker
PM separates more clearly than the weak separation of ADEV.

As you measure a DUT, the noise of the DUT, the noise of the counter and
the systematics of the counter adds up and we cannot distinguish them in
that measurement.

Probably I did not express what I meant clearly. I understand that we
cannot separate them, but if the DUT noise has most of its power inside the
filter BW while the instrument noise is wideband, we can filter out part of
the instrument noise with minimal influence on the DUT noise.
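
As an illustration of this pre-filtering idea (a sketch only, not Oleg's firmware; the
function and array names are assumptions), a block average of the raw phase samples is
about the simplest form it can take, in Python:

    import numpy as np

    def prefilter_phase(phase_s, m):
        """Average m adjacent phase (time-error) samples into one.

        White-PM instrument noise drops roughly as 1/sqrt(m), while DUT noise
        that lies well inside the reduced bandwidth passes almost unchanged.
        The output sample spacing is m times the input spacing.
        """
        x = np.asarray(phase_s, dtype=float)
        n = (x.size // m) * m              # drop the ragged tail
        return x[:n].reshape(-1, m).mean(axis=1)

    # Hypothetical use: raw samples every 1 ms averaged down to 10 ms spacing
    # x10 = prefilter_phase(x_raw, 10)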

There are measurement setups, such as
cross-correlation, which make multiple measurements in parallel and
can start to combat the noise separation issue.

Yes, I am aware of that technique. I even did some experiments with cross-correlation
phase noise measurements.

Ehm no. The optimal averaging strategy for ADEV is to do no averaging.
This is the hard lesson to learn. You can't really cheat if you aim to
get proper ADEV.

You can use averaging, and it will cause biased values, so you might use
the part with less bias, but there are safer ways of doing that, by going
full MDEV or PDEV instead.

With biases, you have something similar to, but not being the ADEV.

OK. It looks like the last sentence describes very precisely what I was
going to do, so we understood each other correctly. Summarizing the discussion,
as far as I understand, the best strategy regarding *DEV calculations is:

  1. Make MDEV the primary variant. It is suitable for calculation inside the
    counter as well as for exporting data for later post-processing.
  2. Study how the PDEV calculation fits on the used HW. If it can be done
    in real time, a PDEV option can be added.
  3. ADEV can be safely calculated only from the Pi mode counter data.
    It will probably not be very useful because of the low single-shot resolution,
    but the Pi mode and corresponding data export can be easily added.

I think it will be more than enough for my needs, at least now.
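
For reference, a minimal post-processing sketch of what items 1 and 3 of the summary
above would compute from exported phase (time-error) data; this is plain textbook
overlapping ADEV and MDEV in Python with assumed variable names, not the counter's
firmware:

    import numpy as np

    def oadev(x, tau0, m):
        """Overlapping ADEV at tau = m*tau0 from phase data x (seconds)."""
        x = np.asarray(x, dtype=float)
        d = x[2*m:] - 2.0 * x[m:-m] + x[:-2*m]     # second differences of phase
        avar = np.sum(d * d) / (2.0 * (m * tau0)**2 * d.size)
        return np.sqrt(avar)

    def mdev(x, tau0, m):
        """MDEV at tau = m*tau0: like ADEV, but with phase averaged over m samples."""
        x = np.asarray(x, dtype=float)
        d = x[2*m:] - 2.0 * x[m:-m] + x[:-2*m]
        c = np.concatenate(([0.0], np.cumsum(d)))
        s = c[m:] - c[:-m]                         # moving sums of m second differences
        mvar = np.sum(s * s) / (2.0 * m**2 * (m * tau0)**2 * s.size)
        return np.sqrt(mvar)

    # Hypothetical use with phase samples exported every tau0 = 1 ms:
    # x = np.loadtxt("phase_export.txt")
    # print(oadev(x, 1e-3, 100), mdev(x, 1e-3, 100))   # tau = 0.1 s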

From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.

Yes. It is approx. 400MHz.

I have no FPGA also :) All processing is in the FW, I will see how it
fits the used HW architecture.

Doing it all in FPGA has many benefits, but the HW will be more
complicated and pricier with minimal benefits for my main goals.

Exactly what you mean by FW now I don't get, for me that is FPGA code.

I meant MCU code, to make things clearer I can use the SW term for it.

Thank you for the answers and explanations, they are highly appreciated!

All the best!
Oleg

BK
Bob kb8tq
Sun, May 13, 2018 10:47 PM

Hi

On May 13, 2018, at 5:13 PM, Oleg Skydan olegskydan@gmail.com wrote:

Hi Magnus,

From: "Magnus Danielson" magnus@rubidium.dyndns.org

I would be inclined to just continue the MDEV compliant processing
instead. If you want the matching ADEV, rescale it using the
bias-function, which can be derived out of p.51 of that presentation.
You just need to figure out the dominant noise-type of each range of
tau, something which is much simpler in MDEV since White PM and Flicker
PM separates more clearly than the weak separation of ADEV.

As you measure a DUT, the noise of the DUT, the noise of the counter and
the systematics of the counter adds up and we cannot distinguish them in
that measurement.

Probably I did not express what I meant clearly. I understand that we cannot separate them, but if the DUT noise has most of its power inside the filter BW while the instrument noise is wideband, we can filter out part of the instrument noise with minimal influence on the DUT noise.

There is measurement setups, such as
cross-correlation, which makes multiple measurements in parallel which
can start combat the noise separation issue.

Yes, I am aware of that technique. I even did some experiments with cross-correlation phase noise measurements.

Ehm no. The optimal averaging strategy for ADEV is to do no averaging.
This is the hard lesson to learn. You can't really cheat if you aim to
get proper ADEV.

You can use averaging, and it will cause biased values, so you might use
the part with less bias, but there are safer ways of doing that, by going
full MDEV or PDEV instead.

With biases, you have something similar to, but not being the ADEV.

OK. It looks like the last sentence describes very precisely what I was going to do, so we understood each other correctly. Summarizing the discussion, as far as I understand, the best strategy regarding *DEV calculations is:

  1. Make MDEV the primary variant. It is suitable for calculation inside the counter as well as for exporting data for later post-processing.
  2. Study how the PDEV calculation fits on the used HW. If it can be done in real time, a PDEV option can be added.
  3. ADEV can be safely calculated only from the Pi mode counter data. It will probably not be very useful because of the low single-shot resolution, but the Pi mode and corresponding data export can be easily added.

I think it will be more than enough for my needs, at least now.

From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.

Yes. It is approx. 400MHz.

I think I would spend more time working out what happens at “about 400 MHz” X N or
“about 400 MHz / M” …….

Bob

I have no FPGA also :) All processing is in the FW, I will see how it
fits the used HW architecture.

Doing it all in FPGA has many benefits, but the HW will be more
complicated and pricier with minimal benefits for my main goals.

Exactly what you mean by FW now I don't get, for me that is FPGA code.

I meant MCU code, to make things clearer I can use the SW term for it.

Thank you for the answers and explanations, they are highly appreciated!

All the best!
Oleg



OS
Oleg Skydan
Mon, May 14, 2018 9:25 AM

Hi Bob!

From: "Bob kb8tq" kb8tq@n1k.org

I think it will be more than enough for my needs, at least now.

From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.

Yes. It is approx. 400MHz.

I think I would spend more time working out what happens at “about 400
MHz” X N or
“about 400 MHz / M” …….

If such conditions are detected, I avoid the problem by changing the counter clock.
But that does not solve the effects at "about OCXO" * N or "about OCXO" / M.
This is related to the HW and I can probably control it only partially. I will try
to improve clock and reference isolation in the "normal" HW, and of course I
will thoroughly test such frequencies when that HW is ready.

All the best!
Oleg

BK
Bob kb8tq
Mon, May 14, 2018 1:13 PM

Hi

On May 14, 2018, at 5:25 AM, Oleg Skydan olegskydan@gmail.com wrote:

Hi Bob!

From: "Bob kb8tq" kb8tq@n1k.org

I think it will be more than enough for my needs, at least now.

From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.

Yes. It is approx. 400MHz.

I think I would spend more time working out what happens at “about 400  MHz” X N or
“about 400 MHz / M” …….

If such conditions are detected, I avoid the problem by changing the counter clock. But that does not solve the effects at "about OCXO" * N or "about OCXO" / M. This is related to the HW and I can probably control it only partially. I will try to improve clock and reference isolation in the "normal" HW, and of course I will thoroughly test such frequencies when that HW is ready.

It’s a very common problem in this sort of counter. The “experts” have a lot of trouble with it
on their designs. One answer with simple enough hardware could be to run two clocks
all the time. Digitize them both and process the results from both … just a thought … You
still have the issue of a frequency that is a multiple (or sub-multiple) of both clocks. With
some care in clock selection you could make that a pretty rare occurrence (thus making
it easy to identify in firmware …).
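
A sketch of the kind of firmware check Bob describes (purely illustrative; the tolerance,
maximum order, and names are assumptions): flag an input frequency that lands close to a
harmonic or sub-harmonic of a count clock, and with two clocks treat an input as
problematic only if it is flagged for both.

    def near_clock_coincidence(f_in, f_clk, rel_tol=1e-6, max_order=100):
        """True if f_in is within rel_tol of N*f_clk or f_clk/M (N, M <= max_order)."""
        for k in range(1, max_order + 1):
            if abs(f_in - k * f_clk) <= rel_tol * f_in:   # near a clock harmonic
                return True
            if abs(f_in - f_clk / k) <= rel_tol * f_in:   # near a clock sub-harmonic
                return True
        return False

    # With clocks A and B, only an input coincident with both remains a problem:
    # suspect = near_clock_coincidence(f_in, f_a) and near_clock_coincidence(f_in, f_b)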

Bob

All the best!
Oleg



OS
Oleg Skydan
Mon, May 14, 2018 5:50 PM

Hi!

From: "Bob kb8tq" kb8tq@n1k.org

If such conditions are detected, I avoid the problem by changing the counter
clock. But that does not solve the effects at "about OCXO" * N or "about
OCXO" / M. This is related to the HW and I can probably control it only
partially. I will try to improve clock and reference isolation in the
"normal" HW, and of course I will thoroughly test such frequencies when
that HW is ready.

It’s a very common problem in this sort of counter. The “experts” have a
lot of trouble with it
on their designs. One answer with simple enough hardware could be to run
two clocks
all the time. Digitize them both and process the results from both.

I thought about such a solution; unfortunately, it cannot be implemented
because of HW limitations. Switching the 400 MHz clock is also not an ideal
solution, because it will cause trouble for the GPS correction calculations. The
latter can be fixed in software, but it is not an elegant solution. It all
still needs some polishing...

still have the issue of a frequency that is a multiple (or sub multiple)
of both clocks.

The clocks (if we are talking about 400 MHz) have very interesting values like
397501220.703 Hz or 395001831.055 Hz, so it will really occur very rarely.
Also, I am not limited to two or three values, so clock switching should
solve the problem, but not in an elegant way, because it breaks the normal
operation of the GPS frequency correction algorithm, so additional steps to
fix that will be required :-\.

BTW, after a quick check of the GPS module specs and the OCXO's, it looks like
a very simple algorithm can be used for frequency correction. The OCXO frequency
can be measured against GPS over a long enough period (some thousands of
seconds; the LR algorithm can be used here as well) and we get a correction
coefficient. It can be updated at a rate of once per second (probably we do not
need to do it that fast). I do not believe it can be that simple. I feel I
missed something :)...
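
A sketch of the simple correction Oleg describes (assumed variable names; what the
counter actually measures against the GPS PPS is up to the implementation): the
least-squares slope of the OCXO-vs-GPS time error over the window is the mean fractional
frequency offset, which becomes the correction coefficient.

    import numpy as np

    def ocxo_frequency_offset(time_error_s, tau0=1.0):
        """Least-squares (LR) estimate of the OCXO fractional frequency offset.

        time_error_s: OCXO-derived PPS minus GPS PPS, one reading every tau0
        seconds.  The fitted slope has units of s/s, i.e. it is directly the
        fractional frequency offset y; readings can then be scaled by 1/(1 + y).
        """
        x = np.asarray(time_error_s, dtype=float)
        t = np.arange(x.size) * tau0
        slope, _ = np.polyfit(t, x, 1)     # slope = fractional frequency offset
        return slope

    # Hypothetical use over ~3000 one-second samples:
    # y = ocxo_frequency_offset(pps_errors)       # e.g. +2.3e-10
    # corrected_freq = raw_freq / (1.0 + y)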

All the best!
Oleg

BK
Bob kb8tq
Mon, May 14, 2018 7:15 PM

Hi

On May 14, 2018, at 1:50 PM, Oleg Skydan olegskydan@gmail.com wrote:

Hi!

From: "Bob kb8tq" kb8tq@n1k.org

If such conditions are detected, I avoid the problem by changing the counter clock. But that does not solve the effects at "about OCXO" * N or "about OCXO" / M. This is related to the HW and I can probably control it only partially. I will try to improve clock and reference isolation in the "normal" HW, and of course I will thoroughly test such frequencies when that HW is ready.

It’s a very common problem in this sort of counter. The “experts” have a lot of trouble with it
on their designs. One answer with simple enough hardware could be to run two clocks
all the time. Digitize them both and process the results from both.

I thought about such a solution; unfortunately, it cannot be implemented because of HW limitations. Switching the 400 MHz clock is also not an ideal solution, because it will cause trouble for the GPS correction calculations. The latter can be fixed in software, but it is not an elegant solution. It all still needs some polishing...

still have the issue of a frequency that is a multiple (or sub multiple) of both clocks.

The clocks (if we are talking about 400 MHz) have very interesting values like 397501220.703 Hz or 395001831.055 Hz, so it will really occur very rarely. Also, I am not limited to two or three values, so clock switching should solve the problem, but not in an elegant way, because it breaks the normal operation of the GPS frequency correction algorithm, so additional steps to fix that will be required :-\.

What I’m suggesting is that if the hardware is very simple and very cheap, simply put two chips on the board.
One runs at Clock A and the other runs at Clock B. At some point in the process you move the decimated data
from B over to A and finish out all the math there ….

BTW, after a quick check of the GPS module specs and the OCXO's, it looks like a very simple algorithm can be used for frequency correction. The OCXO frequency can be measured against GPS over a long enough period (some thousands of seconds; the LR algorithm can be used here as well) and we get a correction coefficient. It can be updated at a rate of once per second (probably we do not need to do it that fast). I do not believe it can be that simple. I feel I missed something :)...

That is one way it is done. A lot depends on the accuracy of the GPS PPS on your module. It is unfortunately fairly easy to find
modules that are in the tens of ns of error on a second-to-second basis. Sawtooth correction can help this a bit. OCXOs have warm-up
characteristics that can also move them a bit in the first hours of use.

More or less, with a thousand-second observation time you will likely get below parts in 10^-10, but maybe not to the 1x10^-11 level.
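
A quick back-of-envelope check of that figure (the 10 ns number is only an assumption
taken from the remark above, not a particular module's spec):

    # Two-endpoint frequency estimate over a 1000 s window with ~10 ns PPS error
    pps_error = 10e-9        # per-reading GPS PPS error, seconds (assumed)
    window = 1000.0          # observation time, seconds
    print(f"~{(2**0.5) * pps_error / window:.1e}")   # prints ~1.4e-11
    # A least-squares fit over all points does better for white noise, but
    # correlated PPS wander and OCXO warm-up drift usually leave the practical
    # floor at a few parts in 1e-11 to 1e-10, consistent with the estimate above.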

Bob

All the best!
Oleg


