time-nuts@lists.febo.com

Discussion of precise time and frequency measurement

Re: [time-nuts] Question about frequency counter testing

Magnus Danielson
Wed, Jun 6, 2018 9:18 PM

Hi Oleg,

On 06/06/2018 02:53 PM, Oleg Skydan wrote:

Hi, Magnus!

Sorry for the late answer. I injured my left eye last Monday, so I have had very limited ability to use a computer.

Sorry to hear that. Hope you heal up well and quickly.

From: "Magnus Danielson" magnus@rubidium.dyndns.org

As long as the sums C and D become correct, your path to them can be whatever.

Yes. It produces the same sums.

Yes, please do, then I can double-check it.

I have written a note and attached it. The described modifications to the original method were successfully tested on my experimental HW.

You should add the basic formula

x_{N_1+n} = x_{N_1} + x_n^0

prior to (5) and explain that the expected phase-ramp within the block will have a common offset in x_{N_1}, and that the x_n^0 series is the series of values with that offset removed. This is fine; it should just be introduced before it is applied in (5).

Notice that E as introduced in (8) and (9) is not needed, as you can
directly convert it into N(N_2-1)/2.

Anyway, you have clearly understood the toolbox given to you, and your contribution is to play the same game but reduce the needed dynamic range of the blocks. Neat. I may include that with due reference.
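
[Editorial illustration.] To make the dynamic-range point concrete, here is a minimal C sketch of accumulating the block sums with the common offset removed. It assumes C = sum(x_n) and D = sum(n*x_n) over a block of N_2 samples, which may differ from the note's exact definitions; it is an illustration, not the actual firmware.

/*
 * Illustration only: accumulate the block sums with the common offset
 * x_{N_1} removed, assuming C = sum(x_n) and D = sum(n * x_n).
 */
#include <stdint.h>

typedef struct {
    int64_t  C;    /* sum of offset-free samples x_n^0       */
    int64_t  D;    /* index-weighted sum of the same samples */
    uint32_t N2;   /* samples in the block                   */
} block_sums_t;

/* x[] holds N2 raw timestamps; x[0] plays the role of x_{N_1}. */
static void accumulate_block(const int64_t *x, uint32_t N2, block_sums_t *b)
{
    const int64_t x0 = x[0];            /* common offset of the block */
    b->C = 0;
    b->D = 0;
    b->N2 = N2;
    for (uint32_t n = 0; n < N2; n++) {
        int64_t xn0 = x[n] - x0;        /* x_n^0: small dynamic range */
        b->C += xn0;
        b->D += (int64_t)n * xn0;
    }
    /*
     * The sums over the raw samples are recovered analytically:
     *   C_raw = C + N2 * x0
     *   D_raw = D + x0 * N2 * (N2 - 1) / 2
     * i.e. the separate E sum reduces to the closed form of sum(n).
     */
}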

Yeah, now you can move your hardware focus to considering interpolation techniques beyond the processing power of least-squares estimation, which integrates noise way down.

If you are talking about adding traditional HW interpolation of the trigger events, I have no plans to do that. It is not possible while keeping the 2.5 ns base counter resolution (there is no way to output a 400 MHz clock signal out of the chip), and I do not want to add extra complexity to the HW of this project.

But the HW I use can simultaneously sample up to 10 timestamps. So, theoretically, I can push the one-shot resolution down to 250 ps using several delay lines. I do not think that going all the way down to 250 ps makes much sense (and I have other plans for that additional HW), but a 2x or 4x one-shot resolution improvement (down to 1.25 ns or 625 ps) is relatively simple to implement in HW and should be a good idea to try.

Sounds fun!
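
[Editorial illustration.] A hypothetical sketch of how several delay-line taps capturing the same trigger edge could be combined into one finer timestamp. None of the names or values below come from the actual hardware; it only shows the averaging idea.

/*
 * Combine K delay-line taps that capture the same edge (illustration only).
 */
#include <stdint.h>

#define K_TAPS 4   /* e.g. 4 taps -> roughly 625 ps effective step at a 2.5 ns LSB */

/*
 * raw[k]        : capture-counter value seen through tap k
 * tap_delay_q[k]: known extra delay of tap k, in units of 1/K_TAPS LSB
 * return value  : edge time in units of 1/K_TAPS of a counter LSB
 */
static int64_t combine_taps(const uint32_t raw[K_TAPS],
                            const int32_t tap_delay_q[K_TAPS])
{
    int64_t acc = 0;
    for (int k = 0; k < K_TAPS; k++) {
        /* scale to sub-LSB units and remove the tap's known delay */
        acc += (int64_t)raw[k] * K_TAPS - tap_delay_q[k];
    }
    return acc / K_TAPS;   /* average of the delay-corrected timestamps */
}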

I will probably throw out the power-hungry and expensive SDRAM chip or use a much smaller one :).

Yeah, it is only if you build multi-tau PDEV plots that you would need much memory; other than that it is just buffer memory to hold data before it goes to off-board processing, at which point you would need to convey the C, D, N and tau0 values.

Yes, I want to produce multi-tau PDEV plots :).

Makes good sense. :)

They can be computed with a small memory footprint, but they will be non-overlapped PDEVs, so the confidence level at large taus will be poor (with practical measurement durations). I have working code that implements such an algorithm. It uses only 272 bytes of memory for each decade (1-2-5 values).

Seems very reasonable. If you are willing to use more memory, you can do overlapping once you have decimated down to a suitable rate. On the other hand, considering the sample rate, there is a lot of gain there already.
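
[Editorial illustration.] As an example of how little state a non-overlapped multi-tau accumulator needs, here is a sketch of a per-decade structure in C. The field layout is an assumption, not the actual 272-byte (or later 140-byte) implementation; it assumes PDEV is formed from differences of least-squares frequency estimates over adjacent blocks.

/*
 * Compact per-decade accumulator for non-overlapped 1-2-5 PDEV (sketch).
 */
#include <stdint.h>

typedef struct {
    int64_t  C, D;       /* running block sums for the LS frequency estimate */
    uint32_t n;          /* samples accumulated in the current block         */
    uint32_t N;          /* samples per block, i.e. tau = N * tau0           */
    double   y_prev;     /* LS frequency estimate of the previous block      */
    double   sum_dy2;    /* sum of squared adjacent-block frequency diffs    */
    uint32_t pairs;      /* number of block pairs accumulated                */
    uint8_t  have_prev;  /* set once y_prev is valid                         */
} tau_acc_t;

/* One decade holds the 1x, 2x and 5x tau accumulators. */
typedef struct {
    tau_acc_t tau1, tau2, tau5;
} decade_acc_t;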

I need to think about how to do the overlapping PDEV calculations with minimal memory/processing power requirements (I am aware that the decimation routines should not use the overlapped calculations).

It's fairly simple: as you decimate samples and/or blocks, the produced blocks overlap one way or another. The multiple overlap variants should each behave as a complete PDEV stream, and their variances can then be added safely.
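
[Editorial illustration.] A sketch of that idea in C, assuming PDEV is estimated as half the mean square of the difference between least-squares frequency estimates of adjacent blocks: each start offset forms a complete non-overlapped stream, and the per-stream variances are then combined. Function and variable names are illustrative only.

#include <stddef.h>

/* Least-squares frequency (phase slope) over one block of m samples spaced tau0 (m >= 2). */
static double ls_freq(const double *x, size_t m, double tau0)
{
    double nbar = 0.5 * (double)(m - 1);
    double num = 0.0, den = 0.0;
    for (size_t n = 0; n < m; n++) {
        double d = (double)n - nbar;
        num += d * x[n];
        den += d * d;
    }
    return num / (den * tau0);
}

/* Non-overlapped PDEV variance for one start offset (adjacent block pairs). */
static double pdev_var_stream(const double *x, size_t len, size_t m, double tau0)
{
    double acc = 0.0;
    size_t pairs = 0;
    for (size_t k = 0; k + 2 * m <= len; k += m) {
        double dy = ls_freq(x + k + m, m, tau0) - ls_freq(x + k, m, tau0);
        acc += dy * dy;
        pairs++;
    }
    return pairs ? acc / (2.0 * (double)pairs) : 0.0;
}

/* Overlapped estimate: combine (average) the variances of all m start offsets. */
double pdev_var_overlapped(const double *x, size_t len, size_t m, double tau0)
{
    double acc = 0.0;
    for (size_t off = 0; off < m; off++)
        acc += pdev_var_stream(x + off, len - off, m, tau0);
    return acc / (double)m;
}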

BTW, is there any "optimal overlapping"? Or should I just use as much data as I can process?

"optimal overlapping" would be when all overlapping variants is used,
that is all with tau0 offsets available. When done for Allan Deviation
some refer to this as OADEV. This is however an misnomer as it is an
ADEV estimator which just has better confidence intervals than the
non-overlapping ADEV estimator. Thus, both estimator algorithms have the
same scale of measure, that of ADEV, but different amount of Equivalent
Degrees of Freedom (EDF) which has direct implications on the confidence
interval bounds. The more EDF, the better confidence interval. The more
overlapping, the more EDF. Further improvements would be TOTAL ADEV and
Theo, which both aim to squeeze out as much EDF as possible from the
dataset, in an attempt of reducing the length of measurement.
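
[Editorial illustration.] For reference, the standard overlapping ADEV estimator from phase samples, written as a C sketch; it estimates the same ADEV as the non-overlapped form, it just steps the second difference by one sample instead of by m.

#include <math.h>
#include <stddef.h>

/* Overlapping ADEV from N phase samples x[] spaced tau0, at tau = m * tau0. */
double adev_overlapped(const double *x, size_t N, size_t m, double tau0)
{
    if (N <= 2 * m)
        return 0.0;                       /* not enough data for this tau */
    double tau = (double)m * tau0;
    double acc = 0.0;
    size_t terms = N - 2 * m;
    for (size_t i = 0; i < terms; i++) {  /* step of 1 = overlapping */
        double d = x[i + 2 * m] - 2.0 * x[i + m] + x[i];
        acc += d * d;
    }
    return sqrt(acc / (2.0 * tau * tau * (double)terms));
}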

Please report on that progress! Sounds fun!

I will drop a note when I move on to the next step. Things are a bit slower now.

Take care. Heal up properly. It's a hobby after all. :)

Good work there.

Cheers,
Magnus

Thanks!
Oleg


Oleg Skydan
Thu, Jun 21, 2018 1:05 PM

Hi!

From: "Magnus Danielson" magnus@rubidium.dyndns.org

I have written a note and attached it. The described modifications to the original method were successfully tested on my experimental HW.

You should add the basic formula

x_{N_1+n} = x_{N_1} + x_n^0

prior to (5) and explain that the expected phase-ramp within the block will have a common offset in x_{N_1}, and that the x_n^0 series is the series of values with that offset removed. This is fine; it should just be introduced before it is applied in (5).

I have corrected the document and put it here:
http://skydan.in.ua/FC/Efficient_C_and_D_sums.pdf

It should be clearer now.

Notice that E as introduced in (8) and (9) is not needed, as you can
directly convert it into N(N_2-1)/2.

Oh! I should have noticed that, thanks for the valuable comment!

They can be computed with a small memory footprint, but they will be non-overlapped PDEVs, so the confidence level at large taus will be poor (with practical measurement durations). I have working code that implements such an algorithm. It uses only 272 bytes of memory for each decade (1-2-5 values).

Seems very reasonable. If you are willing to use more memory, you can do overlapping once you have decimated down to a suitable rate. On the other hand, considering the sample rate, there is a lot of gain there already.

I have optimized the continuous PDEV calculation algorithm, and it uses only 140 bytes per decade now.

I will probably not implement overlapping PDEV calculations, to keep things simple (with no external memory), and will just do the continuous PDEV calculations. The more sophisticated calculations can easily be done on the PC side.

... but a 2x or 4x one-shot resolution improvement (down to 1.25 ns or 625 ps) is relatively simple to implement in HW and should be a good idea to try.

So, I tried it with "quick and dirty" HW. It turned out to be not as simple in real life :) There was a problem (probably a crosstalk or grounding issue) which led to unstable phase measurements. So I got no improvement (the results with 1.25 ns resolution were worse than with the 2.5 ns resolution). I have to do more experiments with a better HW implementation.

Take care. Heal up properly. It's a hobby after all. :)

Thanks!

Best!
Oleg

Magnus Danielson
Sat, Jun 23, 2018 4:40 PM

Hi Oleg,

On 06/21/2018 03:05 PM, Oleg Skydan wrote:

Hi!

From: "Magnus Danielson" magnus@rubidium.dyndns.org

I have written a note and attached it. The described modifications to the original method were successfully tested on my experimental HW.

You should add the basic formula

x_{N_1+n} = x_{N_1} + x_n^0

prior to (5) and explain that the expected phase-ramp within the block will have a common offset in x_{N_1}, and that the x_n^0 series is the series of values with that offset removed. This is fine; it should just be introduced before it is applied in (5).

I have corrected the document and put it here:
http://skydan.in.ua/FC/Efficient_C_and_D_sums.pdf

It should be clearer now.

It is much better now. You should consider publishing it, with more description of the surrounding setup.

Notice that E as introduced in (8) and (9) is not needed, as you can
directly convert it into N(N_2-1)/2.

Oh! I should have noticed that, thanks for the valuable comment!

Well, you should realize that it is exactly sums like these that I need to solve for the full processing trick, so it was natural and should be used even for this application of the basic approach.

They can be computed with a small memory footprint, but they will be non-overlapped PDEVs, so the confidence level at large taus will be poor (with practical measurement durations). I have working code that implements such an algorithm. It uses only 272 bytes of memory for each decade (1-2-5 values).

Seems very reasonable. If you are willing to use more memory, you can do overlapping once you have decimated down to a suitable rate. On the other hand, considering the sample rate, there is a lot of gain there already.

I have optimized the continuous PDEV calculation algorithm, and it uses only 140 bytes per decade now.

I will probably not implement overlapping PDEV calculations, to keep things simple (with no external memory), and will just do the continuous PDEV calculations. The more sophisticated calculations can easily be done on the PC side.

Notice that tau0, N, C and D should be delivered to the PC one way or another. That is what you need in order to continue and extend the processing, so you do not want to produce phase or frequency measures.
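
[Editorial illustration.] A hypothetical sketch of one such record, assuming C = sum(x_n) and D = sum(n * x_n); the least-squares frequency of a block can then be recovered on the PC from these four values alone. Field names and widths are assumptions.

/*
 * One block record sent to the PC (sketch). With the assumed C and D,
 * the block's LS frequency is
 *   y = (D - C * (N - 1) / 2) / (tau0 * N * (N * N - 1) / 12)
 * so no phase or frequency measure has to be formed on the instrument.
 */
#include <stdint.h>

typedef struct {
    double   tau0;   /* sample spacing within the block, seconds */
    uint32_t N;      /* number of samples in the block           */
    int64_t  C;      /* block phase sum                          */
    int64_t  D;      /* index-weighted block phase sum           */
} block_record_t;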

... but a 2x or 4x one-shot resolution improvement (down to 1.25 ns or 625 ps) is relatively simple to implement in HW and should be a good idea to try.

So, I tried it with "quick and dirty" HW. It turned out to be not as simple in real life :) There was a problem (probably a crosstalk or grounding issue) which led to unstable phase measurements. So I got no improvement (the results with 1.25 ns resolution were worse than with the 2.5 ns resolution). I have to do more experiments with a better HW implementation.

Yes, for timing at that level you need to get good separation, and ground bounce can be troublesome. It's a future improvement if you can learn how to design that part properly.

Cheers,
Magnus
