Oleg Skydan
Tue, May 15, 2018 11:47 AM
Hi
From: "Bob kb8tq" <kb8tq@n1k.org>
> What I’m suggesting is that if the hardware is very simple and very cheap,
> simply put two chips on the board.
> One runs at Clock A and the other runs at Clock B. At some point in the
> process you move the decimated data
> from B over to A and finish out all the math there ….
The hardware is simple and cheap because it is all digital and requires no
calibration, and the same HW is capable of driving the TFT, handling the UI,
and providing all the control functions for the input conditioning circuits,
the GPS module, etc. It also provides a USB interface for data exchange and
remote control. So doubling it is not the way to go if I want to keep things
simple and relatively cheap.
I think I will stay with the current plans for the HW and try to handle the
GPS timing troubles in software. I have to build an initial variant of the
HW so I can move on with the SW part towards a useful counter. Then I will
see how well it performs and decide whether it satisfies the requirements or
something needs to change.
>> BTW, after quick check of the GPS module specs and OCXO's one it looks
>> like a very simple algorithm can be used for frequency correction. OCXO
>> frequency can be measured against GPS for a long enough period (some
>> thousands of seconds, LR algorithm can be used here also) and we have got
>> a correction coefficient. It can be updated at a rate of one second
>> (probably we do not need to do it as fast). I do not believe it can be as
>> simple. I feel I missed something :)…
>
> That is one way it is done. A lot depends on the accuracy of the GPS PPS
> on your module.
The module is a u-blox NEO-6M. I know the NEO-6T is better suited to my
needs, but the first one was easy to get and insanely cheap. It should be
enough to start with.
> More or less, with a thousand second observation time you will likely get
> below parts in 10^-10, but maybe not to the 1x10^-11 level.
1e-10 should satisfy my requirements. A more sophisticated algorithm can be
developed and used later, if needed.
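
As a rough illustration of the idea above - latch the OCXO-clocked counter
on each GPS PPS edge and fit a least-squares slope over a long window - a
sketch in C follows. The function name, the capture format and the
assumption that the counter does not wrap during the window are inventions
for the example, not the actual firmware.

#include <stddef.h>

/* Estimate the OCXO fractional frequency error against GPS PPS with a
 * least-squares (LR) fit.  counts[k] is the free-running OCXO-clocked
 * counter latched on the k-th PPS edge, f_nom the nominal clock (e.g.
 * 400e6).  Assumes the counter does not wrap inside the window.       */
double ocxo_ffe_vs_gps(const unsigned long long *counts, size_t n, double f_nom)
{
    if (n < 2)
        return 0.0;

    double t_mean = 0.0, x_mean = 0.0;
    for (size_t k = 0; k < n; k++) {
        t_mean += (double)k;                               /* GPS time, 1 s steps   */
        x_mean += (double)(counts[k] - counts[0]) / f_nom; /* OCXO phase in seconds */
    }
    t_mean /= (double)n;
    x_mean /= (double)n;

    double num = 0.0, den = 0.0;
    for (size_t k = 0; k < n; k++) {
        double dt = (double)k - t_mean;
        double dx = (double)(counts[k] - counts[0]) / f_nom - x_mean;
        num += dt * dx;
        den += dt * dt;
    }

    /* Slope of OCXO phase vs. GPS time; 1.0 would be a perfect clock. */
    return num / den - 1.0;   /* fractional frequency error, e.g. +1e-10 */
}

With a few thousand PPS captures this is the "thousands of seconds" fit
mentioned above; the returned fractional error is what the correction
coefficient is built from.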
Thanks!
Oleg
Magnus Danielson
Thu, May 17, 2018 9:36 PM
Hi,
On 05/13/2018 11:13 PM, Oleg Skydan wrote:
> Hi Magnus,
>
> From: "Magnus Danielson" <magnus@rubidium.dyndns.org>
>> I would be inclined to just continue the MDEV compliant processing
>> instead. If you want the matching ADEV, rescale it using the
>> bias-function, which can be derived out of p.51 of that presentation.
>> You just need to figure out the dominant noise-type of each range of
>> tau, something which is much simpler in MDEV since White PM and Flicker
>> PM separates more clearly than the weak separation of ADEV.
>
>
>> As you measure a DUT, the noise of the DUT, the noise of the counter and
>> the systematics of the counter adds up and we cannot distinguish them in
>> that measurement.
>
> Probably I did not express what I meant clearly. I understand that we
> can not separate them, but if the DUT noise has most of the power inside
> the filter BW while instrument noise is wideband one, we can filter out
> part of instrument noise with minimal influence to the DUT one.
Yes, if for a certain range you can show that the instrument's noise is
not dominant, then you are measuring the DUT. This is what happens as the
1/tau slope of the ADEV reaches down to the DUT noise, where the resulting
curve is mostly DUT noise.
We may then hunt for better counters to shift that slope leftwards on the
plot and see more of the DUT noise.
>> There is measurement setups, such as
>> cross-correlation, which makes multiple measurements in parallel which
>> can start combat the noise separation issue.
>
> Yes, I am aware of that technique. I event did some experiments with
> cross correlation phase noise measurements.
Check.
>> Ehm no. The optimal averaging strategy for ADEV is to do no averaging.
>> This is the hard lesson to learn. You can't really cheat if you aim to
>> get proper ADEV.
>>
>> You can use averaging, and it will cause biased values, so you might use
>> the part with less bias, but there is safer ways of doing that, by going
>> full MDEV or PDEV instead.
>>
>> With biases, you have something similar to, but not being _the_ ADEV.
>
> OK. It looks like the last sentence very precisely describes what I was
> going to do, so we understood each other right. Summarizing the
> discussion, as far as I understand, the best strategy regarding *DEV
> calculations is:
> 1. Make MDEV the primary variant. It is suitable for calculation inside
> counter as well as for exporting data for the following post processing.
Doable.
> 2. Study how PDEV calculation fits on the used HW. If it is possible to
> do in real time PDEV option can be added.
You build two sums C and D: one is the sum of the phase samples, the other
the sum of the phase samples scaled by their index n in the block. From
these you can then, using the formulas I provided, calculate the
least-square phase and frequency, and using the least-square frequency
measures you can do PDEV. The up-front processing is thus cheap, and there
are methods to combine measurement blocks into longer measurement blocks,
thus decimation, using relatively simple linear processing on the block
sums C and D with their respective lengths. The end result is that you can
very cheaply decimate data in HW/FW and then extend the properties to
arbitrarily long observation intervals using cheap software processing,
creating unbiased least-square measurements this way. Once the linear
algebra of least-square processing has vanished in a puff of logic, it is
fairly simple processing with very small memory requirements. For
multi-tau, you can reach O(N log N) type of processing rather than O(N^2),
which is pretty cool.
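
As a rough illustration of that bookkeeping (one reading of the scheme, not
reference code), assuming the index n runs 0..N-1 inside a block and the
phase samples are tau0 seconds apart:

/* Block sums as described above: C = sum of x_n, D = sum of n*x_n. */
typedef struct {
    double C;         /* sum of phase samples x_n       */
    double D;         /* sum of n * x_n                 */
    unsigned long N;  /* number of samples in the block */
} block_t;

/* Least-squares frequency over one block (standard LS slope, N >= 2):
 * slope = sum((n - nbar)*x_n) / (tau0 * sum((n - nbar)^2))
 * with nbar = (N-1)/2 and sum((n - nbar)^2) = N*(N^2 - 1)/12.        */
static double block_ls_freq(const block_t *b, double tau0)
{
    double N = (double)b->N;
    return (b->D - 0.5 * (N - 1.0) * b->C) * 12.0 / (tau0 * N * (N * N - 1.0));
}

/* Decimation: concatenate block a followed by block b.  Only (C, D, N)
 * are needed; the indices of b's samples are shifted by a->N.         */
static block_t block_concat(const block_t *a, const block_t *b)
{
    block_t r;
    r.C = a->C + b->C;
    r.D = a->D + b->D + (double)a->N * b->C;
    r.N = a->N + b->N;
    return r;
}

Combining blocks is just a handful of multiply-adds on (C, D, N),
independent of how many raw timestamps they represent, which is what makes
the decimation cheap.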
I hope to have an updated version of that article available soon.
> 3. ADEV can be safely calculated only from the Pi mode counter data.
> Probably it will not be very useful because of low single shoot
> resolution, but Pi mode and corresponding data export can be easily added.
You will be assured it is bias-free. You want to have that option.
> I think it will be more than enough for my needs, at least now.
>
>> From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.
>
> Yes. It is approx. 400MHz.
OK, good to have that verified. Free-running or locked to a 10 MHz
reference?
>>> I have no FPGA also :) All processing is in the FW, I will see how it
>>> fits the used HW architecture.
>>>
>>> Doing it all in FPGA has many benefits, but the HW will be more
>>> complicated and pricier with minimal benefits for my main goals.
>>
>> Exactly what you mean by FW now I don't get, for me that is FPGA code.
>
> I meant MCU code, to make things clearer I can use the SW term for it.
>
> Thank you for the answers and explanations, they are highly appreciated!
Nice! I really hope you can make sense of them and apply them. I hope I
have contributed some insight about what to do, and when, to make good
measurements.
Cheers,
Magnus
Oleg Skydan
Thu, May 17, 2018 10:25 PM
Hi, Magnus!
--------------------------------------------------
From: "Magnus Danielson" <magnus@rubidium.dyndns.org>
>> 2. Study how PDEV calculation fits on the used HW. If it is possible to
>> do in real time PDEV option can be added.
>
> You build two sums C and D, one is the phase-samples and the other is
> phase-samples scaled with their index n in the block. From this you can
> then using the formulas I provided calculate the least-square phase and
> frequency, and using the least square frequency measures you can do
> PDEV. The up-front processing is thus cheap, and there is meathods to
> combine measurement blocks into longer measurement blocks, thus
> decimation, using relatively simple linear processing on the block sums
> C and D, with their respective lengths. The end result is that you can
> very cheaply decimate data in HW/FW and then extend the properties to
> arbitrary long observation intervals using cheap software processing and
> create unbiased least square measurements this way. Once the linear
> algebra of least square processing has vanished in a puff of logic, it
> is fairly simple processing with very little memory requirements at
> hand. For multi-tau, you can reach O(N log N) type of processing rather
> than O(N^2), which is pretty cool.
I had some free time today to study the document you suggested and do some
experiments in MATLAB - it was very useful reading and experimenting,
thanks!
It looks like the proposed method of decimation can be efficiently realized
on the current HW. As a side effect, computing a long average in several
blocks should also reduce the floating-point errors, which can become
significant with careless coding.
All modes can also be unified and reuse the same acquisition code, nice... :)
> I hope to have an updated version of that article available soon.
Please share the link if it will be publicly available.
>>> From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.
>>
>> Yes. It is approx. 400MHz.
>
> OK, good to have that verified. Free-running or locked to a 10 MHz
> reference?
Locked to OCXO (10MHz).
All the best!
Oleg
Magnus Danielson
Thu, May 17, 2018 10:54 PM
Hi Oleg,
On 05/18/2018 12:25 AM, Oleg Skydan wrote:
> Hi, Magnus!
>
> --------------------------------------------------
> From: "Magnus Danielson" <magnus@rubidium.dyndns.org>
>>> 2. Study how PDEV calculation fits on the used HW. If it is possible to
>>> do in real time PDEV option can be added.
>>
>> You build two sums C and D, one is the phase-samples and the other is
>> phase-samples scaled with their index n in the block. From this you can
>> then using the formulas I provided calculate the least-square phase and
>> frequency, and using the least square frequency measures you can do
>> PDEV. The up-front processing is thus cheap, and there is meathods to
>> combine measurement blocks into longer measurement blocks, thus
>> decimation, using relatively simple linear processing on the block sums
>> C and D, with their respective lengths. The end result is that you can
>> very cheaply decimate data in HW/FW and then extend the properties to
>> arbitrary long observation intervals using cheap software processing and
>> create unbiased least square measurements this way. Once the linear
>> algebra of least square processing has vanished in a puff of logic, it
>> is fairly simple processing with very little memory requirements at
>> hand. For multi-tau, you can reach O(N log N) type of processing rather
>> than O(N^2), which is pretty cool.
>
> I had some free time today to study the document you suggested and do
> some experiments in matlab - it was very useful reading and experiments,
> thanks!
Thanks for the kind words!
> It looks like the proposed method of decimation can be
> efficiently realized on the current HW.
The algorithm was crafted with the aim of achieving just that. It's
really a powerful method.
> Also as a side effect calculating large averaging in several blocks should reduce floating
> point associated errors which can reach significant values with careless coding.
Indeed. The framework provided should allow numerical precision to be
crafted without too much difficulty, which is another goal.
> Also all modes can be unified and can reuse the same acquisition code,
> nice... :)
As intended. :)
The C sums are what you use for MDEV-type processing.
>> I hope to have an updated version of that article available soon.
>
> Please share the link if it will be publicly available.
Will do.
>>>> From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.
>>>
>>> Yes. It is approx. 400MHz.
>>
>> OK, good to have that verified. Free-running or locked to a 10 MHz
>> reference?
>
> Locked to OCXO (10MHz).
OK. I saw some odd frequencies, and I agree with Bob that, if you can,
using two of those with a non-trivial relationship can give really good
performance.
Cheers,
Magnus
Oleg Skydan
Fri, May 18, 2018 8:51 PM
Hi!
--------------------------------------------------
From: "Magnus Danielson" <magnus@rubidium.dyndns.org>
>>>>> From the 2.5 ns single shot resolution, I deduce a 400 MHz count
>>>>> clock.
>>>>
>>>> Yes. It is approx. 400MHz.
>>>
>>> OK, good to have that verified. Free-running or locked to a 10 MHz
>>> reference?
>>
>> Locked to OCXO (10MHz).
>
> OK. I saw some odd frequencies, and I agree with Bob that if you can,
> using two of those with non-trivial relationship can be used to get
> really good performance.
I can use two or more, but unfortunately not simultaneously. So I will
switch frequencies if a problem is detected. Switching will interact with
the GPS data processing, but that can probably be fixed in software (I have
not had time yet to investigate the possible solutions and find the best one).
BTW, the single-shot resolution can be doubled (to 1.25 ns) with almost no
additional HW (just a delay line of a bit more than 1.25 ns and some
resistors). I am not sure it is worth doing (it will also halve the
timestamping speed and double the timestamp memory requirements, so in
averaging modes it gives only a sqrt(2) improvement).
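
For reference, the sqrt(2) figure follows from simple quantization-noise
bookkeeping, assuming roughly independent quantization errors of step q
averaged over M timestamps:

    \sigma \approx \frac{q}{\sqrt{12 M}}
    \quad\longrightarrow\quad
    \frac{q/2}{\sqrt{12 (M/2)}} = \frac{1}{\sqrt{2}} \cdot \frac{q}{\sqrt{12 M}}

so halving the step while halving the number of timestamps nets only a
factor of sqrt(2).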
All the best!
Oleg
Oleg Skydan
Sun, May 27, 2018 3:52 PM
Hi!
From: "Magnus Danielson" <magnus@rubidium.dyndns.org>
>>> You build two sums C and D, one is the phase-samples and the other is
>>> phase-samples scaled with their index n in the block. From this you can
>>> then using the formulas I provided calculate the least-square phase and
>>> frequency, and using the least square frequency measures you can do
>>> PDEV. The up-front processing is thus cheap, and there is meathods to
>>> combine measurement blocks into longer measurement blocks, thus
>>> decimation, using relatively simple linear processing on the block sums
>>> C and D, with their respective lengths. The end result is that you can
>>> very cheaply decimate data in HW/FW and then extend the properties to
>>> arbitrary long observation intervals using cheap software processing and
>>> create unbiased least square measurements this way. Once the linear
>>> algebra of least square processing has vanished in a puff of logic, it
>>> is fairly simple processing with very little memory requirements at
>>> hand. For multi-tau, you can reach O(N log N) type of processing rather
>>> than O(N^2), which is pretty cool.
>>
>> I had some free time today to study the document you suggested and do
>> some experiments in matlab - it was very useful reading and experiments,
>> thanks!
>
> Thanks for the kind words!
>
>> It looks like the proposed method of decimation can be
>> efficiently realized on the current HW.
I had some free time yesterday and today, so I decided to test the new
algorithms on the real hardware (the HW is still the old "ugly construction"
one, but I hope to have some time to build proper HW - I have already got
almost all the components I need).
I had to modify the original decimation scheme you propose in the paper so
that it better fits my HW; the calculation precision and speed should also
be higher now. A nice side effect is that I do not need to care about phase
unwrapping anymore. I can prepare a short description of the modifications
and post it here, if that is of interest.
It works like a charm!
The new algorithm (based on C and D sums calculation and decimation) uses
much less memory - less than 256 KB for any gating time/sampling speed. The
old one (direct LR calculation) was very memory hungry - it used
4 x Sampling_Rate bytes/s, i.e. 20 MB per second of gate time at 5 MSPS. Now
I can fit all data into the internal memory and have a single-chip digital
part of the frequency counter, well, almost single chip ;) The timestamping
speed has increased and is now limited by the bus/bus matrix switch/DMA unit
at a bit more than 24 MSPS with continuous real-time data processing. It
looks like that is the limit for this chip (I expected somewhat higher
numbers). The calculation speed is also much higher now (approx. 23 ns per
timestamp, so up to 43 MSPS can be processed in real time). I plan to stay
at a 20 MSPS rate, or 10 MSPS with double time resolution (1.25 ns). That
will leave plenty of CPU time for the UI/communication/GPS/statistics stuff.
I will probably throw out the power-hungry and expensive SDRAM chip or use
a much smaller one :).
I have some plans to experiment with doubling the single-shot resolution
down to 1.25 ns. I do not see much benefit from it, but it can be done with
just a piece of coax and a couple of resistors, so it is interesting to try :).
All the best!
Oleg UR3IQO
Magnus Danielson
Sun, May 27, 2018 4:58 PM
Hi Oleg,
On 05/27/2018 05:52 PM, Oleg Skydan wrote:
> Hi!
>
>>> It looks like the proposed method of decimation can be
>>> efficiently realized on the current HW.
>
> I had some free time yesterday and today, so I decided to test the new
> algorithms on the real hardware (the HW is still an old "ugly
> construction" one, but I hope I will have some time to make normal HW -
> I have already got almost all components I need).
>
> I had to modify the original decimation scheme you propose in the paper,
> so it better fits my HW, also the calculation precision and speed should
> be higher now.
The point of the decimation scheme I presented was to provide a toolbox,
and as long as you respect the rules within that toolbox you can adapt it
just as you like. As long as the sums C and D come out correct, your path
to them can be whatever you like.
> The nice side effect - I do not need to care about phase
> unwrapping anymore.
You should always care about how that works out, and if you play your
cards right, it works out very smoothly.
> I can prepare a short description of the
> modifications and post it here, if it is interesting.
Yes please do, then I can double check it.
> It works like a charm!
Good. :)
> The new algorithm (base on C and D sums calculation and decimation) uses
> much less memory (less than 256KB for any gaiting time/sampling speed,
> the old one (direct LR calculation) was very memory hungry - it used
> 4xSampling_Rate bytes/s - 20MB per second of the gate time for 5MSPS).
This is one of the benefits of the approach. Assuming the same tau0, it is
all contained in the C, D and N triplet. The memory needed for these values
can be trivially analyzed and is very small, so it is a really effective
decimation technique while maintaining the least-square properties.
> Now I can fit all data into the internal memory and have a single chip
> digital part of the frequency counter, well, almost single chip ;) The
> timestamping speed has increased and is limited now by the bus/bus
> matrix switch/DMA unit at a bit more then 24MSPS with continuous real
> time data processing. It looks like it is the limit for the used chip (I
> expected a bit higher numbers).
Yeah, now you can shift your hardware focus to interpolation techniques,
on top of the processing gain of the least-square estimation, which already
integrates the noise way down.
> The calculation speed is also much higher now (approx 23ns per one
> timestamp, so up to 43MSPS can be processed in realtime).
Just to indicate that my claim for "High speed" is not completely wrong.
For each time-stamp, the pseudo-code becomes:
C = C + x_0
D = D + n*x_0
n = n + 1
Whenever n reaches N, C and D are output, and the values C, D and n are
reset to 0.
However, this may be varied in several fun ways, which is left as an
exercise for the implementer. Much of the other complexity is gone, so this
is the fun part of the problem.
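
Rendered as C for concreteness (a sketch only; emit_block() and the block
length are placeholders, not part of any actual implementation):

/* Per-timestamp accumulation, following the pseudo-code above.  x is the
 * phase sample for the current timestamp; emit_block() is a placeholder
 * for handing (C, D, N) to the next processing stage.                  */
void emit_block(double C, double D, unsigned long N);

#define BLOCK_N 100000UL   /* block length N, an arbitrary example value */

static double C_sum = 0.0;
static double D_sum = 0.0;
static unsigned long n_idx = 0;

void accumulate_timestamp(double x)
{
    C_sum += x;
    D_sum += (double)n_idx * x;
    n_idx++;

    if (n_idx == BLOCK_N) {
        emit_block(C_sum, D_sum, BLOCK_N);
        C_sum = 0.0;
        D_sum = 0.0;
        n_idx = 0;
    }
}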
> I plan to stay at 20MSPS rate or 10MSPS with the
> double time resolution (1.25ns). It will leave a plenty of CPU time for
> the UI/communication/GPS/statistics stuff.
Sounds like a good plan.
> I will probably throw out the power hungry and expensive SDRAM chip or
> use much smaller one :).
Yeah, it is only if you build multi-tau PDEV plots that you would need
much memory; other than that it is just buffer memory for the data before
it goes to off-board processing, at which point you need to convey the C,
D, N and tau0 values.
> I have some plans to experiment with doubling the one shoot resolution
> down to 1.25ns. I see no much benefits from it, but it can be made with
> just a piece of coax and a couple of resistors, so it is interesting to
> try :).
Please report on that progress! Sounds fun!
Cheers,
Magnus
Glenn Little WB4UIV
Sun, May 27, 2018 5:02 PM
The MSDS is here:
https://simplegreen.com/data-sheets/
They claim that it is non-reactive and chemically stable.
It is for water-tolerant surfaces and should be rinsed.
Probably due to the citric acid.
Glenn
--
-----------------------------------------------------------------------
Glenn Little ARRL Technical Specialist QCWA LM 28417
Amateur Callsign: WB4UIV wb4uiv@arrl.net AMSAT LM 2178
QTH: Goose Creek, SC USA (EM92xx) USSVI LM NRA LM SBE ARRL TAPR
"It is not the class of license that the Amateur holds but the class
of the Amateur that holds the license"
Glenn Little WB4UIV
Sun, May 27, 2018 5:03 PM
It appears that I replied to the wrong message, please ignore.
Glenn
--
-----------------------------------------------------------------------
Glenn Little ARRL Technical Specialist QCWA LM 28417
Amateur Callsign: WB4UIV wb4uiv@arrl.net AMSAT LM 2178
QTH: Goose Creek, SC USA (EM92xx) USSVI LM NRA LM SBE ARRL TAPR
"It is not the class of license that the Amateur holds but the class
of the Amateur that holds the license"
Oleg Skydan
Wed, Jun 6, 2018 12:53 PM
Hi, Magnus!
Sorry for the late answer; I injured my left eye last Monday, so I had
very limited ability to use a computer.
From: "Magnus Danielson" <magnus@rubidium.dyndns.org>
> As long as the sums C and D becomes correct, your
> path to it can be whatever.
Yes. It produces the same sums.
> Yes please do, then I can double check it.
I have written a note and attached it. The described modifications to the
original method were successfully tested on my experimental HW.
> Yeah, now you can move your harware focus on considering interpolation
> techniques beyond the processing power of least-square estimation, which
> integrate noise way down.
If you are talking about adding traditional HW interpolation of the trigger
events, I have no plans to do it. It is not possible while keeping the
2.5 ns base counter resolution (there is no way to output the 400 MHz clock
signal from the chip), and I do not want to add extra complexity to the HW
of this project.
But the HW I use can sample up to 10 timestamps simultaneously, so I could
theoretically push the single-shot resolution down to 250 ps using several
delay lines. I do not think going down to 250 ps makes much sense (and I
have other plans for that additional HW), but a 2x or 4x single-shot
resolution improvement (down to 1.25 ns or 625 ps) is relatively simple to
implement in HW and should be a good thing to try.
>> I will probably throw out the power hungry and expensive SDRAM chip or
>> use much smaller one :).
>
> Yeah, it would only be if you build multi-tau PDEV plots that you would
> need much memory, other than that it is just buffer memory to buffer
> before it goes to off-board processing, at which time you would need to
> convey the C, D, N and tau0 values.
Yes, I want to produce multi-tau PDEV plots :).
They can be computed with a small memory footprint, but they will be
non-overlapping PDEVs, so the confidence level at large taus will be poor
(for practical measurement durations). I have working code that implements
such an algorithm; it uses only 272 bytes of memory for each decade
(1-2-5 values).
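
For illustration, a non-overlapping per-tau accumulator along these lines
might look like the sketch below. It is a guess at such a structure rather
than the actual 272-byte-per-decade code, it glosses over block-boundary
conventions, and it uses PDEV(tau) = sqrt(0.5 * mean((y_{k+1} - y_k)^2))
over adjacent least-squares frequency estimates y_k:

#include <math.h>

/* One non-overlapping PDEV accumulator for tau = N * tau0 (N >= 2).
 * Initialize to all zeros except N.  Sketch only.                   */
typedef struct {
    unsigned long N;     /* samples per block for this tau       */
    unsigned long n;     /* index inside the current block       */
    double C, D;         /* running block sums                   */
    double prev_y;       /* LS frequency of the previous block   */
    int have_prev;       /* set once the first block is complete */
    double sum_sq;       /* sum of (y_{k+1} - y_k)^2             */
    unsigned long pairs; /* number of adjacent pairs accumulated */
} pdev_acc_t;

void pdev_acc_sample(pdev_acc_t *a, double x, double tau0)
{
    a->C += x;
    a->D += (double)a->n * x;
    if (++a->n < a->N)
        return;

    /* Least-squares frequency of the finished block (index 0..N-1). */
    double N = (double)a->N;
    double y = (a->D - 0.5 * (N - 1.0) * a->C) * 12.0 / (tau0 * N * (N * N - 1.0));

    if (a->have_prev) {
        double d = y - a->prev_y;
        a->sum_sq += d * d;
        a->pairs++;
    }
    a->prev_y = y;
    a->have_prev = 1;
    a->C = a->D = 0.0;
    a->n = 0;
}

double pdev_acc_value(const pdev_acc_t *a)
{
    return a->pairs ? sqrt(a->sum_sq / (2.0 * (double)a->pairs)) : 0.0;
}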
I need to think about how to do overlapping PDEV calculations with minimal
memory/processing requirements (I am aware that the decimation routines
should not use the overlapped calculations).
BTW, is there an "optimal" overlap, or should I just use as much data as I
can process?
> Please report on that progress! Sounds fun!
I will drop a note when I move on to the next step. Things are going a bit
slower now.
Thanks!
Oleg