Hej Anders!
On 10/28/18 1:14 PM, Anders Wallin wrote:
I made a revised figure with a few improvements:
This is a great illustration of what we have been discussing.
Notice that while the phase and frequency power densities neatly cross
at 1 Hz, the Allan Deviation and Modified Allan Deviation do not cross
at 1 s; in fact, the cross-over point between any two noise types is
unique to that pair, although they all end up around the same region.
The scale factors come from how the details of the integration work out.
I forget where the MDEV coefficients come from - maybe the Dawkins et al.
paper? (Worth adding to Wikipedia also?)
Let me dig into that. I did that for ADEV and its biases, and there is a
fair amount of copying from earlier sources, with some minor reformulations.
The biases, for instance, ended up being traced to a rarely mentioned NIST
publication.
Also, for flicker PM there seem to be (slightly) different versions of the
ADEV pre-factor in different references.
Yes, I've seen that too. For Wikipedia I chose the more accurate one,
whereas various forms of shorthand have been used elsewhere. They are all
consistent as I recall, but they differ in how they express the value.
Some combine the two constants into a single number, whereas if you do
the integration you get three numbers, of which one depends on tau and f_H.
It is worth mentioning that for WPM and FPM the integral does not
converge if allowed to go to infinite frequency, so the integral needs
to stop at the highest frequency f_H. The other noise types do converge,
so their integrals continue to infinity rather than stopping at f_H. The
factors are thus not even derived under the same conditions. On the other
hand, for all practical purposes I expect them to be fairly close.
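As a side note, here is a minimal numerical sketch (mine, not from the thread;
Python with numpy assumed) of the usual ADEV-from-PSD relation
sigma_y^2(tau) = 2 * integral from 0 to f_H of S_y(f) * sin^4(pi*tau*f) / (pi*tau*f)^2 df.
It illustrates the point: for WPM (S_y proportional to f^2) the result keeps
growing with f_H, while for WFM (flat S_y) it settles regardless of where the
cut-off is placed.

import numpy as np

def adev_from_psd(S_y, tau, f_H, n=200_000):
    """Allan deviation from a one-sided S_y(f), integrating
    2 * S_y(f) * sin(pi*f*tau)^4 / (pi*f*tau)^2 from 0 up to f_H."""
    f = np.linspace(f_H / n, f_H, n)              # start just above 0 to avoid 0/0
    x = np.pi * f * tau
    integrand = 2.0 * S_y(f) * np.sin(x)**4 / x**2
    return np.sqrt(integrand.sum() * (f_H / n))   # simple Riemann sum is enough here

tau = 1.0
wpm = lambda f: f**2   # white PM:  S_y(f) = h2 * f^2  (h2 = 1)
wfm = lambda f: 1.0    # white FM:  S_y(f) = h0        (h0 = 1)

for f_H in (10.0, 100.0, 1000.0):
    print(f_H, adev_from_psd(wpm, tau, f_H), adev_from_psd(wfm, tau, f_H))

# The WPM column keeps growing with f_H (the integral diverges without a cut-off),
# while the WFM column settles near sqrt(h0 / (2 * tau)) ~ 0.707.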
Cheers,
Magnus
Moin,
I'm bunching a few mails together, so as not to clutter the mailing list too much.
On Sat, 27 Oct 2018 23:25:30 +0200
Magnus Danielson magnus@rubidium.dyndns.org wrote:
The integration is a very important aspect, as a number of assumptions
become embedded in it, such as the f_H frequency, which is the Nyquist
frequency for counters, so the sampling interval is also a relevant
parameter for the expected level.
An important thing to note here is that Gaussian white noise is,
as it is defined, non-continuous (by any continuity measure).
I.e. if you take two samples, no matter how close they are time-wise,
their difference in value can be arbitrarily large. If you are integrating
over (time-continuous) Gaussian white noise, you have to argue
carefully why this integral is defined (meaning why calculating it
leads to a single, well-defined value). In our case, it's usually
enough to assume that there is a finite cut-off frequency beyond which
the signal falls off with at least 1/f^2 (or >=40 dB/dec) to
ensure 1) continuity and 2) convergence of the integral.
For more details, see a textbook on Ito-calculus, e.g. [1]
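A quick numerical illustration of this (my sketch, not from the mail; Python
with numpy/scipy assumed): white noise stays equally "rough" no matter how
finely you sample it, while noise rolled off by a second-order low-pass, which
gives the 1/f^2 (40 dB/dec) amplitude fall-off mentioned above, smooths out as
the sampling grid gets finer.

import numpy as np
from scipy.signal import butter, lfilter

rng = np.random.default_rng(42)

for n in (1_000, 10_000, 100_000):        # sample the same interval ever more finely
    w = rng.standard_normal(n)            # discrete white noise, unit variance
    b, a = butter(2, 50.0 / (n / 2.0))    # 2nd-order low-pass, fixed cut-off of 50 cycles
    x = lfilter(b, a, w)
    print(n,
          np.std(np.diff(w)) / np.std(w),   # stays near sqrt(2): neighbours never get closer
          np.std(np.diff(x)) / np.std(x))   # shrinks with finer sampling: band-limited noise is smooth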
On Sat, 27 Oct 2018 23:43:33 +0200
Magnus Danielson magnus@rubidium.dyndns.org wrote:
A simple trick to transform a uniform distribution into a normal-distribution-like
shape is to take 12 samples and add them together. A refinement of the trick
is to take them pair-wise, subtract within each pair and then add the 6 differences,
to avoid the DC bias of typical uniform-distribution generation (as typical
pseudo-noise generators do not have the all-zeros state in them). The result
of this subtract-and-add trick is a normal-distribution-like thing with a
standard deviation of 1. More or fewer sample pairs can be used if
the result is scaled appropriately.
The Box-Muller algorithm is another way to convert a uniform distribution
into a normal distribution.
Please, please, please, do not use "just take 12 samples and add them together"
as an approach for generating normally distributed values! Even if it
gets you something that looks like a normal distribution, it's quite far
from it. It is also a very slow method and uses up a lot of randomness.
Box-Muller is a usable alternative, though I would recommend the
Ziggurat Method[2], which is very fast and gives a very good
approximation. When I replaced the "take 30 samples and add them" of
François Vernotte's Sigmatheta package[3] with the Ziggurat Method,
combined with xorshift1024*[4] for random number generation, I got
a total speed-up of more than a factor of 2 (including the FFT and
everything)[5]. (Yes, I know that xorshift1024* has some problems
with the quality of the random numbers generated, but they shouldn't be
relevant for the application at hand.)
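For what it's worth, a small sketch (mine, not from the mails; Python/numpy
assumed) contrasting the quoted subtract-and-add trick with a plain Box-Muller
generator. Both come out with mean 0 and standard deviation 1, but the
12-sample construction can never produce a value beyond +/-6, which is one
concrete way it falls short of a true Gaussian.

import numpy as np

rng = np.random.default_rng(1)

def approx_normal(size):
    """Subtract-and-add trick: sum of 6 pairwise differences of uniforms.
    Each difference has mean 0 and variance 2/12, so the sum has variance 1."""
    u = rng.random((size, 6, 2))
    return (u[:, :, 0] - u[:, :, 1]).sum(axis=1)

def box_muller(size):
    """Classic Box-Muller: two uniforms in, one standard normal deviate out (here)."""
    u1 = 1.0 - rng.random(size)             # (0, 1], avoids log(0)
    u2 = rng.random(size)
    return np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)

x = approx_normal(1_000_000)
y = box_muller(1_000_000)
print(x.mean(), x.std(), np.abs(x).max())   # ~0, ~1, never above 6
print(y.mean(), y.std(), np.abs(y).max())   # ~0, ~1, typically around 5 for 10^6 samples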
Attila Kinali
[1] "Stochastic Differential Equations", by Bernt Øksendal, 2013 (6th ed)
[2] "The Ziggurat Method for Generating Random Variables",
by Marsaglia and Tsang, 2000
http://dx.doi.org/10.18637/jss.v005.i08
[3] https://theta.obs-besancon.fr/spip.php?article103&lang=fr
[4] http://xoshiro.di.unimi.it/
or more specifically: http://xoroshiro.di.unimi.it/xorshift1024star.c
It is upon moral qualities that a society is ultimately founded. All
the prosperity and technological sophistication in the world is of no
use without that foundation.
-- Miss Matheson, The Diamond Age, Neal Stephenson
Hi Attila,
On 10/29/18 2:59 PM, Attila Kinali wrote:
Moin,
I'm bunching a few mails together, so as not to clutter the mailing list too much.
On Sat, 27 Oct 2018 23:25:30 +0200
Magnus Danielson magnus@rubidium.dyndns.org wrote:
The integration is a very important aspect, as a number of assumptions
become embedded in it, such as the f_H frequency, which is the Nyquist
frequency for counters, so the sampling interval is also a relevant
parameter for the expected level.
An important thing to note here is that Gaussian white noise is,
as it is defined, non-continuous (by any continuity measure).
I.e. if you take two samples, no matter how close they are time-wise,
their difference in value can be arbitrarily large. If you are integrating
over (time-continuous) Gaussian white noise, you have to argue
carefully why this integral is defined (meaning why calculating it
leads to a single, well-defined value). In our case, it's usually
enough to assume that there is a finite cut-off frequency beyond which
the signal falls off with at least 1/f^2 (or >=40 dB/dec) to
ensure 1) continuity and 2) convergence of the integral.
There are aspects of noise which are more or less important depending on
what you do. As we leave WPM it is no longer Gaussian anyway. For ADEV
and friends the shape of the PDF isn't as important as for other things;
the slope over the frequency range is the important one. It is only when
we compute confidence intervals that the Gaussian shape becomes relevant,
for the chi-square bounds, but those bounds are usually not precise enough
for anything beyond a roughly Gaussian shape to matter. Even for noises
with non-Gaussian properties, the chi-square seems to be valid enough.
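For reference, one common form of that chi-square bound, as a tiny sketch
(mine, not from the mail; scipy assumed), where edf is the equivalent degrees
of freedom for the given estimator and noise type:

from scipy.stats import chi2

def adev_confidence(adev_est, edf, alpha=0.317):
    """Chi-square confidence interval for an Allan deviation estimate.
    edf is the equivalent degrees of freedom; alpha=0.317 gives ~1-sigma bounds."""
    var = adev_est**2
    lo = var * edf / chi2.ppf(1.0 - alpha / 2.0, edf)
    hi = var * edf / chi2.ppf(alpha / 2.0, edf)
    return lo**0.5, hi**0.5

print(adev_confidence(1.0e-12, edf=10.0))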
For other measures, like bit-error simulations, a proper Gaussian shape is
much more important, but only up to a point. For higher BER values, the
details of the outer part of the shape aren't all that important; it's only
as you push into lower BER numbers that you need to care.
For more details, see a textbook on Ito-calculus, e.g. [1]
On Sat, 27 Oct 2018 23:43:33 +0200
Magnus Danielson magnus@rubidium.dyndns.org wrote:
A simple trick to transform a uniform distribution into a normal-distribution-like
shape is to take 12 samples and add them together. A refinement of the trick
is to take them pair-wise, subtract within each pair and then add the 6 differences,
to avoid the DC bias of typical uniform-distribution generation (as typical
pseudo-noise generators do not have the all-zeros state in them). The result
of this subtract-and-add trick is a normal-distribution-like thing with a
standard deviation of 1. More or fewer sample pairs can be used if
the result is scaled appropriately.
The Box-Muller algorithm is another way to convert a uniform distribution
into a normal distribution.
Please, please, please, do not use "just take 12 samples and add them together"
as an approach for generating normally distributed values! Even if it
gets you something that looks like a normal distribution, it's quite far
from it. It is also a very slow method and uses up a lot of randomness.
Actually, for many simulations you do not need a better "shape".
There are some simulations where the shape comes in, but others where it has
little to no consequence.
Box-Muller is a usable alternative, though I would recommend the
Ziggurat Method[2], which is very fast and gives a very good
approximation. When I replaced the "take 30 samples and add them" of
François Vernotte's Sigmatheta package[3] with the Ziggurat Method,
combined with xorshift1024*[4] for random number generation, I got
a total speed-up of more than a factor of 2 (including the FFT and
everything)[5]. (Yes, I know that xorshift1024* has some problems
with the quality of the random numbers generated, but they shouldn't be
relevant for the application at hand.)
Getting suitable PRNG polynomials isn't all that hard, if the concern is
the period of the "random" sequence compared to the length of the sequence
actually used. It's a solved problem.
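As an aside, a minimal sketch (mine, not from the mail) of what a suitable
polynomial buys you: the textbook 16-bit Galois LFSR with taps 0xB400
(x^16 + x^14 + x^13 + x^11 + 1) runs through all 65535 non-zero states before
repeating, which is exactly the all-zeros-excluded behaviour mentioned earlier
in the thread.

def lfsr16_period(seed=0xACE1, taps=0xB400):
    """Count the period of a 16-bit Galois LFSR; a maximal-length
    polynomial gives 2^16 - 1 = 65535 (the all-zeros state is excluded)."""
    state = seed
    count = 0
    while True:
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps
        count += 1
        if state == seed:
            return count

print(lfsr16_period())   # 65535 for this (maximal-length) polynomial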
Nevertheless, thanks for the many references. I will read up on them
eventually.
Cheers,
Magnus
Attila Kinali
[1] "Stochastic Differential Equations", by Bernt Øksendal, 2013 (6th ed)
[2] "The Ziggurat Method for Generating Random Variables",
by Marsaglia and Tsang, 2000
http://dx.doi.org/10.18637/jss.v005.i08
[3] https://theta.obs-besancon.fr/spip.php?article103&lang=fr
[4] http://xoshiro.di.unimi.it/
or more specifically: http://xoroshiro.di.unimi.it/xorshift1024star.c