time-nuts@lists.febo.com

Discussion of precise time and frequency measurement


σ vs s in ADEV

Magnus Danielson
Mon, Jan 9, 2017 9:49 PM

Scott,

On 01/09/2017 07:41 PM, Scott Stobbe wrote:

I could be wrong here, but it is my understanding that Allan's pioneering
work was in response to finding a statistic which is convergent for 1/f
noise. Ordinary standard deviation is not convergent for 1/f processes. So I
don't know that trying to compare the two is wise. Disclaimer: I could be
totally wrong; if someone has a better grasp on how the Allan deviation came
to be, please correct me.

There was precursor work to Allan's February 1966 article, but essentially
what he did was amalgamate several properties into one measure to rule
them all (almost). It is indeed the non-convergent properties that
motivate a stronger method. Standard statistics is relevant for many of
the basic building blocks, but things work differently with the
non-convergent noise. Another aspect which was important then was the
fact that it was a counter-based measure. Some of the assumptions are due
to the fact that they used counters. I asked David some questions about
why the integral looks the way it does, and, well, it reflects the
hardware at the time.
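To make the non-convergence concrete, here is a small numerical sketch (my illustration, not part of the original exchange): for random-walk FM noise, the ordinary standard deviation of the frequency record keeps growing with record length, while the two-sample (Allan) statistic stays put.

```python
import numpy as np

rng = np.random.default_rng(42)

# Random-walk FM: fractional-frequency samples y[k] are a cumulative
# sum of white noise, so their variance grows without bound.
y = np.cumsum(rng.normal(0.0, 1e-12, 100_000))

def adev1(y):
    """Allan deviation at tau = one sample interval, from frequency data."""
    d = np.diff(y)
    return np.sqrt(0.5 * np.mean(d * d))

# The standard deviation keeps climbing as more data is included...
s_short = np.std(y[:10_000])
s_long = np.std(y)

# ...while the Allan deviation is essentially the same on both records.
a_short = adev1(y[:10_000])
a_long = adev1(y)

print(s_long / s_short)   # noticeably greater than 1
print(a_long / a_short)   # close to 1
```

The noise level (1e-12) and record lengths are arbitrary; any random-walk record shows the same behavior.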

What drives Allan vs. standard deviation is that extra difference taken
before squaring. The bias functions that Allan derives for the M-sample
case really describe the behavior of the s-deviation. See the Allan
variance Wikipedia article, as there are good references there for the
bias function. That bias function really illustrates the lack of
convergence of the M-sample standard deviation. The Allan deviation is
really a power-average over the 2-sample standard deviation.
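That last sentence can be checked directly in a few lines (my sketch; the noise level is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1e-11, 1000)   # white-FM fractional-frequency data

# Allan variance: the first difference (the extra difference taken
# before squaring), squared, halved, and averaged.
avar = 0.5 * np.mean(np.diff(y) ** 2)

# Equivalently: the 2-sample variance (n-1 divisor, i.e. (y2 - y1)^2 / 2
# for each adjacent pair), power-averaged over all adjacent pairs.
pair_vars = [np.var(y[k:k + 2], ddof=1) for k in range(len(y) - 1)]
avar2 = np.mean(pair_vars)

print(np.isclose(avar, avar2))  # True: the two formulations agree
```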

Cheers,
Magnus

On Wed, Jan 4, 2017 at 3:12 PM, Attila Kinali attila@kinali.ch wrote:

Hi,

A small detail caught my eye when reading a paper that informally
introduced ADEV. In statistics, when calculating a variance over
a sample of a population, the square-sum is divided by (n-1) (denoted by s in
statistics) instead of (n) (denoted by σ), in order to account for a small
bias the "standard" variance introduces
(c.f. https://en.wikipedia.org/wiki/Unbiased_estimation_of_standard_deviation )
In almost all literature I have seen, ADEV is defined using an average,
i.e. dividing by (n), and very few use (n-1).

My question is two-fold: Why is (n) being used even though it's known
to be a biased estimator? And why do people not use s when using (n-1)?
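The bias in question is easy to demonstrate numerically; a minimal sketch, with an arbitrary true variance of 4:

```python
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0   # samples drawn from N(0, sigma^2) with sigma^2 = 4

n = 5
trials = 200_000
x = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))

# sigma-style estimator: divide the square-sum by n  -> biased low
var_n = np.mean(np.var(x, axis=1, ddof=0))
# s-style estimator: divide by (n - 1)               -> unbiased
var_nm1 = np.mean(np.var(x, axis=1, ddof=1))

print(var_n)    # about true_var * (n - 1) / n = 3.2
print(var_nm1)  # about 4.0
```

Averaged over many trials, the (n)-divisor estimator undershoots by exactly the factor (n-1)/n, which is the bias the Wikipedia article discusses.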

                     Attila Kinali

--
It is upon moral qualities that a society is ultimately founded. All
the prosperity and technological sophistication in the world is of no
use without that foundation.
-- Miss Matheson, The Diamond Age, Neal Stephenson


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.



Bob kb8tq
Mon, Jan 9, 2017 10:00 PM

Hi

On Jan 9, 2017, at 4:49 PM, Magnus Danielson magnus@rubidium.dyndns.org wrote:

Scott,

On 01/09/2017 07:41 PM, Scott Stobbe wrote:

I could be wrong here, but it is my understanding that Allan's pioneering
work was in response to finding a statistic which is convergent to 1/f
noise. Ordinary standard deviation is not convergent to 1/f processes. So I
don't know that trying to compare the two is wise. Disclaimer: I could be
totally wrong, if someone has better grasp on how the allan deviation came
to be, please correct me.

There where precursor work to Allans Feb 1966 article, but essentially that where he amalgamed several properties into one to rule them all (almost). It is indeed the non-convergent properties which motivates a stronger method.

A number of outfits were measuring and spec’ing short-term stability in the 1950’s and early 1960’s. Some were doing measures that are pretty close to ADEV. Others were doing straight standard deviation of frequency measurements. Since both got tossed up as “short term stability”, confusion was the main result. NIST came in (as it rightly should) and gave us a measurement that does converge. They also spent the next two decades
thumping on a bunch of hard heads to get everybody to use that measurement rather than something with more issues. Once that effort was underway, we got a whole raft of alternatives that each have benefits in certain areas.
ADEV is far from the only measure that could properly be used today to characterize short-term stability.

Bob

Magnus Danielson
Mon, Jan 9, 2017 10:07 PM

Hi Bob,

On 01/09/2017 11:00 PM, Bob kb8tq wrote:

Hi

On Jan 9, 2017, at 4:49 PM, Magnus Danielson magnus@rubidium.dyndns.org wrote:

Scott,

On 01/09/2017 07:41 PM, Scott Stobbe wrote:

I could be wrong here, but it is my understanding that Allan's pioneering
work was in response to finding a statistic which is convergent to 1/f
noise. Ordinary standard deviation is not convergent to 1/f processes. So I
don't know that trying to compare the two is wise. Disclaimer: I could be
totally wrong, if someone has better grasp on how the allan deviation came
to be, please correct me.

There where precursor work to Allans Feb 1966 article, but essentially that where he amalgamed several properties into one to rule them all (almost). It is indeed the non-convergent properties which motivates a stronger method.

A number of outfits were measuring and spec’ing short term stability in the 1950’s and early 1960’s. Some were doing measures that are pretty close to ADEV. Others were doing straight standard deviation of frequency measurements. Since both got tossed up as “short term stability” confusion was the main result. NIST came in (as it rightly should) and gave us a measurement that does converge. They also spend the next two decades
thumping on a bunch of hard heads to get everybody to use the measurement rather than something with more issues. Once that effort was underway, we got a whole raft of alternatives that each have benefits in certain areas.
ADEV is far from the only measure that could be properly be used today to characterize short term stability.

Rather, it provides a basic form of measure for which there exist a
number of estimators. The repair of the 15 years of trouble was
found in MDEV, as introduced in 1981, providing a new line of
measure. Improvements in estimator degrees of freedom have yielded
variants. PDEV provides yet another improvement over MDEV.

Cheers,
Magnus

Jeremy Nichols
Mon, Jan 9, 2017 10:09 PM

In the late 1960s, Hewlett-Packard engineers worked up a program to have
the 5360A "Computing Pig" (so-called from its weight, 55 pounds without
plug-ins) compute a "fractional frequency standard deviation." It appears
to be similar to the Allen Deviation; I've never figured out the difference
and would appreciate hearing from someone with stronger math skills who can
explain the two.

Jeremy


--

Sent from Gmail Mobile

Bob kb8tq
Mon, Jan 9, 2017 11:20 PM

Hi

On Jan 9, 2017, at 5:09 PM, Jeremy Nichols jn6wfo@gmail.com wrote:

In the late 1960s, Hewlett-Packard engineers worked up a program to have
the 5360A "Computing Pig" (so-called from its weight, 55 pounds without
plug-ins) compute a "fractional frequency standard deviation." It appears
to be similar to the Allen Deviation; I've never figured out the difference
and would appreciate hearing from someone with stronger math skills who can
explain the two.

The 5360A did ADEV. It only started being called ADEV after a few years had passed.
The 5360A program and its various quirks became the topic of a number of post-paper
questions in the early 1970’s. The main focus of most of the questions was on bandwidth
limiting ahead of the counter. That question really didn’t get a proper answer for several
more decades.

Bob

Tom Van Baak
Mon, Jan 9, 2017 11:26 PM

A number of outfits were measuring and spec’ing short term stability in the 1950’s and early 1960’s.

For "ADEV" in the 1950's -- follow the words in yellow [1]:

http://leapsecond.com/pdf/1953-Quartz-Greenwich.pdf

For "ADEV" in the 1930's -- I've seen similar treatment done by precision pendulum clock guys. Basically, anyone who works with clocks knows to remove the initial time offset and constant rate offset when determining the quality of the clock. What's left is the 2nd difference, which is the unexpected deviation in rate. I remember reading a pendulum clock paper from that era where they take the rms of these deviations and call it some kind of figure of merit. It goes by other names now, like two-sample variance, sigma(tau), Allan deviation, etc.
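The recipe above (remove the time offset and the constant rate, then take the rms of what is left) can be sketched from phase data; the numbers below are arbitrary, and the final scaling to sigma(tau) is the standard two-sample normalization:

```python
import numpy as np

rng = np.random.default_rng(7)
tau = 1.0   # one second between phase samples

# Phase (time-error) record: initial time offset + constant rate + noise.
noise = rng.normal(0.0, 1e-9, 10_000)
x = 5e-6 + 3e-8 * np.arange(10_000) + noise

# The 2nd difference annihilates both the constant (time offset) and the
# linear (constant rate) terms; only the unexpected deviations remain.
d2 = x[2:] - 2 * x[1:-1] + x[:-2]

# rms of the 2nd differences, scaled to sigma(tau): this is the
# two-sample (Allan) deviation computed from phase data.
sigma_tau = np.sqrt(np.mean(d2 ** 2) / (2 * tau ** 2))
print(sigma_tau)   # set entirely by the noise, not the offset or rate
```

Because the 2nd difference is exactly zero for any constant-plus-linear term, the pendulum-era figure of merit and the modern sigma(tau) are the same operation in different clothes.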

/tvb

[1] The original, without OCR or highlight: http://adsabs.harvard.edu/full/1953MNRAS.113...67S

Tom Van Baak
Mon, Jan 9, 2017 11:52 PM

Hi Jeremy,

the 5360A "Computing Pig"
I've never figured out the difference.

See the top of page 11 of http://www.hpl.hp.com/hpjournal/pdfs/IssuePDFs/1971-11.pdf where it says:

"Fractional frequency deviation is the term used to describe the frequency
instabilities of a source in the time domain. It has been shown [4] [5] that a
meaningful quantitative measure of fractional frequency deviation is given
by the Allan variance:"
...
4. D. W. Allan, 'Statistics of Atomic Frequency Standards,' IEEE Proceedings, Vol. 54, No. 2, 1966.
5. NBS Technical Note 394, 'Characterization of Frequency Stability.'

So that's the same as the definition given, for example, here:
http://www.wriley.com/paper2ht.htm#Allan_Variance

There is one source of confusion: ADEV (Allan deviation) is the square root of AVAR (Allan variance). We almost always calculate and plot AVAR. The other confusion is Allan (it's not Allen).
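To make the distinction concrete, here is a small Python sketch (mine, not the 5360A program): the non-overlapped Allan deviation averages the squared first differences of successive fractional-frequency readings, while the ordinary sample standard deviation measures spread about the mean of the whole record.

```python
import math

def adev(y):
    """Non-overlapped Allan deviation of fractional-frequency samples y,
    each averaged over the same tau. Illustrative sketch only.
    AVAR = < (y[i+1] - y[i])^2 > / 2, ADEV = sqrt(AVAR)."""
    d = [y[i + 1] - y[i] for i in range(len(y) - 1)]   # first differences
    avar = sum(v * v for v in d) / (2.0 * (len(y) - 1))
    return math.sqrt(avar)

def stdev(y):
    """Ordinary sample standard deviation (divide by n-1) for comparison."""
    m = sum(y) / len(y)
    return math.sqrt(sum((v - m) ** 2 for v in y) / (len(y) - 1))
```

For white frequency noise the two statistics give comparable numbers, but for flicker or random-walk FM the standard deviation keeps growing as the record gets longer while the Allan deviation converges, which is the whole point of using the two-sample statistic.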


The 5360A is still a favorite of us old-tyme collectors. Lots of information on the old thread (you're in it):

"HP 5360A History?"
https://www.febo.com/pipermail/time-nuts/2016-April/097227.html

The 5360A source code to the ADEV program is here:
http://leapsecond.com/hpj/v22n4/v22n4p10.jpg

/tvb

----- Original Message -----
From: "Jeremy Nichols" jn6wfo@gmail.com
To: "Discussion of precise time and frequency measurement" time-nuts@febo.com
Sent: Monday, January 09, 2017 2:09 PM
Subject: Re: [time-nuts] σ vs s in ADEV

In the late 1960s, Hewlett-Packard engineers worked up a program to have
the 5360A "Computing Pig" (so-called from its weight, 55 pounds without
plug-ins) compute a "fractional frequency standard deviation." It appears
to be similar to the Allen Deviation; I've never figured out the difference
and would appreciate hearing from someone with stronger math skills who can
explain the two.

Jeremy

TV
Tom Van Baak
Mon, Jan 9, 2017 11:59 PM

We almost always calculate and plot AVAR.

I mean we almost always calculate and plot ADEV.

MD
Magnus Danielson
Tue, Jan 10, 2017 6:22 PM

Hi Bob,

On 01/10/2017 12:20 AM, Bob kb8tq wrote:

Hi

On Jan 9, 2017, at 5:09 PM, Jeremy Nichols jn6wfo@gmail.com wrote:

In the late 1960s, Hewlett-Packard engineers worked up a program to have
the 5360A "Computing Pig" (so-called from its weight, 55 pounds without
plug-ins) compute a "fractional frequency standard deviation." It appears
to be similar to the Allen Deviation; I've never figured out the difference
and would appreciate hearing from someone with stronger math skills who can
explain the two.

The 5360A did ADEV. It only started being called ADEV after a few years had passed.
The 5360A program and its various quirks became the topic of a number of post-paper
questions in the early 1970’s. The main focus of most of the questions was on bandwidth
limiting ahead of the counter. That question really didn’t get a proper answer for several
more decades.

I've not found much on that topic as I've searched. Care to point to a
few papers?

I've been looking at it, and you get somewhat different formulas if you
consider the filter.

Cheers,
Magnus

BK
Bob kb8tq
Tue, Jan 10, 2017 10:59 PM

Hi

On Jan 10, 2017, at 1:22 PM, Magnus Danielson magnus@rubidium.dyndns.org wrote:

Hi Bob,

On 01/10/2017 12:20 AM, Bob kb8tq wrote:

Hi

On Jan 9, 2017, at 5:09 PM, Jeremy Nichols jn6wfo@gmail.com wrote:

In the late 1960s, Hewlett-Packard engineers worked up a program to have
the 5360A "Computing Pig" (so-called from its weight, 55 pounds without
plug-ins) compute a "fractional frequency standard deviation." It appears
to be similar to the Allen Deviation; I've never figured out the difference
and would appreciate hearing from someone with stronger math skills who can
explain the two.

The 5360A did ADEV. It only started being called ADEV after a few years had passed.
The 5360A program and its various quirks became the topic of a number of post-paper
questions in the early 1970’s. The main focus of most of the questions was on bandwidth
limiting ahead of the counter. That question really didn’t get a proper answer for several
more decades.

I've not found much on that topic as I've searched. Care to point to a few papers?

I've been looking at it, and you get somewhat different formulas if you consider the filter.

It never came up in a paper. It was a question asked from the audience every time the NIST
guys presented an ADEV paper. After a while it got very predictable in terms of who would
stand up and ask what.

Bob

Cheers,
Magnus


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.
