On 7/22/2016 2:22 AM, Attila Kinali wrote:
Hoi Rick,
On Thu, 21 Jul 2016 18:47:24 -0700
"Richard (Rick) Karlquist" richard@karlquist.com wrote:
Also in 1996, phase microsteppers were already legacy
technology and didn't have a good reputation for spectral
purity. Another non-panacea.
If they are legacy, what is current state of the art?
And is that how your DDS approach came to be?
Attila Kinali
Yes the DDS papers of 1995/1996 were in response to
the limitations of phase microsteppers. I have been
out of the precision frequency field since 1998,
so I can't speak for the current state of the art.
Rick
Capacitor matching (Moore's law) has improved for switch-cap designs.
It also depends on the tuning gain: 10 ppm/V would be far more demanding than 10 ppb/V.
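A quick back-of-the-envelope comparison shows why the tuning gain matters so much. The sketch below uses round illustrative numbers (a 1e-12 stability target), not figures for any particular oscillator:

```python
# Back-of-envelope: how stable must the EFC control voltage be?
# Both the stability target and the tuning gains are illustrative.

def allowed_efc_drift(target_fractional_stability, tuning_gain_per_volt):
    """Allowed EFC voltage drift (volts) for a given fractional
    frequency stability target and tuning gain (fractional freq per volt)."""
    return target_fractional_stability / tuning_gain_per_volt

target = 1e-12                # stability goal, parts in 1e12
print(allowed_efc_drift(target, 10e-6))  # 10 ppm/V gain -> ~0.1 uV allowed
print(allowed_efc_drift(target, 10e-9))  # 10 ppb/V gain -> ~100 uV allowed
```

At 10 ppm/V the DAC, buffer, and reference must hold roughly 0.1 µV; at 10 ppb/V the requirement relaxes by a factor of a thousand.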
On Thu, Jul 21, 2016 at 9:47 PM, Richard (Rick) Karlquist <
richard@karlquist.com> wrote:
On 7/21/2016 4:56 PM, Nick Sayer via time-nuts wrote:
Oh my. That’s a bit more than I was originally considering… What I had in
mind was adding a DAC front end to an OCXO so that you could tune the EFC
with serial commands rather than analog and calling that a product.
20 years ago when HP was working on a so-called "smart clock"
GPS box, they decided to do what you said, use a DAC to
tune the EFC on the E1938A oscillator. I
recommended to them NOT to try to do that, but they
didn't listen to me. At that time, it
was nearly impossible to come up with a DAC, buffer
amplifier, and voltage reference that didn't degrade
the stability of the E1938A, which isn't even as stable
as a 10811. What you need to ask yourself is: in
2016, can I finally get analog components that are
an order of magnitude or two better than what was
available in 1996? I don't know, without researching
it. Certainly, we can't depend on Moore's law coming
to the rescue. If anything, that works against analog
ICs by obsoleting older analog processes.
Also in 1996, phase microsteppers were already legacy
technology and didn't have a good reputation for spectral
purity. Another non-panacea.
Rick
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.
On Fri, 22 Jul 2016 11:33:12 +0200, you wrote:
On Thu, 21 Jul 2016 22:22:14 -0500
David davidwhess@gmail.com wrote:
Increased integration has only helped insofar as you can attempt
designs which would have been prohibitive before.
I keep trying to come up with a charge balancing design but what about
Linear Technology's solution from back in 2001?
A Standards Lab Grade 20-Bit DAC with 0.1ppm/°C Drift
http://www.linear.com/docs/4177
You can already get 24-bit DACs off the shelf from TI (DAC1282).
I do not know how stable they are in reality.
There are a lot of DACs like the TI DAC1282 which are not primarily
intended for DC applications. At least according to its
specifications, its gain error drift is 100 times greater and its
offset error drift is 150 times greater than those of the LTC2400 ADC
used for discipline in the Linear Technology application note. The
best DC DACs I could quickly find are only four times better than
that, so still more than an order of magnitude below the performance
of ADCs.
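To put drift specifications like these in perspective, a gain drift in ppm/°C maps directly into LSB-scale errors over even small temperature swings. The figures below are illustrative, not taken from any datasheet:

```python
# Express a gain-drift spec (ppm/degC) as an LSB-level error of an N-bit
# converter over a temperature excursion. All figures are illustrative.

def drift_error_lsb(drift_ppm_per_degc, delta_t_degc, bits):
    """Full-scale gain-drift error expressed in LSBs of an N-bit converter."""
    fractional_error = drift_ppm_per_degc * 1e-6 * delta_t_degc
    return fractional_error * (1 << bits)

# Even 1 ppm/degC over a modest 5 degC swing costs ~5 LSB at 20 bits:
print(drift_error_lsb(1.0, 5.0, 20))   # ~5.24 LSB
```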
I looked into high-precision DACs a year or two ago and figured out
that once you cross the 20-bit line, all kinds of weird stuff happen
that are hard or almost impossible to compensate for. The trick of
using an ADC (which are available up to 32 bits) doesn't really work
either, as offset drifts, thermal voltages, etc. are hard to
impossible to compensate completely. If you go through the volt-nuts
mailing list, you'll see how much effort it takes even to keep 1 ppm
(~20-bit) stability of a voltage reference... and then to get that
performance out of a DAC.
If you expect analog specifications in line with the claimed digital
resolution of ADCs and DACs, you will be disappointed. 20 bits is
about where they top out no matter how many bits are available; the
best you can hope for is that they are monotonic but how meaningful is
that when it is buried in noise?
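The "20 bits is about where they top out" observation can be framed as a noise floor: resolution beyond the point where one LSB drops below the RMS noise buys nothing. A rough sketch, with made-up example values:

```python
import math

def effective_bits(full_scale_v, noise_rms_v):
    """Noise-limited resolution: the bit count at which one LSB equals
    the RMS noise. A crude figure of merit, not a datasheet ENOB."""
    return math.log2(full_scale_v / noise_rms_v)

# Illustrative: 5 V full scale with 5 uV RMS noise leaves ~20 usable bits,
# no matter how many bits the converter's digital interface offers.
print(effective_bits(5.0, 5e-6))
```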
The LTC2400 is considered suitable for 6-digit designs before the
software calibration mentioned in the application note and datasheet
is applied. In this case, it is its repeatable INL, which can be
corrected for, and its low gain and offset drift that matter.
It is too bad voltage control of an oscillator cannot be made
ratiometric. Or can it? I have never heard of such a thing. That
would remove some of the demands on a low drift reference.
If anything, I think the better approach is to use a high-resolution
DAC like the DAC1282, or maybe the DAC1280 with a custom modulator,
and put it inside the control loop such that the real measurement
happens in the frequency/time domain. The results from Sherman &
Jördens [1] seem to indicate that sub-100 fs stability should be
possible, though there are a couple of open questions there.
Attila Kinali
[1] "Oscillator metrology with software defined radio",
by Jeff Sherman and Robert Jördens, 2016
http://dx.doi.org/10.1063/1.4950898
(it's available from NIST as well)
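As a rough illustration of the loop Attila describes — steering the DAC from measurements made in the frequency/time domain — here is a toy integrating controller. The EFC gain, drift rate, and loop gain are all invented for illustration:

```python
# Toy simulation of a drifting oscillator disciplined through a DAC by an
# integrating controller acting on frequency-error measurements.
# EFC gain, drift rate, and loop gain are all invented for illustration.

def simulate(steps=1000):
    efc_gain = 1e-8            # fractional frequency per DAC volt (assumed)
    drift_per_step = 1e-13     # free-running drift per loop update (assumed)
    ki = 0.5                   # integrator gain
    freq_error = 1e-10         # initial fractional frequency error
    dac_v = 0.0
    for _ in range(steps):
        freq_error += drift_per_step               # oscillator drifts
        measured = freq_error + dac_v * efc_gain   # loop output as measured
        dac_v -= ki * measured / efc_gain          # integrate error into DAC
    return freq_error + dac_v * efc_gain           # residual loop error

print(abs(simulate()))   # settles near the drift-limited floor
```

The loop removes the initial offset and tracks the drift; what it cannot remove is noise and drift contributed by the DAC and reference themselves, which is exactly Rick's objection earlier in the thread.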
Based on Rick's description of the problem, it seemed like that was
what HP tried and it did not work because the DAC had too much drift
to compensate for in the frequency/time domain:
20 years ago when HP was working on a so called "smart clock"
GPS box, they decided to do what you said, use a DAC to
tune the EFC on the E1938A oscillator. I
recommended to them NOT to try to do that, but they
didn't listen to me. At that time, it
was nearly impossible to come up with a DAC, buffer
amplifier, and voltage reference that didn't degrade
the stability of the E1938A, which isn't even as stable
as a 10811. What you need to ask yourself is: in
2016, can I finally get analog components that are
an order of magnitude or two better than what was
available in 1996? I don't know, without researching
it. Certainly, we can't depend on Moore's law coming
to the rescue. If anything, that works against analog
IC's by obsoleting older analog processes.
On 7/22/2016 10:15 AM, David wrote:
It is too bad voltage control of an oscillator cannot be made
ratiometric. Or can it? I have never heard of such a thing. That
would remove some of the demands on a low drift reference.
That's what we tried to do with the E1938A. A multiplying DAC
is used, based on a reference that is ovenized inside the
crystal oven. That certainly eliminated the tempco issue
with the reference, but then we discovered 1/f noise on the
reference. We had to redesign with a different reference.
Rick
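Rick's multiplying-DAC arrangement makes the output ratiometric to a single reference: the code only sets a fraction of Vref, so any reference drift (or 1/f noise) shows up one-for-one at the output. A minimal sketch of the ideal transfer function:

```python
# Ideal multiplying-DAC transfer function: the code sets a ratio of Vref,
# so reference drift and noise appear one-for-one at the output.

def mdac_output(code, vref, bits=20):
    """Output voltage of an ideal N-bit multiplying DAC."""
    return (code / (1 << bits)) * vref

code = 1 << 19                                       # mid-scale for 20 bits
nominal = mdac_output(code, vref=5.0)
shifted = mdac_output(code, vref=5.0 * (1 + 1e-6))   # 1 ppm Vref shift
print((shifted - nominal) / nominal)                 # ~1e-06: tracks Vref
```

This is why moving the tempco problem into the oven still left the reference's 1/f noise fully visible at the EFC.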
In message <20160722113312.f1f292c42ba086aafd6d46e1@kinali.ch>, Attila Kinali writes:
You can already get 24bit DAC's off the shelf from TI (DAC1282).
Precisely(!) as stable as the voltage reference you feed them.
These are oversampling designs which by definition cannot attenuate
Vref noise.
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
On Fri, 22 Jul 2016 12:15:25 -0500
David davidwhess@gmail.com wrote:
If you expect analog specifications in line with the claimed digital
resolution of ADCs and DACs, you will be disappointed. 20 bits is
about where they top out no matter how many bits are available; the
best you can hope for is that they are monotonic but how meaningful is
that when it is buried in noise?
Depends on your application. If the circuitry that follows the DAC has
some integrative/low-pass characteristic, then bits buried in noise
might not be that bad. E.g., when controlling a VCXO, any noise beyond
10-50 kHz will be filtered out by the crystal and its high Q.
Similarly, low-frequency noise might be eaten up by the surrounding
control loop. This is especially beneficial when dealing with a
circuit that has high 1/f noise. The drawback is that a high loop
bandwidth (somewhere around 10-100 Hz is the minimum required to
filter 1/f noise) is not that easy to achieve. It requires careful
design and makes things generally a lot more expensive.
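The filtering argument can be quantified with a first-order low-pass as a crude stand-in for the crystal's response; the corner frequency and noise frequency below are assumptions for illustration:

```python
import math

def lpf_attenuation_db(f_noise, f_corner):
    """Attenuation of a first-order low-pass at f_noise for a corner at
    f_corner -- a crude stand-in for the crystal's filtering action."""
    ratio = f_noise / f_corner
    return 20 * math.log10(math.sqrt(1 + ratio ** 2))

# DAC noise at 100 kHz against an assumed 10 kHz effective corner:
print(lpf_attenuation_db(100e3, 10e3))   # ~20 dB of suppression
```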
The LTC2400 is considered suitable for 6 digit designs before software
calibration is used which the application note and datasheet mention.
In this case, it is its repeatable INL which can be corrected for and
its low gain and offset drift which matter.
Yes, but the LTC2400, like all delta-sigma converters, has the big
problem that it only reaches its full performance at a very low
sampling rate. In the case of the LTC2400 it's a whopping 7 sps. I.e.,
that would limit a DAC built with an LTC2400 in its feedback path to
at most 3 sps, probably even lower.
On the other hand, a modern DAC like the AD5791 reaches full 20-bit
performance at 1 Msps (respectively, 1 µs settling time to 0.02% for a
10 V step, or 1 µs to 1 LSB for a 500-code step). But using the AD5791
in a design isn't easy either. The dual voltage reference that is
required to reach full spec is kind of inconvenient. And as phk
already wrote, these DACs deliver you the reference's accuracy and
noise very precisely. A nice write-up on issues in this area can be
found at [1].
Attila Kinali
[1] "The 20-Bit DAC Is the Easiest Part of a 1-ppm-Accurate Precision
Voltage Source", by Maurice Egan, 2010
http://www.analog.com/library/analogdialogue/archives/44-04/ad5791.html
--
Malek's Law:
Any simple idea will be worded in the most complicated way.
The AD5791 evaluation board has an unpopulated area for what appears to be an LTZ1000 reference circuit.
Bruce
On Sat, 23 Jul 2016 20:36:28 +0200, you wrote:
On the other hand, a modern DACs like the AD5791 reaches full 20bit at 1Msps
(resp 1us settling time to 0.02% @10V step, or 1us to 1LSB @500 code step).
But using the AD5791 in a design isn't easy either. The dual voltage reference
that is required to reach full spec is kind of inconvenient. And as phk already
wrote, these DACs deliver you the reference accuracy and noise very precisely.
A nice write-up on issues in this area can be found at[1]
Attila Kinali
[1] "The 20-Bit DAC Is the Easiest Part of a 1-ppm-Accurate Precision
Voltage Source", by Maurice Egan, 2010
http://www.analog.com/library/analogdialogue/archives/44-04/ad5791.html
The AD5791 is pretty impressive although it comes with an equally
impressive price. That right there looks like 10 years of improvement
given the age of the LTC2400.
I see what you mean about the positive and negative voltage reference
requirement. Analog Devices might as well have written "magic
positive and negative reference source here" in their typical
operating circuit schematic.
There has to be a better way to do this. Maybe we could build a
wooden badger ...
I doubt the AD5791 does much better than 16 bits operating at 1 Msps, when
you include glitch energy, noise, and distortion.
The AD5791 specifications under various conditions are all roughly
consistent: 20 bits at DC, 16 bits at 10 ksps based on SFDR, and 12
bits at 1 Msps for large code changes.
Its intended application is DC where its 1 Msps update rate applies
for code steps of 500 or smaller and settling time will be within 1
LSB.
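Those bits-versus-speed figures follow from the usual decibels-to-bits conversion, ENOB = (SINAD - 1.76 dB) / 6.02 dB. A one-liner to apply it (the 98 dB input is an illustrative value, not an AD5791 spec):

```python
def enob_from_sinad(sinad_db):
    """Standard effective-number-of-bits conversion: (SINAD - 1.76)/6.02."""
    return (sinad_db - 1.76) / 6.02

# An illustrative ~98 dB SINAD corresponds to roughly 16 effective bits:
print(round(enob_from_sinad(98.0), 1))
```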