volt-nuts@lists.febo.com

Discussion of precise voltage measurement


Approach to calibration

Florian Teply
Wed, Jun 20, 2018 7:54 PM

Dear fellow nuts,

just a few days after my last post to this list, my boss made me
responsible for calibration of electrical measurement equipment in our
department. As funny a coincidence as that might be - I'd be pretty
surprised if he were lurking here - this brought up a few questions
where I could use some insight and comments from guys with some more
experience in this than I have (which is essentially zero).

Now, as far as I understand, calibration at first sight is merely a
comparison between what the meter actually reads and what it is supposed
to read. As long as the difference between the two is smaller than what
the manufacturer specifies as maximum error, everything is fine: put
a new sticker on the instrument and send it back to the owner.

Now, as usual, the devil is in the details: how do you establish what
the meter is supposed to read? I'd be pretty surprised if it were as
easy as taking two meters, measuring the same thing, say, a voltage,
simultaneously, and comparing the readings. Could some of you guys shed
some more light on that?

The background of my question is that I'm wondering whether it would be
feasible to do the calibration in house instead of sending equipment out
for calibration. I'm not so much looking for financial savings, as I
doubt we'd get to the point where running our own calibration would be
cheaper than contracting it out, even though we have on the order of
fifty multimeters and about as many voltage sources listed, and probably
some more sitting in cupboards without being listed. I'm rather looking
for convenience, as it is often difficult to arrange in advance what to
calibrate when, and to send everything out in time. With the capability
in house we could do calibration whenever we like, and whenever
equipment is not in use, without having to ask a commercial calibration
lab to calibrate a few dozen more or less complex devices within a
two-week maintenance break in summer. And not needing to ship equipment
out would be a plus as well, as some of our guys don't like the idea of
sending a few hundred kilo-euros worth of equipment around just to get a
fancy new sticker on it...

Best regards,
Florian

Bob Bownes
Wed, Jun 20, 2018 8:15 PM

Florian,

As one would suspect, this is far from a simple problem, and it depends on
what your needs are.

The complexity of calibrating a particular piece of equipment can range
from a simple 'hook it up to a known source and if it is within cal, stick a
sticker on it' to a many-hour/day exercise in which things like drift with
time and temperature are measured and corrected.

It all depends on your needs. Let's take the relatively simple HP 3468A
5.5-digit multimeter on my bench here. The manual, available online, has a
20-page section on performance tests and calibration. This is for a meter
that measures AC volts, DC volts, resistance, AC current, DC current and not
much more. I keep it calibrated by comparing it to my 3456 (which is
calibrated by a commercial service) because it is my main workhorse for
doing electronics construction and debugging (and the 3456 is more
accurate). The 3.5-digit multimeter I keep in the garage has never been
calibrated, but it's used for seeing if I have 13.8 VDC and continuity in
circuits in the cars and not much more. And I'm happy with both. But I
answer only to me.

So the question is, what are your needs and requirements? Your industry may
require NIST (or other de facto standard) traceable calibration for some
things. Your factory may be using 3.5-digit DMMs to see if there is mains
voltage on lines and not much more. Or you might be in a semiconductor
plant where you have a need for agreement in the last digit of several
3458A 8.5-digit meters. It's all about what the requirement is.

Bob


Bob Albert
Wed, Jun 20, 2018 8:18 PM

Part of the equation is certification.  If the equipment needs to be certified as within specification, it must be tested with equipment whose calibration has been verified by an official standards laboratory.  That way its calibration is traceable to the world standards for the parameters.  This is not trivial, as you can't be sure your standard meter is correct or even within its specification otherwise.
A voltage source used for calibrating a voltmeter must be stable, accurate, and noise-free.  The same is true for other 'live' calibrations.
Passive standards, such as resistors and other components, need similar certification.
Having said all this, there are other considerations.  Test methods are very important.  For instance, measuring a resistor must be done at a particular lead length, temperature, and voltage stress, with due diligence regarding dissimilar metals.  Frequency can be calibrated to a point against WWV radio transmissions.  Digital readings always have an uncertainty of at least one digit.  Capacitance and inductance measurements need to consider losses, namely Q and D, and need to be done in either equivalent-series or equivalent-parallel mode, depending on the application.  Nonlinear devices such as iron-core inductors have a far more complex definition of parameters, and it's not simple to set up a valid test.
So your boss has handed you an open can of worms.
Bob

Charles Steinmetz
Wed, Jun 20, 2018 8:39 PM

Florian wrote:

> The background of my question is that I'm wondering whether it would be
> feasible to do the calibration in house instead of sending equipment out
> for calibration. I'm not so much looking for financial savings, as I
> doubt we'd get to the point where running our own calibration would be
> cheaper than contracting it out, even though we have on the order of
> fifty multimeters and about as many voltage sources listed, and probably
> some more sitting in cupboards without being listed. I'm rather looking
> for convenience

One reference you should not be without is Fluke's "Calibration:
Philosophy In Practice."  It will not tell you everything you need to
know, and it is not a step-by-step cookbook.  But it will introduce you
to the concepts you need to know and give you good ideas for how to
think about the subject.

The first edition is available on the web as a PDF download.  However,
the Second Edition is much more extensive and detailed.  You really want
that one.  I do not believe it is on the web as a download.  It can be
purchased from Fluke.  Since you presumably have a budget, just order it.

Jay Bucher has written a couple of books on the subject that can be
useful (these treat industrial calibration in general, not just
electrical calibration):

"The Quality Calibration Handbook: Developing and Managing a Calibration
Program"
"The Metrology Handbook, Second Edition"

If all you need to do is calibrate multimeters and voltage sources, the
easiest thing to do is purchase an AC/DC calibrator designed for just
this purpose (Fluke makes the best of these, IMO).  You would send that
back to Fluke periodically to be calibrated, and use it as instructed to
calibrate your instruments in-house.

Best regards,

Charles

Poul-Henning Kamp
Wed, Jun 20, 2018 9:55 PM

In message <20180620215406.235b6f85@aluminium.mobile.teply.info>, Florian Teply writes:

> Now, as far as I understand, calibration at first sight is merely a
> comparison between what the meter actually reads and what it is supposed
> to read. As long as the difference between the two is smaller than what
> the manufacturer specifies as maximum error, everything is fine: put
> a new sticker on the instrument and send it back to the owner.

What the sticker really says is that you have credible statistical
reasons to think the meter will be inside spec until the date on
the sticker.

This is why you can go longer between calibrations if you have
the calibration history for the instrument.

If, for instance, your instrument over the last five yearly calibrations
has been found to read 0.10, 0.15, 0.20, 0.25 and 0.30, then there
is every statistical reason to expect it to read 0.35, 0.40, 0.45
and 0.50 at the next four yearly calibrations, barring any unforeseen
defects or mishaps, and the date for the next calibration can be chosen
accordingly.

If, on the other hand, its calibration history contains something
like ... +0.25, -0.35 ... you know it can change by 0.6 in one year
and you may have to pull in the date on the sticker accordingly.

If the instrument has no history and reads 0.35, you will have to
consult the manufacturer's drift specs, project forward to see what
the earliest date is at which the instrument could go out of spec, and
write a date conservative to that estimate on the sticker.
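
A minimal sketch (in Python, with made-up numbers, assuming purely
linear drift and a hypothetical ±0.50 spec limit) of how such a
projection could be done:

    # Project when an instrument drifts out of spec, assuming linear drift.
    # All numbers are hypothetical illustration values.
    import numpy as np

    years    = np.array([2013, 2014, 2015, 2016, 2017])  # calibration dates
    errors   = np.array([0.10, 0.15, 0.20, 0.25, 0.30])  # error found at each cal
    spec_max = 0.50                                       # assumed maximum allowed error

    drift_per_year, intercept = np.polyfit(years, errors, 1)  # linear fit
    year_out_of_spec = (spec_max - intercept) / drift_per_year

    print(f"Estimated drift: {drift_per_year:+.3f} per year")
    print(f"Projected to reach the spec limit around {year_out_of_spec:.1f}")

The date on the sticker would then be chosen conservatively ahead of
that projected crossing, not at it.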

> The background of my question is that I'm wondering whether it would be
> feasible to do the calibration in house instead of sending equipment out
> for calibration.

The biggest advantage of in-house calibration is that you can do it
much more often, and therefore don't need to do it as precisely as
the cal-lab does, because the sticker only needs a date some months
ahead.

The second biggest advantage is that you can perform the calibrations
in the target environment, rather than at some artificial environmental
conditions which don't apply in real life.

The third biggest advantage is that the calibration doesn't take
the instruments out of commission for several days due to transport
and scheduling, and they don't get damaged or lost in transit.

The biggest disadvantage is that you need to maintain suitable
cal-standards in-house.

If it is just DC and AC voltage/current/resistance in the audio
range, an HP 3458A will handsomely pay for itself.

Up to about some hundred MHz you can do something similar with
a good vector network analyzer.

In GHz territory it gets nasty.

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

Florian Teply
Thu, Jun 21, 2018 8:00 PM

On Wed, 20 Jun 2018 21:55:52 +0000,
"Poul-Henning Kamp" <phk@phk.freebsd.dk> wrote:


> In message <20180620215406.235b6f85@aluminium.mobile.teply.info>,
> Florian Teply writes:

> > Now, as far as I understand, calibration at first sight is merely a
> > comparison between what the meter actually reads and what it is
> > supposed to read. As long as the difference between the two is
> > smaller than what the manufacturer specifies as maximum error,
> > everything is fine: put a new sticker on the instrument and send it
> > back to the owner.

> What the sticker really says is that you have credible statistical
> reasons to think the meter will be inside spec until the date on
> the sticker.
>
> This is why you can go longer between calibrations if you have
> the calibration history for the instrument.
>
> If, for instance, your instrument over the last five yearly calibrations
> has been found to read 0.10, 0.15, 0.20, 0.25 and 0.30, then there
> is every statistical reason to expect it to read 0.35, 0.40, 0.45
> and 0.50 at the next four yearly calibrations, barring any unforeseen
> defects or mishaps, and the date for the next calibration can be chosen
> accordingly.
>
> If, on the other hand, its calibration history contains something
> like ... +0.25, -0.35 ... you know it can change by 0.6 in one year
> and you may have to pull in the date on the sticker accordingly.
>
> If the instrument has no history and reads 0.35, you will have to
> consult the manufacturer's drift specs, project forward to see what
> the earliest date is at which the instrument could go out of spec, and
> write a date conservative to that estimate on the sticker.

> > The background of my question is that I'm wondering whether it would
> > be feasible to do the calibration in house instead of sending
> > equipment out for calibration.

> The biggest advantage of in-house calibration is that you can do it
> much more often, and therefore don't need to do it as precisely as
> the cal-lab does, because the sticker only needs a date some months
> ahead.

And especially, I could do the calibration whenever it suits me, and
wouldn't be bound by calibration schedules. Yes, exceeding the
interval we specified by too much might still cause questions from QM,
but then that's the situation I already have, with an ISO 9001 audit
next week and equipment being overdue by up to four years.

> The second biggest advantage is that you can perform the calibrations
> in the target environment, rather than at some artificial environmental
> conditions which don't apply in real life.

Luckily, we also maintain in our labs the 23°C that is usually quoted
for a calibration environment. Humidity in the labs is somewhat
undefined, though, if I remember correctly.

> The third biggest advantage is that the calibration doesn't take
> the instruments out of commission for several days due to transport
> and scheduling, and they don't get damaged or lost in transit.

This is the one thing my direct supervisor is afraid of. Damage or loss
hasn't happened until now as far as I know, but there's always a certain
risk in sending equipment out.

> The biggest disadvantage is that you need to maintain suitable
> cal-standards in-house.

Umm, if I could convince my boss that a few fully loaded Fluke 734As are
absolutely essential to get this done, I'd expect to be in the right
place at volt-nuts ;-) But I'd consider myself lucky if we got a
single 732B. Of course, that always depends on how much time and money
it would save us. A set of standard capacitors I've seen somewhere,
and I wouldn't be surprised if we also had a set of standard
resistors sitting on a shelf.

> If it is just DC and AC voltage/current/resistance in the audio
> range, an HP 3458A will handsomely pay for itself.
>
> Up to about some hundred MHz you can do something similar with
> a good vector network analyzer.
>
> In GHz territory it gets nasty.

For now, DC is all I would consider, exactly for the reason that RF
tends to become a bit more involved. Even though we also have plenty
of RF equipment up into the 50/67 GHz range, and some even going into
three-digit GHz territory, I didn't want to spend all my time just on
calibrating stuff, as fascinating as that might be.
Coincidentally, we do happen to have an Agilent 3458A in one of our
labs. Funnily, it's with the digital guys, where I'd be pretty
surprised if they actually cared about the 4th digit. It has never been
calibrated, though, since it left the factory. What a waste...

Repurposing this one for internal calibration would probably require
having it calibrated externally for a few years to come, to establish
its own performance. It should still be good to manufacturer drift specs
if used shortly after calibration.

Best regards,
Florian

Florian Teply
Thu, Jun 21, 2018 8:32 PM

On Wed, 20 Jun 2018 16:15:38 -0400,
Bob Bownes <bownes@gmail.com> wrote:

> Florian,
>
> As one would suspect, this is far from a simple problem, and it depends
> on what your needs are.

This is exactly what I expected. It seems simple, but isn't
necessarily. There's probably a reason calibration labs are not popping
up everywhere...

> The complexity of calibrating a particular piece of equipment can
> range from a simple 'hook it up to a known source and if it is within
> cal, stick a sticker on it' to a many-hour/day exercise in which
> things like drift with time and temperature are measured and
> corrected.

As mentioned already in my other response, temperature isn't that much
of a concern for me. Lab temperature is set at 23°C and varies by no
more than half a degree over the day. Humidity is, to my knowledge,
uncontrolled in the labs, so it does vary, but I have no idea by how
much, or whether that could be a problem.

> It all depends on your needs. Let's take the relatively simple HP
> 3468A 5.5-digit multimeter on my bench here. The manual, available
> online, has a 20-page section on performance tests and calibration.
> This is for a meter that measures AC volts, DC volts, resistance, AC
> current, DC current and not much more. I keep it calibrated by
> comparing it to my 3456 (which is calibrated by a commercial
> service) because it is my main workhorse for doing electronics
> construction and debugging (and the 3456 is more accurate). The
> 3.5-digit multimeter I keep in the garage has never been calibrated,
> but it's used for seeing if I have 13.8 VDC and continuity in circuits
> in the cars and not much more. And I'm happy with both. But I answer
> only to me.
>
> So the question is, what are your needs and requirements? Your
> industry may require NIST (or other de facto standard) traceable
> calibration for some things. Your factory may be using 3.5-digit DMMs
> to see if there is mains voltage on lines and not much more. Or you
> might be in a semiconductor plant where you have a need for agreement
> in the last digit of several 3458A 8.5-digit meters. It's all about
> what the requirement is.

As a matter of fact, it IS a semiconductor plant I'm talking about. And
I don't care about equipment that's used to check whether the mains fuse
might have blown. I'm looking for a solution for the equipment that
needs to be calibrated because we do serious measurements with it. I'm
also not talking about the stuff we have in place to extract PCM data;
that will continue to be calibrated commercially.
But defining what we NEED is a bit difficult for me. I have the
impression that the buying decision for a specific instrument over all
the others is made because someone thinks that at some point it
might be useful to have this or that feature, or because the engineer
thinks it would be neat to play with. Or possibly just because
everyone else in the industry uses it.

With proper setup, even three digits can potentially give better
insight than 8.5 digits ever could when they're applied without
thought. Or both can be off by orders of magnitude if the setup isn't
up to the task.

Taking the actual accuracy specs into account, even 6.5 digits seem
excessive, indicating a precision that just isn't there.
The semiconductor parameter analyzers and SMU mainframes we use all
feature accuracy specs of approximately 50 ppm for voltage
measurements and maybe 100 ppm for current measurements. More
problematic might be the current ranges of these instruments: they
regularly have a smallest range of 10 pA full scale, with femtoamp or
even sub-femtoamp resolution. Granted, accuracy drops to the order of
1% on that range, but it still seems difficult to do calibration at
these current levels.

Thanks for all the input in any case!
Florian

Florian Teply
Fri, Jun 22, 2018 9:45 PM

On Wed, 20 Jun 2018 21:55:52 +0000,
"Poul-Henning Kamp" <phk@phk.freebsd.dk> wrote:


> In message <20180620215406.235b6f85@aluminium.mobile.teply.info>,
> Florian Teply writes:

> > Now, as far as I understand, calibration at first sight is merely a
> > comparison between what the meter actually reads and what it is
> > supposed to read. As long as the difference between the two is
> > smaller than what the manufacturer specifies as maximum error,
> > everything is fine: put a new sticker on the instrument and send it
> > back to the owner.

> What the sticker really says is that you have credible statistical
> reasons to think the meter will be inside spec until the date on
> the sticker.
>
> This is why you can go longer between calibrations if you have
> the calibration history for the instrument.
>
> If, for instance, your instrument over the last five yearly calibrations
> has been found to read 0.10, 0.15, 0.20, 0.25 and 0.30, then there
> is every statistical reason to expect it to read 0.35, 0.40, 0.45
> and 0.50 at the next four yearly calibrations, barring any unforeseen
> defects or mishaps, and the date for the next calibration can be chosen
> accordingly.
>
> If, on the other hand, its calibration history contains something
> like ... +0.25, -0.35 ... you know it can change by 0.6 in one year
> and you may have to pull in the date on the sticker accordingly.
>
> If the instrument has no history and reads 0.35, you will have to
> consult the manufacturer's drift specs, project forward to see what
> the earliest date is at which the instrument could go out of spec, and
> write a date conservative to that estimate on the sticker.

Let me see if I understand that correctly:
Assuming no adjustments have been made to the instrument in between,
with the calibration history I could work out the actual drift rate of
the instrument. Of course, the more data points I have, the more
accurate that estimate might be. Then I could use it to project into
the future to see when the instrument will likely drift out of spec.

And, additionally, given that I worked out the drift, I could even try
to post-process the data taken with that instrument and correct for
the drift we just established, if this extra precision actually has
some value to someone. After all, in the end it's just the removal of
a systematic error that I just happen to know after the analysis.

But would the evaluation of drift rate still be possible if adjustments
have taken place? I guess I couldn't count on actually knowing what has
been changed unless I did the adjustment myself, and what effect that
change has on future drift rate might be pretty difficult to predict
even for insiders.

> > The background of my question is that I'm wondering whether it would
> > be feasible to do the calibration in house instead of sending
> > equipment out for calibration.

> The biggest advantage of in-house calibration is that you can do it
> much more often, and therefore don't need to do it as precisely as
> the cal-lab does, because the sticker only needs a date some months
> ahead.

One more question on that, as it's not entirely clear to me what
exactly you mean here:
Do I take it correctly that, if I were willing to re-cal in,
say, three months instead of next year, the instruments used for
calibration would not necessarily need to be as precise as if I needed
a calibration good for one year? Or did you mean that I could afford
to come closer to the manufacturer's spec limit? Or something else
altogether?

> The second biggest advantage is that you can perform the calibrations
> in the target environment, rather than at some artificial environmental
> conditions which don't apply in real life.
>
> The third biggest advantage is that the calibration doesn't take
> the instruments out of commission for several days due to transport
> and scheduling, and they don't get damaged or lost in transit.
>
> The biggest disadvantage is that you need to maintain suitable
> cal-standards in-house.
>
> If it is just DC and AC voltage/current/resistance in the audio
> range, an HP 3458A will handsomely pay for itself.

I guess a good calibrator like a Fluke 5730A might do the trick as
well for the mentioned measurement range, if low currents don't matter
too much. And it might be easier to get nowadays, as even well-known
distributors don't quote a 3458A anymore. I might try to get a quote for
a Fluke 8508 and a Keithley 2002 as well...

Poul-Henning Kamp
Fri, Jun 22, 2018 11:11 PM

In message <20180622234534.12800dd5@aluminium.mobile.teply.info>, Florian Teply
writes:

> Let me see if I understand that correctly:
> Assuming no adjustments have been made to the instrument in between,
> with the calibration history I could work out the actual drift rate of
> the instrument. Of course, the more data points I have, the more
> accurate that estimate might be. Then I could use it to project into
> the future to see when the instrument will likely drift out of spec.

Provided it has a uniform, low-ish drift rate.

That is probably something you will only see on high-end kit.

Low-end kit will probably be dominated by all the other sources of noise.

> And, additionally, given that I worked out the drift, I could even try
> to post-process the data taken with that instrument and correct for
> the drift we just established, if this extra precision actually has
> some value to someone. After all, in the end it's just the removal of
> a systematic error that I just happen to know after the analysis.

If you do that, your uncertainty calculations just got a fair bit
more complicated, because now you also have to factor in the
uncertainty of the drift rate.

But yes, that is basically how all cal-labs without Josephson
junctions estimate their volt and their resistance.

> But would the evaluation of drift rate still be possible if adjustments
> have taken place?

No.

Until you have solid evidence to the contrary, you have to assume
that adjustments changed the drift rate.

One interesting idea in this space is to maintain per-instrument
Kalman filters on the calibration results.

The predictions + uncertainty you get out will be way better than the
formal uncertainty calculation, because the Kalman filter does not
factor in risks (i.e., things that could happen) until they actually
do happen, whereas the manufacturer's specs have an allowance for
anything they could imagine or have heard about (jumps, thermals,
air pressure, etc.).

The main trick is that if you ever see your formal uncertainty dip
below the Kalman filter, you know something is seriously wrong and
in the mean time, the filter probably tells you what the situation
is much more precisely than the formal numbers.
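
A minimal sketch of such a filter, with the state being the error and
its drift rate and each calibration providing one observation of the
error; all the noise figures are assumptions that would have to be
tuned per instrument:

    # Minimal per-instrument Kalman filter over calibration results.
    # State x = [error, drift_rate]; each calibration observes the error.
    # All noise figures below are assumptions, not manufacturer data.
    import numpy as np

    def filter_cal_history(times, errors, meas_sigma=0.02):
        x = np.array([0.0, 0.0])                 # start: reads true, no drift
        P = np.diag([0.05**2, 0.02**2])          # generous initial uncertainty
        H = np.array([[1.0, 0.0]])               # we only measure the error
        q = 0.005**2                             # process noise density (assumed)
        t_prev = times[0]
        for t, z in zip(times, errors):
            dt = t - t_prev
            F = np.array([[1.0, dt], [0.0, 1.0]])
            Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                              [dt**2 / 2, dt]])
            x, P = F @ x, F @ P @ F.T + Q         # predict to this calibration
            S = (H @ P @ H.T)[0, 0] + meas_sigma**2
            K = (P @ H.T) / S                     # Kalman gain, shape (2, 1)
            x = x + K[:, 0] * (z - (H @ x)[0])    # update with the cal result
            P = (np.eye(2) - K @ H) @ P
            t_prev = t
        return x, P                               # current estimate + covariance

    x, P = filter_cal_history([0, 1, 2, 3, 4], [0.10, 0.16, 0.19, 0.26, 0.30])
    print(f"error {x[0]:.3f} +/- {np.sqrt(P[0, 0]):.3f}, drift {x[1]:.3f}/year")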

>> The biggest advantage to inhouse calibration is that you can do it
>> much more often, and therefore don't need to do it as precisely
>> as the cal-lab, because the sticker only needs a date some months
>> ahead.

>One more question to that, as it's not entirely clear to me what
>exactly you mean here:
>Do I take it correctly that in case I would be willing to re-cal in,
>say, three months instead of next year, the instruments used for
>calibration do not necessarily need to be as precise as if I needed
>a calibration good for one year? Or did you mean that I could afford
>coming closer to the manufacturer's spec limit? Or something else
>altogether?

All of the above, but you need to do the math to show that it is ok.

If you stick to the manufacturer's instructions, they did the math,
and you won't need to.

If you invent your own schedule, you need to do the math to find out
the consequences for your uncertainty.

Linear scaling is a good first approximation for drift, but not for
other sources of noise or failure.
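
A crude illustration of what that linear scaling buys you (all figures
invented, and only the drift term is scaled; noise and failure modes
are ignored here):

    # How the drift budget scales with the re-cal interval, assuming
    # purely linear drift.  Spec limit and drift allowance are invented.
    spec_limit     = 50e-6        # total allowed error: 50 ppm
    drift_per_year = 20e-6        # assumed drift allowance over one year

    for months in (12, 3):
        drift_budget = drift_per_year * months / 12   # linear scaling
        headroom = spec_limit - drift_budget          # left for reference
                                                      # uncertainty and noise
        print(f"{months:2d} months: drift {drift_budget*1e6:.0f} ppm, "
              f"headroom {headroom*1e6:.0f} ppm")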

>I guess a good calibrator like a Fluke 5730A might do the trick as
>well for the mentioned measurement range if low currents don't matter
>too much. And it might be easier to get nowadays, as even well-known
>distributors don't quote a 3458A anymore. Might try to get a quote for
>a Fluke 8508 and a Keithley 2002 as well...

If you are in the EU, I think you need to buy the 3458A directly from
Keysight for RoHS reasons.

I don't think I'm qualified to recommend specific equipment.

I only mentioned the 3458A because it is generally seen as the "gold
standard" and it is a damn good instrument in my own personal
experience.

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

FT
Florian Teply
Sat, Jun 30, 2018 1:57 PM

On Fri, 22 Jun 2018 23:11:47 +0000, "Poul-Henning Kamp"
<phk@phk.freebsd.dk> wrote:


> In message <20180622234534.12800dd5@aluminium.mobile.teply.info>,
> Florian Teply writes:

> >Let me see if I understand that correctly:
> >Assuming no adjustments have been made to the instrument in between,
> >with calibration history I could work out the actual drift rate of
> >the instrument. Of course, the more data points I have, the more
> >accurate that estimate might be. Then I could use that to project
> >into the future to see when it will likely drift out of spec.

> Provided it has a uniform low-ish drift rate.
>
> That is probably something you will only see on the high-end kit.
>
> Low range kit will probably be dominated by all other sources of
> noise.

Sure, if the error is dancing all over the place, the simple term
"drift" is misleading as it implies a mainly linear behaviour.

> >And, additionally, given that I worked out the drift, I could even
> >try and post-process the data taken with that instrument and correct
> >for the drift we just established, if this extra precision actually
> >has some value to someone. After all, in the end it's just the
> >removal of some systematic error I just happen to know after the
> >analysis.

> If you do that, your uncertainty calculations just got a fair bit
> more complicated, because now you also have to factor in the
> uncertainty of the drift rate.
>
> But yes, that is basically how all cal-labs without Josephson
> Junctions estimate their Volt and Resistance.

Agreed, that's another can of worms one probably doesn't want to open
unless necessary. As far as I understand it by now, this approach would
be a possible solution if a) sufficient data exists to support it
(including a reasonable model of the drift) AND b) the manufacturer
specs are not sufficient for the task at hand (for the unit to be
calibrated and/or the unit taken as reference).

> >But would the evaluation of drift rate still be possible if
> >adjustments have taken place?

> No.
>
> Until you have solid evidence to the contrary, you have to assume
> that adjustments changed the drift rate.

That's my gut feeling as well. At least my guts are well
calibrated ;-)

> One interesting idea in this space is to maintain per-instrument
> Kalman filters on the calibration results.
>
> The predictions+uncertainty you get out will be way better than the
> formal uncertainty calculation, because the Kalman filter does not
> factor in risks (i.e. things that _could_ happen) until they actually
> _do_ happen, whereas the manufacturer's specs have an allowance for
> anything they could imagine or have heard about (jumps, thermals,
> air pressure, etc.).
>
> The main trick is that if you ever see your formal uncertainty dip
> below the Kalman filter, you know something is seriously wrong, and
> in the meantime the filter *probably* tells you what the situation
> is much more precisely than the formal numbers.

Granted, I've just read up a bit on Kalman filters, and I'm surely far
from understanding the major part of it. But doesn't that need a
reasonable a priori model of what the uncertainty is composed of? Or
at least a reasonable starting point for the covariance matrix? That
sounds to me like a chicken-and-egg problem: using some procedure to
get an idea of something that's also needed as its input...
Definitely I'll have to wrap my head around it, as it does indeed sound
like a promising technique. So lots more to read up on...
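
One way out I could imagine (purely my own guess, and the spec figures
below are placeholders, not taken from any datasheet) would be to seed
the filter from the manufacturer's specs and let the calibration
history take over from there:

    # Possible seed for the filter: treat the manufacturer's one-year
    # accuracy and drift figures as rough 2-sigma bounds.  Placeholder
    # values only.
    import numpy as np

    spec_1yr_error = 8e-6        # assumed 1-year accuracy spec (relative)
    spec_1yr_drift = 4e-6        # assumed 1-year drift figure

    x0 = np.array([0.0, spec_1yr_drift])          # "reads true", drifting at
                                                  # the spec rate
    P0 = np.diag([(spec_1yr_error / 2)**2,
                  (spec_1yr_drift / 2)**2])       # specs taken as ~2 sigma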

> >> The biggest advantage to inhouse calibration is that you can do it
> >> much more often, and therefore don't need to do it as precisely
> >> as the cal-lab, because the sticker only needs a date some months
> >> ahead.

> >One more question to that, as it's not entirely clear to me what
> >exactly you mean here:
> >Do I take it correctly that in case I would be willing to re-cal in,
> >say, three months instead of next year, the instruments used for
> >calibration do not necessarily need to be as precise as if I needed
> >a calibration good for one year? Or did you mean that I could afford
> >coming closer to the manufacturer's spec limit? Or something else
> >altogether?

> All of the above, but you need to do the math to show that it is ok.
>
> If you stick to the manufacturer's instructions, they did the math,
> and you won't need to.
>
> If you invent your own schedule, you need to do the math to find out
> the consequences for your uncertainty.

As I take it, calibrating more often than the recommended interval is
safe as long as I don't claim a lower uncertainty than what the spec
sheet for the instrument gives. Then again, there's also no real need
to re-cal more often than that.

But I'd still be interested in how one actually comes up with numbers,
that is, how to do the uncertainty analysis for different situations.
I'm curious how far one could get without detailed knowledge of the
equipment...
Again, more details to find and digest.
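
From the little I've read so far, the usual recipe seems to be a
GUM-style budget: list the contributions, combine the independent ones
in quadrature, expand with k = 2. A toy version (entries and values
invented purely for illustration) would be:

    # Toy GUM-style uncertainty budget: independent contributions combined
    # in quadrature, then expanded with k = 2.  All entries are invented.
    import math

    budget_ppm = {
        "reference (e.g. 3458A, 90 days)": 4.0,
        "reference drift since its cal":   2.0,
        "DUT noise / reading scatter":     1.5,
        "thermal EMFs, loading, leads":    1.0,
    }

    combined = math.sqrt(sum(u**2 for u in budget_ppm.values()))
    expanded = 2 * combined                     # roughly 95 % coverage
    print(f"combined {combined:.1f} ppm, expanded (k=2) {expanded:.1f} ppm")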

> Linear scaling is a good first approximation for drift, but not for
> other sources of noise or failure.

Failure I wouldn't even include here, unless it's a parametric
failure, which would then be the result of some drift process.

> >I guess a good calibrator like a Fluke 5730A might do the trick as
> >well for the mentioned measurement range if low currents don't
> >matter too much. And it might be easier to get nowadays, as even
> >well-known distributors don't quote a 3458A anymore. Might try to
> >get a quote for a Fluke 8508 and a Keithley 2002 as well...

> If you are in the EU, I think you need to buy the 3458A directly
> from Keysight for RoHS reasons.

Apparently, Datatec in Germany also still has a few left in stock,
priced at 8500 euros. They're actually cheaper than I had expected.
A single calibration run for our modular DC instruments would already
nearly buy one, and one 3458A would nearly be sufficient to calibrate
the stuff (just missing a few resistors, which at the required 0.1%
accuracy aren't that hard to come by...).
I should discuss that with my boss...

> I don't think I'm qualified to recommend specific equipment.
>
> I only mentioned the 3458A because it is generally seen as the "gold
> standard" and it is a damn good instrument in my own personal
> experience.

That's my interpretation as well. I just looked at what I could find
with a spec'ed uncertainty below 10 ppm, and the three units mentioned
were all I found from the major test equipment manufacturers. And
there's probably a reason why even Keithley lists a 3458A as equipment
needed for calibrating some of their high-end meters.

As it stands, the most likely solution hardware-wise in my case is
repurposing a 3458A we already have as the calibration reference. No
need to buy more fancy equipment, just some money for regular
calibration of the beast. Even though it often is easier to come up
with 100k euros for new equipment than with 10k for regular
maintenance of what we already have :-(

All the best,
Florian
