
Implementing ISO/IEC 17025 Measurement Uncertainty Requirements in MET/CAL Version 9.X

Introduction

 

There is an increasing need to determine measurement uncertainties in a calibration environment. This need is based on the requirement to comply with certain standards documents such as ISO/IEC 17025.

 

It is no longer sufficient to calculate the traditional test uncertainty ratio (TUR) per MIL-STD-45662A.  The TUR is usually calculated as:

 

          TUR = (Test Tolerance) / (Accuracy of Standard)

 

The TUR calculation is based on the stated uncertainty of the measurement standard, but does not represent the expanded measurement uncertainty because it does not encompass empirical information based on a sequence of actual measurements, nor does it incorporate measurement uncertainty information based on the resolution of the Device Under Test (DUT) or other components of the measurement system and aspects of the measurement environment.
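As a quick illustration, the TUR formula above can be computed directly. The DUT tolerance used here is a made-up number for illustration, since this note does not specify one:

```python
# Hypothetical numbers for illustration only; the DUT tolerance below is
# assumed, not taken from this note.
def tur(test_tolerance, standard_accuracy):
    """Traditional test uncertainty ratio: tolerance / standard accuracy."""
    return test_tolerance / standard_accuracy

# A DUT tolerance of +/-30 mA checked against a standard good to +/-7 mA:
ratio = tur(30e-3, 7e-3)   # ~4.3:1
```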

 

This application note discusses the implementation of measurement uncertainty calculations in MET/CAL version 9.x automated calibration software.

 

Throughout this application note, we will use actual examples of how MET/CAL calculates measurement uncertainty.

 

 

For our example, we will use a Fluke 5500A to calibrate a 3½ digit meter at 10 A, 50 Hz.

This meter has 10 mA resolution at 10.0 A.

 

Basic Calculation

 The measurement uncertainty calculation is simply:

 

          Expanded Uncertainty = (combined Standard Uncertainty) * K

 

Where K is the coverage factor.

 

Note: In Version 8 and later, Welch-Satterthwaite mode is enabled by specifying VSET WS = YES in a procedure. In Welch-Satterthwaite mode, MET/CAL determines the effective degrees of freedom (DF) and then looks up the coverage factor in a Student's t-distribution table at the specified confidence (KCONF).
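For readers who want to see the mechanics, the effective degrees of freedom can be sketched with the standard Welch-Satterthwaite formula from the GUM. This is an illustrative sketch, not MET/CAL's internal code:

```python
import math

def effective_dof(components):
    """Welch-Satterthwaite effective degrees of freedom.

    components: (u_i, nu_i) pairs, where u_i is a standard uncertainty
    and nu_i its degrees of freedom (use math.inf for Type B components
    treated as having infinite DF).
    """
    u_c = math.sqrt(sum(u ** 2 for u, _ in components))
    denom = sum(u ** 4 / nu for u, nu in components)
    return math.inf if denom == 0 else u_c ** 4 / denom

# A Type A component (5 readings -> 4 DF) plus a Type B component:
nu_eff = effective_dof([(16.36e-3, 4), (2.7e-3, math.inf)])   # ~4.2
```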

 

The combined Standard Uncertainty is an RSS (Root Sum Square) calculation:

 

          Combined Standard Uncertainty = √(U1² + U2² + U3² + … + U10²)

 

MET/CAL attempts to determine U1 and U2 automatically.

 

U1 is the calibrator's accuracy-related uncertainty.  It is the Normalized System Uncertainty: the instrumental measurement uncertainty of the reference standard, based on the uncertainty and confidence level of the calibration standard.  This uncertainty is taken from the ACC (accuracy) file for the calibrator used; the 90-day specification is typical for the ACC file.  U1 is considered a Type B uncertainty.

 

U2 is the combination of two DUT related uncertainty components:

 

  1. S1: The standard deviation of the mean within a sequence of actual measurements (a Type A uncertainty)
  2. S2: The resolution (or sensitivity) of the DUT (a Type B uncertainty)

 

U3, U4, U5, U6, U7, U8, U9, and U10 are optional uncertainty components, which may be directly specified by the procedure writer.  If specified, they are included in the RSS calculation; if not, they default to zero and do not affect the result.  Values specified in a procedure persist until changed or reset.  These optional uncertainty components are considered part of the Type B uncertainty.
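The RSS combination and the expanded-uncertainty scaling described above can be sketched as follows. This is a simplified illustration, not MET/CAL's implementation:

```python
import math

def combined_standard_uncertainty(u1, u2, *optional):
    """RSS of U1, U2 and any optional components U3..U10 (zero if omitted)."""
    return math.sqrt(sum(u ** 2 for u in (u1, u2) + optional))

def expanded_uncertainty(u_combined, k):
    """Expanded Uncertainty = Combined Standard Uncertainty * K."""
    return u_combined * k
```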

 

Note:  If Welch-Satterthwaite mode is enabled, the per-component degrees of freedom and per-component sensitivity coefficients may also be specified.

 

Determining U1, the Normalized System Uncertainty

 

In each test step in a MET/CAL calibration procedure there is a measurement standard and a DUT.

 

In most cases, the specification of a test in the calibration procedure includes information about the test sufficient for MET/CAL to automatically program the measurement standard.  The information is also used to look up the uncertainty of the standard in an external accuracy file.

 

The Normalized System Uncertainty is calculated as:

 

          Normalized System Uncertainty = System Uncertainty / Confidence Interval

 

Where:

 

System Uncertainty is calculated from data in the MET/CAL Accuracy file for the calibrator.

 

The Confidence Interval is a statistical measure of the confidence associated with the specification given for a calibration standard.  In normal operation, the Confidence Interval is looked up automatically and is taken from the header portion of the external accuracy file.

The following is an example from the header in the 5500A 90 Day accuracy file:

 

Begin Header

           instrument = Fluke 5500A

           interval   = 90 days

           confidence = 2.58 sigma

End Header

 

The typical Confidence values are 2 sigma, 2.58 sigma, and 3 sigma.  Note that the parameter called Confidence in this document is described in various technical documents as a “coverage factor”; however, it is not the same coverage factor used to determine the Expanded Uncertainty from the Standard Uncertainty.

 

For our example:

 

The 90 Day spec for the 5500A @ 10 A, 50 Hz

= ±(0.05% + 2000 µA) = ±(5 mA + 2 mA)

or

System Uncertainty = 7 mA at 2.58 sigma confidence

or

Normalized System Uncertainty = U1 = 7 mA / 2.58 = 2.7 mA at 1 sigma confidence
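The U1 arithmetic for this example can be written out directly. The numbers come from the 5500A 90-day spec quoted above; the ACC-file lookup itself is simplified here to two hard-coded spec terms:

```python
# Numbers from the 5500A 90-day spec at 10 A, 50 Hz; the ACC-file lookup
# itself is simplified to two hard-coded spec terms here.
nominal = 10.0           # amps
pct_of_output = 0.05     # percent of output
floor = 2000e-6          # amps (the 2000 uA floor term)
confidence = 2.58        # sigma, from the accuracy-file header

system_uncertainty = nominal * pct_of_output / 100 + floor   # 0.007 A
u1 = system_uncertainty / confidence                         # ~2.7 mA
```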

 

Determining the U2 portion

 

The second uncertainty component, U2, is typically based on a sequence of actual measurements and on the resolution of the DUT.  The calculation is:

 

          U2 = √(S1² + S2²)

 

Where S1 is based on the sequence of measurements, and S2 is based on the resolution of the DUT.

 

Note: In Version 8 and later, the U2 portion may be configured by the procedure writer using the U2M VSET command.  The procedure writer may choose U2M = RSS, which RSSes the S1 and S2 values as shown above, or U2M = Single, which uses either the S1 or the S2 value alone.

 

Determining S1

 

S1 is based on a sequence of measurements at a particular test point, and is calculated as:

 

          S1 = (SDEV / √N) * F

 

Where:

 

  1. N is the number of measurements
  2. SDEV is the standard deviation of the measurements
  3. F is a factor based on the Student’s T distribution and the number of degrees of freedom.  The number of degrees of freedom is one less than the number of measurements taken.

 

Unless overridden by use of the VSET FSC in a procedure, the value of F is determined per Table G.2 of Annex G of the document ISO Guide to the expression of uncertainty in measurement (commonly referred to as the GUM).

 

The values of F used by MET/CAL are exactly half the values shown in the 95.45% column of Table G.2.

 

Note that MET/CAL uses the simplifying assumption that the number of degrees of freedom is one less than the number of measurements (NMEAS).  If this assumption is not acceptable, it may be possible for the metrologist / procedure writer to directly calculate F and override MET/CAL's built-in determination of F.
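The first few of those halved Table G.2 values can be tabulated as follows. The entries were copied by hand from the GUM, so double-check them against your edition:

```python
# F = (95.45 % t-value) / 2, per GUM Table G.2; DF = NMEAS - 1.
# Only the first few entries are shown; verify against your copy of the GUM.
F_TABLE = {
    1: 13.97 / 2,
    2: 4.53 / 2,
    3: 3.31 / 2,
    4: 2.87 / 2,   # used when NMEAS = 5
    5: 2.65 / 2,
}

f_for_five_readings = F_TABLE[5 - 1]   # 1.435
```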

 

In our example:

 

We take 5 readings of the 10 A, 50 Hz source with our DMM; the standard deviation (SDEV) of the readings is 25.49 mA.

S1 = (25.49 mA / √5) * F

S1 = (25.49 mA / 2.236) * 1.435

S1 = 16.36 mA
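The same S1 arithmetic, as a minimal sketch:

```python
import math

def s1(sdev, n, f):
    """S1 = (SDEV / sqrt(N)) * F, the scaled standard deviation of the mean."""
    return sdev / math.sqrt(n) * f

# Values from the worked example: SDEV = 25.49 mA, N = 5, F = 1.435 (DF = 4).
s1_value = s1(25.49e-3, 5, 1.435)   # ~16.36 mA
```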

 

Determining S2

 

S2 is based on the resolution (or sensitivity) of the DUT.  With most measuring devices, the resolution limits knowledge of the measured value to one half of the smallest displayed increment.  The reason it is necessary to include the S2 component in the calculation of the second uncertainty component, U2, is that in cases where the uncertainty of the standard is much smaller than the uncertainty of the DUT, there is a high probability that a sequence of measurements at a particular test point will all yield the identical value.  In this case the calculated standard deviation of the measurements will be zero, and S1 will therefore also be zero.  However, a standard deviation of zero does not indicate the measurements are all absolutely the same; it only indicates that, within the resolution of the DUT, the measurements are the same.

 

For example, if the real value of an applied signal is fluctuating, but always with less than +/- 0.5 count as shown on the display of a DMM, a sequence of identical measurements would be recorded.  Including S2, therefore, prevents the inappropriate estimate of U2 as zero in such cases.

 

S2 is calculated as:

          S2 = (Resolution / 2) / √3

The √3 term comes from assuming a rectangular distribution of probabilities of values within a range defined by half the resolution of the DUT.  This resolution is, by default, determined indirectly, from information given in the procedure.  It is typically based on the specified NOMINAL value, although there are other sources of information when the NOMINAL value is not directly specified by the procedure writer.

 

For example, suppose a DC Volts verification test is done at 1V.  If the procedure writer specifies that the NOMINAL value is “1.00V”, MET/CAL infers from the format of the NOMINAL specification that the resolution of the DUT is 0.01V.
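That inference can be sketched as follows. `resolution_from_nominal` is a hypothetical helper that mimics the inference described above; the real MET/CAL logic also draws on other procedure information:

```python
import math

def resolution_from_nominal(nominal: str) -> float:
    """Infer DUT resolution from the format of a NOMINAL string, e.g. "1.00V".

    Hypothetical helper mimicking the inference described in the text;
    MET/CAL's actual logic also uses other procedure information.
    """
    digits = nominal.rstrip("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz ")
    if "." in digits:
        return 10.0 ** -len(digits.split(".")[1])
    return 1.0

def s2(resolution: float) -> float:
    """S2 = (resolution / 2) / sqrt(3), per a rectangular distribution."""
    return (resolution / 2) / math.sqrt(3)
```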

In our example, the DUT has 10 mA resolution at 10.0 A, so:

          S2 = (10 mA / 2) / √3 = 2.89 mA

Determining U3, U4, …, U10

As previously stated, the calculation of the Combined Standard Uncertainty is:

          Combined Standard Uncertainty = √(U1² + U2² + U3² + … + U10²)

Where U3, U4, …, U10 are optional uncertainty components which can be directly specified to augment the measurement uncertainty calculation.  These optional uncertainties are used to account for effects such as lead loading, thermal EMFs, etc., and, unless otherwise specified by the procedure writer, they are considered part of the Type B uncertainty.

U3, U4, …, U10 may be directly specified in a MET/CAL calibration procedure.  The specification may apply to a single test, a sequence of tests, or to the entire procedure.  The default value for these components is zero, so unspecified components make no contribution.

 

Recall also that the Expanded Uncertainty is calculated as:

 

          Expanded Uncertainty = (Combined Standard Uncertainty) * K

 

Where K is the coverage factor.

 

Thus, an assigned value of U3, U4, …, U10 affects both the Combined Standard Uncertainty and the Expanded Uncertainty.  Note that all manually specified uncertainty components must be expressed at 1 sigma.
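Pulling this note's worked example together, the full chain from U1, S1, and S2 to the Expanded Uncertainty looks like the following. K = 2 is assumed here purely for illustration; MET/CAL determines the coverage factor from the procedure's settings:

```python
import math

# K = 2 is assumed here for illustration; MET/CAL determines the coverage
# factor from the procedure's settings (e.g. KCONF / Welch-Satterthwaite).
u1 = 2.713e-3                       # A, normalized system uncertainty
s1 = 16.36e-3                       # A, Type A component from 5 readings
s2 = (10e-3 / 2) / math.sqrt(3)     # A, resolution component (10 mA DUT)
u2 = math.hypot(s1, s2)
u_combined = math.hypot(u1, u2)
expanded = u_combined * 2           # ~33.7 mA at K = 2
```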

 

It is up to the metrologist or procedure writer to assign values to these components.  In general these components are intended for Type B uncertainties.  These uncertainties are not directly based on the sequence of measured values, the uncertainty of the main calibration standard, or the resolution of the DUT, because those uncertainty components are incorporated in U1 and U2.  If Welch-Satterthwaite mode is enabled, the degrees of freedom and sensitivity coefficients may be specified for each of the optional uncertainty components.

  

As stated in the GUM, information used to determine Type B uncertainties includes:

 

  • Previous measurement data
  • Knowledge of relevant behavior and properties of materials and instruments
  • Manufacturer’s specifications
  • Calibration certificates
  • Uncertainties assigned to reference data taken from handbooks

 

In practice, sources of additional, optional uncertainty components may include:

 

  • Test leads
  • Terminators
  • Attenuators
  • Power splitters
  • Thermocouples
  • Other signal conditioners
  • Environmental factors (temperature, humidity)

 

In some cases it may be appropriate to leave all optional uncertainty components unassigned.  For example, if you are using a Fluke 5720A to calibrate a Fluke 10 DMM, the resolution of the DUT will dominate the measurement uncertainty calculation and any uncertainty contribution from, say, test leads, will be negligible.  On the other hand, if you are using, for example, an HP 3458A to measure a precision resistor, uncertainty due to test leads and temperature fluctuations in the lab may be important.

 

For Type A uncertainty components MET/CAL uses the number of measurements, minus one, as the number of degrees of freedom.  This affects the determination of U2, which is based in part on the standard deviation of a sequence of measurements at a given test point.

The optional uncertainty components, U3 to U10, may be Type A or Type B; however, if they are Type A, it is the responsibility of the user (procedure writer) to determine the number of degrees of freedom and perform the required statistical calculation before entering the uncertainty component into MET/CAL.  These optional components must be normalized to 1 sigma.

 

 

 

 

 
