Tolerance goes to zero for nominal values of zero
I am measuring the frequency response of a microphone preamplifier by injecting a signal with an SRS DS360 signal generator and measuring the output voltage with an Agilent 34401A DMM. The signal is nominally 120 dBµV and referenced to 1 kHz. The generator amplitude and frequency, the expected nominal value, and the tolerances are read from a text file for each test point. At some frequencies the level should not differ from the 1 kHz level measured by the DMM, so the nominal difference is given as 0. An example is as follows.
@freq = 5011.9 Hz
@amp = 1 Vrms
@kHz_ref = 119.976 dBµV
@dB_ref = 1E-6
M1 = 0.07 dBµV
M2 = 0.07 dBµV
MEM = 119.934 dBµV
MEM1 = 0.000 dBµV
1.030 PORT [@DS360]FREQ [V @freq]; AMPL [V @amp]VR
1.031 PORT [@34401]READ?[I]
1.032 MATH MEM = 20*log(MEM/@dB_ref) - @kHz_ref
1.033 TSET TDESC = [V @freq] Hz
1.034 MEMCX dB -M1U +M2U
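For context, the MATH step converts the DMM reading from volts to dBµV and subtracts the 1 kHz reference level, so at the frequencies that track the reference the expected result is 0.000 dB. A minimal sketch of that arithmetic in Python, using the values from the listing above (the 0.99243 Vrms reading is only an assumed example):

import math

dB_ref = 1e-6        # 1 uV reference for dBuV
kHz_ref = 119.976    # level measured at 1 kHz, dBuV
reading_v = 0.99243  # assumed DMM reading at 5011.9 Hz, Vrms

level_dbuv = 20 * math.log10(reading_v / dB_ref)  # ~119.934 dBuV
deviation_db = level_dbuv - kHz_ref               # ~-0.042 dB relative to 1 kHz

print(f"{level_dbuv:.3f} dBuV -> {deviation_db:+.3f} dB")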
Whenever the nominal value is 0.000, the reported tolerances also collapse to zero, even though the tolerances are expressed in units rather than percentages. Is this because MET/CAL converts the tolerance to a ratio of the nominal value, or to some other function of it that assumes a nonzero expected result?
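To illustrate what I suspect is happening: if the evaluation internally rescales the unit tolerance against the nominal (for example, converts it to a fractional tolerance and back), a nominal of 0 forces the limits to 0 no matter what ±dB value is entered. This Python sketch is only a guess at the behavior, not MET/CAL's actual implementation:

def limits_as_ratio(nominal, tol_units):
    # Suspected failure mode: normalize the unit tolerance by the nominal,
    # then rebuild absolute limits from that ratio.
    ratio = 0.0 if nominal == 0 else tol_units / nominal
    return (nominal - ratio * nominal, nominal + ratio * nominal)

def limits_in_units(nominal, tol_units):
    # What I expect for a tolerance entered in units (dB).
    return (nominal - tol_units, nominal + tol_units)

print(limits_as_ratio(0.000, 0.07))   # (0.0, 0.0)    <- what I see
print(limits_in_units(0.000, 0.07))   # (-0.07, 0.07) <- what I expect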
Also, the expanded uncertainty is always reported much higher when the nominal value is zero. Even when the value is entered as 0.000, MET/CAL stores it as 0, so the UUT resolution is significantly coarser than at the other test points. I could set UUT_RES to default to three decimal places, but I'd prefer a more elegant solution.
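To show the resolution effect I mean: if the nominal is carried as the number 0 rather than the text 0.000, a resolution inferred from the printed nominal becomes 1 instead of 0.001, and that term dominates the uncertainty budget. Again, a rough Python sketch of the effect, not MET/CAL internals:

def inferred_resolution(nominal_text):
    # Take resolution as one count of the last displayed digit.
    decimals = len(nominal_text.split(".")[1]) if "." in nominal_text else 0
    return 10 ** -decimals

print(inferred_resolution("0.000"))  # 0.001 - what the text file specifies
print(inferred_resolution("0"))      # 1.0   - what a stored value of 0 implies
# A resolution term of roughly res / (2 * sqrt(3)) then swamps the ~0.07 dB tolerance.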