Applying the manufacturer's tolerances to UUTs (and the word 'Reading') in MET/CAL
Perhaps you could help us on a question about UUT specifications and MET/CAL.
I'm having a hard time explaining to my work colleagues the correct way of applying a manufacturer's tolerances to an instrument because of the English word "Reading", and I would like to know how MET/CAL calculates the tolerance of the UUT.
Suppose that we have a FLUKE 5520A Multifunction Calibrator generating 10 V DC (this is the standard value), and that we want to calibrate a FLUKE 179 handheld multimeter at 10 V on the 100 V range, but the multimeter reads 6.4 V.
So the error is 6.4 V - 10 V = -3.6 V.
Let's suppose that the manufacturer defines the following UUT tolerance:
(1% of Reading + 0.1% of Scale)
Because of the word 'Reading' in the specifications, some colleagues say that the UUT tolerance should be calculated as:
6.4 V * 1% + 100 V * 0.1% = 0.164 V
But I believe that the tolerance should be calculated as:
10 V * 1% + 100 V * 0.1% = 0.200 V
This is because the word 'Reading' in the specification really stands for the expected reading, so the specification should always be applied to the value we expect to read, not to the value actually read. If we applied it to the value actually read, the same test point could yield multiple tolerance values, and in the absurd case of a very large error (for example, a reading of 1 V, giving an error of -9 V), the tolerance would become very small.
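To make the disagreement concrete, here is a minimal sketch in plain Python (not MET/CAL procedure syntax) that evaluates both interpretations. The function names tol_expected and tol_actual are my own, and the percentages and the 100 V range are just the example numbers above:

    # Specification: 1% of Reading + 0.1% of Scale (example values)
    PCT_READING = 0.01   # 1% of Reading
    PCT_SCALE = 0.001    # 0.1% of Scale
    SCALE = 100.0        # 100 V range

    def tol_expected(expected):
        # Tolerance based on the expected (nominal) reading
        return PCT_READING * expected + PCT_SCALE * SCALE

    def tol_actual(actual):
        # Tolerance based on the value actually read
        return PCT_READING * actual + PCT_SCALE * SCALE

    nominal = 10.0
    for reading in (6.4, 1.0):
        error = reading - nominal
        print(f"reading={reading:4.1f} V  error={error:+.1f} V  "
              f"tol(expected)={tol_expected(nominal):.3f} V  "
              f"tol(actual)={tol_actual(reading):.3f} V")

    # Output:
    # reading= 6.4 V  error=-3.6 V  tol(expected)=0.200 V  tol(actual)=0.164 V
    # reading= 1.0 V  error=-9.0 V  tol(expected)=0.200 V  tol(actual)=0.110 V

As the output shows, the "expected reading" interpretation gives one fixed tolerance (0.200 V) for the test point, while the "actual reading" interpretation gives a tolerance that shrinks as the instrument's error grows, which is the absurd behavior described above.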
I was wondering what the 'correct' way of calculating the UUT tolerance would be, and how MET/CAL does it.
My sincere regards and thank you in advance,