MetrologyTom
Hello, I've been asked about a specification at my company that defines when an out-of-tolerance (OOT) form will be filled out and sent to a manager, and then possibly to customers, due to an OOT "as found" condition at the calibration interval. Hopefully this is an appropriate subforum for this. The problem is that there seem to be different interpretations of the text, and the examples provided aren't really examples.
The basic text is (paraphrased): "if an OOT condition exceeds the required accuracy by more than 25% on 10:1 accuracy ratio gaging, the technician will fill out the OOT form". The example given is "a gage used for a .001 part tolerance is calibrated to .0001 accuracy, or a 10:1 ratio; if during calibration the gage is found to exceed .000025 out of calibration, or 25% of the required accuracy, it is out of tolerance".
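To make the ambiguity concrete, here is a minimal sketch (Python, and the variable names are mine, not from the spec) of the two readings I keep running into: one where anything beyond 25% of the required accuracy (.000025) triggers the form, as the example literally reads, and one where the error has to exceed the required accuracy by more than 25% (.000125), as the main text reads to me.

# Two readings of the paraphrased spec example (names are mine, not the spec's)
part_tolerance = 0.001
required_accuracy = part_tolerance / 10       # 10:1 accuracy ratio -> .0001

# Reading 1: the example as literally written -- error beyond 25% of the accuracy triggers the form
trigger_reading_1 = 0.25 * required_accuracy  # .000025

# Reading 2: the main text as I read it -- error must exceed the accuracy by more than 25%
trigger_reading_2 = 1.25 * required_accuracy  # .000125

print(trigger_reading_1, trigger_reading_2)   # roughly 2.5e-05 0.000125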
First off, the gage in this case is used for many part tolerances, so that isn't straightforward (and it may be checked with masters that don't span exactly the tolerance of the parts to be checked). And a gage isn't really "calibrated" to an accuracy; every kind of gage has a limit to how accurate it can possibly be. Generally we look at the error over the largest span (the tolerance, or something a bit larger). In this case the gage is much more accurate than 10% of the tolerance. But here is a simplified example of how I see this working:
We have two masters, one at 1.0 and one at 1.001 (perfect, the same as our part tolerance!). We master at 0 and check the other master on the gage. We want it to read from 1.0009 to 1.0011 to meet the 10:1 ratio. However, to be bad enough to submit an OOT form, it needs to read lower than 1.000875 or higher than 1.001125 (25% MORE off than the normal allowance). Is this how others would interpret this specification? I've heard other interpretations, which leads me to believe it has been handled in several different ways over the years. I can re-write this to make it more clear, but the original intent is lost to time and probably partially comes out of looking at old standards like MIL-STD-120.
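As a quick calculation (again just a sketch, assuming my reading that the OOT-form threshold is 125% of the normal allowance), the bands for that example work out like this:

# Acceptance and alert bands around a master's nominal value (my interpretation, not the spec's)
def bands(nominal, allowance, alert_factor=1.25):
    return (nominal - allowance, nominal + allowance,
            nominal - alert_factor * allowance, nominal + alert_factor * allowance)

part_tolerance = 0.001
allowance = part_tolerance / 10    # .0001 per the 10:1 rule
print(bands(1.001, allowance))
# approximately (1.0009, 1.0011, 1.000875, 1.001125)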
The other problem I have is that, in practice, a gage is checked over different ranges than the tolerances we're checking, because of the exact sizes of the masters available. Or we check it over several intervals. My thought is that we apply the 10% rule to whatever interval we check over (and, further, the 25% beyond that for an alert). With the kind of instrumentation here, the adjustments are gains and the errors are linear. So if we had masters of 1.000 and 1.002, I would be looking for 1.0019 to 1.0021, with an alert outside of 1.001875 - 1.002125.
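Using the same sketch helper from above on that second case, I'm assuming the plus/minus allowance stays at .0001 (which reproduces the numbers above); whether it should instead scale to 10% of the .002 check interval is part of what I'm trying to pin down.

# Same sketch applied to the 1.000 / 1.002 masters
# Assumption: the plus/minus allowance stays at .0001; if it scaled with the .002
# check interval it would be .0002 and the bands would be wider.
print(bands(1.002, 0.0001))
# approximately (1.0019, 1.0021, 1.001875, 1.002125)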
None of this is what has been done in the past; for the two intervals in my actual case (master at "0", then check against masters at -.0002 and -.0004) I was told "we're allowed .000030 off at each interval" as the past practice.