Please clarify the Rule of 10 to 1 - AND - What is the ndc number?

Miner

Forum Moderator
Leader
Admin
Is there anybody out there? :confused:
Please be patient. Some of us have jobs that prevent us from being online in the Cove at all while others are too busy to do more than spot check.

I doubt that you will find anything in the AIAG manuals. The 10% Rule is a guideline from ASME B89.7.3.1-2001, "Guidelines for Decision Rules: Considering Measurement Uncertainty in Determining Conformance to Specifications". There is also a 4:1 rule from ANSI/NCSL Z540-1 and MIL-STD-45662A.

There are other Covers with deeper knowledge in this area.
 

Coleman Donnelly

Well, I have waited a few days and still nothing... I wasn't trying to be rude or pushy, just working on a short timetable (as always)... As a result, "we" have made a decision that I don't fully agree or disagree with, because I could not obtain evidence for or against the argument I was making.
 

Miner

Forum Moderator
Leader
Admin
I requested input from another forum moderator to help answer your question. Your question crosses between the MSA and calibration forum.
 

BradM

Leader
Admin
Well, I have waited a few days and still nothing... I wasn't trying to be rude or pushy, just working on a short timetable (as always)... As a result, "we" have made a decision that I don't fully agree or disagree with, because I could not obtain evidence for or against the argument I was making.

Sorry you did not have the answers that you needed for your issue. :eek:

I doubt that you will find anything in the AIAG manuals. The 10% Rule is a guideline from ASME B89.7.3.1-2001, "Guidelines for Decision Rules: Considering Measurement Uncertainty in Determining Conformance to Specifications". There is also a 4:1 rule from ANSI/NCSL Z540-1 and MIL-STD-45662A.


Now, to Miner's post here. Coleman, is this what you mean? I got somewhat lost in your previous post. Are you talking about accuracy ratios?

If you are concerned with keeping accuracy ratios, then "yes", anything you use to calibrate another device should be significantly more accurate than the device you are verifying. A 10 to 1 ratio is good; like Miner suggested, a 4 to 1 ratio is usually considered minimal.

Exactly what is the argument you were trying to make, and what was unknown to you?
 

Hershal

Metrologist-Auditor
Trusted Information Resource
I guess I should be a little more specific....

We use a universal gage to measure the length of our parts. The universal tolerance for the length of a part is +/- .040". That means that the tolerance of the length gage should be +/- .004". We set our length gages using a transfer standard. My understanding is that the length of the transfer standard should then be held to +/- .0004". Maybe I am wrong - if so, please enlighten me! However, if I am not wrong, how do I prove my argument? Preferably using AIAG manuals!

Let's see if I can help with this question, and clarify the application of the 4:1 and 10:1 rules.....no promises.....

If the tolerance for the part is +/- 0.04", then the question becomes - from a calibration perspective - the accuracy and uncertainty of the measuring instrument, as those are the driving questions. If the accuracy of the measuring instrument is, say, +/- 0.004" as given in the quote, then the measuring instrument has the ability to accurately measure to 1/10 of the part tolerance. That can be a good thing.

Calibration of the measuring instrument then means that the standard used to calibrate the instrument would have to be able to accurately measure to +/- 0.001" in order to have a 4:1 TAR, or test accuracy ratio.

Cal labs now work with uncertainty as a normal course, rather than accuracy. Without knowing the uncertainty of the calibration of the measuring instrument, numbers would be speculation. However, the same concept regarding the calculation of 4:1 or 10:1 applies: the uncertainty of the standard(s) used to calibrate the measuring instrument is to be equal to or less than 1/4 of the uncertainty of the calibration of the measuring instrument.
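A minimal sketch of that chain, taking the numbers above at face value (the 10:1 step from part to gage and the 4:1 step from gage to standard are the ratios under discussion, not requirements pulled from any AIAG manual):

```python
# Illustrative accuracy chain: part tolerance -> gage -> calibration standard.
# Numbers are from the discussion above; adjust to your own case.

part_tolerance = 0.040                  # +/- inches, from the part drawing
gage_accuracy = part_tolerance / 10     # 10:1 rule  -> +/- 0.004"
standard_accuracy = gage_accuracy / 4   # 4:1 TAR    -> +/- 0.001"

print(f"Gage should be good to     +/- {gage_accuracy:.4f} in")
print(f"Standard should be good to +/- {standard_accuracy:.4f} in")
```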

Hope this helps. I realize it will likely generate more questions.....a good thing!
 

Coleman Donnelly

As mentioned in other posts - uncertainty is something that we are still trying to tackle... As a result I am working within my means.

The confusion arose when there was discussion about the validity of using length transfer standard blocks to calibrate a "length gage". 4:1 is considered the minimum amount of accuracy required to transfer or calibrate a known quantity. It was argued that the length is being transferred, so the discrimination rule was not cumulative; i.e., if the gage blocks are 10x more accurate than the part to be measured, they can adequately transfer that same degree of accuracy over to the actual gage that will be used to check length, so that the "length gage" would also remain 10x more precise than the length tolerance of the part to be measured.

The situation becomes a little compounded when you factor in multiple gage transfer blocks; i.e., if my "length gage" has to be set up to check 48" +/- 0.040", then I will need to use multiple blocks to reach my desired length, because the largest block we have in stock is 12". Now, if I have four 12" blocks, each held to 12" +/- 0.004", my collective statement becomes 48" +/- 0.016" (if I am wrong here, please - someone let me know).

Now, based on this situation, would 12" +/- 0.004" be adequate when checking my length blocks to satisfy the purpose they were intended to be used for?

If I took it to the next step and used 12" +/- 0.001" by implementing the 4:1 rule, would this satisfy my requirement when stacking up 4 blocks for a total of 48" +/- 0.004", since I am making a length gage that needs to check 48" +/- 0.004"? Or would my overall stack-up need to be +/- 0.001" to satisfy the 4:1 rule?

Now, if my stack-up tolerance needs to be considered in applying a tolerance to all of my length standards, how do I satisfy an auditor that this is being done correctly?

Thank you for the responses - they are appreciated :thanx:
 

BradM

Leader
Admin
As mentioned in other posts - uncertainty is something that we are still trying to tackle... As a result I am working within my means.

I too am working with uncertainty. :) Fun, isn't it?

The confusion arose when there was discussion about the validity of using length transfer standard blocks to calibrate a "length gage". 4:1 is considered the minimum amount of accuracy required to transfer or calibrate a known quantity. It was argued that the length is being transferred, so the discrimination rule was not cumulative; i.e., if the gage blocks are 10x more accurate than the part to be measured, they can adequately transfer that same degree of accuracy over to the actual gage that will be used to check length, so that the "length gage" would also remain 10x more precise than the length tolerance of the part to be measured.

When you maintain adequate ratios, you are increasing your confidence in making correct decisions. I don't think I would make the statement that you're transferring accuracies.

Let's take a pressure gauge that has a mfg. accuracy of +/- 1 PSI. If I am using the 4 to 1 accuracy ratio, then I can check that with a standard that is +/- 0.25 PSI. I could even use a higher-order standard that is +/- 0.025 or even 0.0025 PSI. However, the accuracy of the gauge being tested is still +/- 1 PSI, due to its materials, characteristics, design, etc. I haven't made the equipment better by using a better standard; I just deliver a more confident measurement.
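As a rough sketch of that ratio check (the numbers are just the ones from the gauge example above, and the little helper function is purely illustrative):

```python
# Test accuracy ratio (TAR) check, using the pressure-gauge numbers above.
# 4:1 is the usual minimum; 10:1 is better when you can get it.

def test_accuracy_ratio(uut_accuracy: float, standard_accuracy: float) -> float:
    """Ratio of the unit-under-test accuracy to the standard's accuracy."""
    return uut_accuracy / standard_accuracy

gauge = 1.0        # +/- PSI, manufacturer accuracy of the gauge being checked
standard = 0.25    # +/- PSI, accuracy of the calibration standard

tar = test_accuracy_ratio(gauge, standard)
print(f"TAR = {tar:.1f}:1 -> {'OK (>= 4:1)' if tar >= 4 else 'too low'}")
```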

The situation becomes a little compounded when you factor in multiple gage transfer blocks; i.e., if my "length gage" has to be set up to check 48" +/- 0.040", then I will need to use multiple blocks to reach my desired length, because the largest block we have in stock is 12". Now, if I have four 12" blocks, each held to 12" +/- 0.004", my collective statement becomes 48" +/- 0.016" (if I am wrong here, please - someone let me know).

True, you do accumulate some error by wringing multiple blocks together, hence the reason people buy large gauge blocks. I think what you're seeing is the value of estimating the uncertainty of your measurement system.

If you had reported deviation (with reported uncertainty) for each of the gauge blocks, you could use those numbers in your calculations.
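Purely as a sketch of what that might look like - the deviations and uncertainties below are made-up placeholder values, not anything from a real cert - you could sum the reported deviations to get the actual stack length, and combine the reported uncertainties by root-sum-square rather than simple addition:

```python
import math

# Hypothetical calibration-report values for four 12" blocks:
# (deviation from nominal, expanded uncertainty at k=2), both in inches.
blocks = [
    (+0.00012, 0.00005),
    (-0.00008, 0.00005),
    (+0.00003, 0.00005),
    (-0.00010, 0.00005),
]

nominal = 4 * 12.0
actual = nominal + sum(dev for dev, _ in blocks)

# Worst-case view: the +/- 0.004" tolerances simply add (the 0.016" above).
worst_case = 4 * 0.004

# Statistical view: convert each k=2 uncertainty to standard (divide by 2),
# combine by root-sum-square, then re-expand at k=2.  Wringing and
# temperature effects are left out here; see Hershal's post further down.
u_combined = math.sqrt(sum((u / 2) ** 2 for _, u in blocks))
expanded = 2 * u_combined

print(f"Stack length from cert deviations: {actual:.5f} in")
print(f"Worst-case tolerance stack       : +/- {worst_case:.3f} in")
print(f"RSS of cert uncertainties (k=2)  : +/- {expanded:.5f} in")
```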
 

pinpin - 2009

Hi Everyone, a very Good Morning to all of YOU!

I was taught like this:

1) The 10:1 rule is used to give us confidence about whether the measurement result indeed meets requirements or not (see the small sketch after this list). For example, when the reading is at the maximum for a part that requires 4.2 +/- 0.02, we cannot be sure whether it is exactly 4.220, or anywhere from 4.221 to 4.229. We definitely should not accept 4.22 had we not used the 10 to 1 rule.

2) Besides applying this 10:1 rule, we need to take into account the accumulated uncertainties throughout the calibration chain. While the reading is at the max of 4.22, we could tell how far it may lie beyond 4.22, and at what confidence level, say 95%.
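A small sketch of the discrimination point in 1), assuming the 10:1 rule is read as a resolution requirement against the total tolerance band (one common interpretation, not the only one):

```python
# Discrimination sketch for a 4.2 +/- 0.02 requirement.
spec_tol = 0.02
band = 2 * spec_tol                 # total tolerance band = 0.04
suggested_resolution = band / 10    # one common reading of the 10:1 rule
print(f"Tolerance band {band}, suggested resolution <= {suggested_resolution}")

# An instrument that only reads to 0.01 shows 4.22 for anything from roughly
# 4.215 to 4.225, so we cannot tell a just-passing part from a failing one.
# With 0.001 resolution the same part might read 4.219 (in) or 4.223 (out).
```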

Please correct me if this is not correct, thank you! :thanx:
 

Hershal

Metrologist-Auditor
Trusted Information Resource
As mentioned in other posts - uncertainty is something that we are still trying to tackle... As a result I am working within my means.

The confusion arose when there was discussion about the validity of using length transfer standard blocks to calibrate a "length gage". 4:1 is considered the minimum amount of accuracy required to transfer or calibrate a known quantity. It was argued that the length is being transferred, so the discrimination rule was not cumulative; i.e., if the gage blocks are 10x more accurate than the part to be measured, they can adequately transfer that same degree of accuracy over to the actual gage that will be used to check length, so that the "length gage" would also remain 10x more precise than the length tolerance of the part to be measured.

The situation becomes a little compounded when you factor in multiple gage transfer blocks; i.e., if my "length gage" has to be set up to check 48" +/- 0.040", then I will need to use multiple blocks to reach my desired length, because the largest block we have in stock is 12". Now, if I have four 12" blocks, each held to 12" +/- 0.004", my collective statement becomes 48" +/- 0.016" (if I am wrong here, please - someone let me know).

Now, based on this situation, would 12" +/- 0.004" be adequate when checking my length blocks to satisfy the purpose they were intended to be used for?

If I took it to the next step and used 12" +/- 0.001" by implementing the 4:1 rule, would this satisfy my requirement when stacking up 4 blocks for a total of 48" +/- 0.004", since I am making a length gage that needs to check 48" +/- 0.004"? Or would my overall stack-up need to be +/- 0.001" to satisfy the 4:1 rule?

Now, if my stack-up tolerance needs to be considered in applying a tolerance to all of my length standards, how do I satisfy an auditor that this is being done correctly?

Thank you for the responses - they are appreciated :thanx:

That you are struggling with uncertainty is OK.....so is almost everyone else who makes measurements.....and yes, that includes Metrology professionals like me.....you are NOT alone there.....

As for combining blocks, there is a factor for each combination known as wringing.....there are studies, and you can do your own if you have lots of time and nothing better to do.....or take the accepted number of 0.00005" per wringing.....for your example of four 12" blocks, here is how it works.....the three wringings between them can be combined and so are 0.00015", and this is a rectangular distribution, so divide by the square root of three, or 1.732.....this is dropped into the final formula.....

You gave a number of 0.004" for each 12" block.....you did not state this as an uncertainty, but for discussion let's say it is.....each block has its own uncertainty, hence you have four blocks in this case, each with an uncertainty across the block of 0.004", a normal distribution if calibrated by an accredited laboratory with the uncertainty expressed at k=2 to approximate 95% confidence, so divide by 2 to return to standard uncertainty and drop that into the final formula.....

Each block was calibrated at some temperature, most likely 20 C, so if it is used at some other temperature, then the difference from the CALIBRATION temperature (which should be listed on the cal cert for the blocks) must be taken, and that difference is a rectangular distribution.....after division by the square root of three, drop the number into the final formula.....

All of these are Type B, or systematic, contributions.....

Now, take several readings, compute the standard deviation (the sample form, with n-1 in the denominator), and divide by the square root of n to obtain the Type A, or random, uncertainty, then drop it into the final formula.....

Now, the final formula.....

Take all these numbers, square each of them, add them together, and take the square root of the sum.....this is known as root-sum-square, or RSS.....it gives you what is known as the combined standard uncertainty.....

Then use the Student's t-table to find the multiplier to achieve 95% confidence.....in theory, one can determine specific degrees of freedom, but in a practical approach the Type B contributions (which have no set number of readings) are considered infinite, so 1.96 will provide 95% confidence.....the result is typically expressed at k=2, which arrives at 95.5% confidence but is easier to work with.....
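Pulling those steps together, here is a minimal sketch of such a budget; the wringing allowance, block uncertainties, temperature difference, expansion coefficient, and repeat readings are all assumed values for illustration, not numbers from any actual cert:

```python
import math
import statistics

# --- Type B contributions (all converted to standard uncertainty, inches) ---

# Wringing: accepted allowance of 0.00005" per wring, three wrings for a
# four-block stack, treated as a rectangular distribution (divide by sqrt(3)).
u_wringing = (3 * 0.00005) / math.sqrt(3)

# Block calibration: assume each 12" block carries an expanded uncertainty of
# 0.004" at k=2 (normal), so divide by 2; four independent blocks, combined RSS.
u_blocks = math.sqrt(sum((0.004 / 2) ** 2 for _ in range(4)))

# Temperature: assume use up to 2 C away from the 20 C calibration temperature,
# converted to a length effect over 48" with an assumed steel expansion
# coefficient of ~11.5e-6 per C, treated as rectangular.
delta_t = 2.0                       # C, assumed
cte = 11.5e-6                       # per C, assumed for steel
u_temp = (delta_t * cte * 48.0) / math.sqrt(3)

# --- Type A contribution (random) ---
# Assume five repeat readings of the 48" setup, in inches.
readings = [48.0007, 48.0009, 48.0006, 48.0010, 48.0008]
u_type_a = statistics.stdev(readings) / math.sqrt(len(readings))

# --- Combine by root-sum-square and expand at k=2 (~95% confidence) ---
u_c = math.sqrt(u_wringing**2 + u_blocks**2 + u_temp**2 + u_type_a**2)
U = 2 * u_c
print(f"Combined standard uncertainty: {u_c:.6f} in")
print(f"Expanded uncertainty (k=2)   : {U:.6f} in")
```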

Hope this helps......
 