Please clarify the Rule of 10 to 1 - AND - What is the ndc number?

sonflowerinwales

In the country
Daniel
Rounding of results is incorrect. The 9.86 mm and 10.12 mm are out of tolerance and should be recorded as such. If you are using a digital vernier, the resolution is 0.01 mm, but the accuracy is 0.02 mm according to the standards and the manufacturer's specification!
Paul
 

Daniel Negrea

Sonflowerinwales,

Thank you for your answer. Some people tried to shake my confidence in my knowledge by implying that 9.86 mm can be rounded to 9.9 mm (the drawing specifies only one decimal place for this dimension), and the part would still be OK.

Regards,

Daniel
 

Daniel Negrea

Jim,

You are 100% right; this is the way I know it to be as well, but some people tried to shake my confidence.

Regards,
Daniel
 

Hershal

Metrologist-Auditor
Trusted Information Resource
A little clarification on 10:1

The 10:1 rule is - to use an analogy - like the big brother to the 4:1 rule, and in the U.S. it is "an accepted metrological specification" under ANSI/ISO/IEC 17025.

10:1 or better is considered optimal in metrology, whereas 4:1 is considered the minimum. The 4:1 ratio is specifically described in the American National Standard ANSI/NCSL Z540-1-1994, Clause 10.2.b, which is a word-for-word carryover of the long-since-dead-and-buried MIL-STD-45662A.

The current interpretation of the rule is a test uncertainty ratio (TUR): the expanded uncertainty of the calibration performed is the basis for working back to the 4:1 from the instruments used to effect the calibration. As an example, if a caliper is calibrated at 0.001 inch with an expanded uncertainty of 600 microinches for that calibration, then the collective expanded uncertainty of the gage blocks used to calibrate the caliper must be no more than 150 microinches in order to maintain the 4:1 ratio.
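Hershal's arithmetic can be sketched as a trivial calculation (the function name and structure here are my own illustration, not anything from the standard):

```python
def max_standard_uncertainty(process_uncertainty, required_ratio=4.0):
    """Largest expanded uncertainty the reference standard may carry
    while still preserving the required test uncertainty ratio (TUR)."""
    return process_uncertainty / required_ratio

# Hershal's example: a caliper calibration with a 600 microinch
# expanded uncertainty, working back to the gage blocks at 4:1.
print(max_standard_uncertainty(600))  # 150.0 microinches

# The same calibration held to the optimal 10:1 rule instead:
print(max_standard_uncertainty(600, required_ratio=10.0))  # 60.0 microinches
```

In other words, each step back down the traceability chain must be at least four (or ten) times better than the measurement it supports.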

Hope this helps.

Hershal
 

briggs_joe

sonflowerinwales said:
Daniel
Rounding of results is incorrect. The 9.86 mm and 10.12 mm are out of tolerance and should be recorded as such. If you are using a digital vernier, the resolution is 0.01 mm, but the accuracy is 0.02 mm according to the standards and the manufacturer's specification!
Paul

Unfortunately, I'm going to be one of those "shakers". :argue: :)

Mathematically, accuracy in engineering terms is determined by the specified number of digits in the variation specification. Rounding is acceptable if you do not violate the number of significant digits specified or implied.

So, for example, the generic specification "10 +/- 0.1" by itself implies that accuracy is being held to 1/2 of the specified error spec, or 0.05. Therefore, a measurement of 9.86 is an acceptable measurement per the specification.
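briggs_joe's reading — round the measurement to the spec's decimal places before comparing — can be sketched like this (the helper name is mine, and this implements the *contested* rounding interpretation, not a settled rule):

```python
def conforms_after_rounding(measured, nominal, tol, decimals):
    """Round the measurement to the spec's decimal places, then compare
    against the tolerance band. This is the rounding interpretation
    debated in this thread, not an industry-mandated method."""
    rounded = round(measured, decimals)
    return nominal - tol <= rounded <= nominal + tol

# Spec "10 +/- 0.1", written to one decimal place:
print(conforms_after_rounding(9.86, 10.0, 0.1, 1))   # True:  9.86 -> 9.9
print(conforms_after_rounding(10.12, 10.0, 0.1, 1))  # True:  10.12 -> 10.1
print(conforms_after_rounding(9.84, 10.0, 0.1, 1))   # False: 9.84 -> 9.8
```

Note that under this reading both of Daniel's borderline values (9.86 and 10.12) pass, which is exactly the point of disagreement with the strict comparison Jim describes below.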

Alternatively, a specification can be more explicit in two ways. One is to actually write out the significant digits you expect accuracy to, such as "10.0000 +/- 0.1000". The other is to state it: "10 +/- 0.1 measured to an accuracy of 0.0001".

This is probably one of the most confusing, and one of the most abused areas in requirements and specification writing in industry, and really does lead to a lot of problems trying to communicate intentions in requirements. :(

Always report your results to the same number of digits as in the specification and round them before recording, even if you know your equipment carries greater accuracy. The additional digits won't do anyone any good except create the kind of question being asked here.
 

Jim Wynne

Leader
Admin
briggs_joe said:
Mathematically, accuracy in engineering terms is determined by the specified number of digits in the variation specification. Rounding is acceptable if you do not violate the number of significant digits specified or implied.

Sorry, but this is just wrong. While the tolerance may be signified by the number of decimal places used (e.g., two decimal places = +/- .01), the need for accuracy is not, unless it's specifically stated. There is no intrinsic difference between .01 and .010, and if the specification says x +/- .01 and the measurement is x.011, the specification has been violated.

If you disagree, can you tell me where the limit on rounding is? In the example given above, if the measurement is x.01999, would you consider the result acceptable?
 

briggs_joe

Jim Wynne said:
If you disagree, can you tell me where the limit on rounding is? In the example given above, is x.01999, would you consider the result acceptable?

Actually, I did specify it. This result is not acceptable because it violates the half-digit accuracy implied in the specification. The spec says +/- 0.01; the result is off by 0.01999, which rounds to 0.02, therefore it fails.

Jim Wynne said:
Sorry, but this is just wrong. While the tolerance may be signified by the number of decimal places used (e.g., two decimal places = +/- .01 or) the need for accuracy is not, unless it's specifically stated.

Do you have any material reference for this? I'm not trying to be mean, I'm just not aware of any. I've never heard of some numbers being subject to rounding and others not.

The problem is not in the specification, it's in the implementation of it. There is no industry-standard requirement for what a measurement's precision must be relative to the measurement result itself (thus, the existence of MSA and this forum ;) ). So, if it is not specified, it could be 100x more precise, 10x, or even just 1x. If you compare a 1x measurement to a 100x measurement, you could easily fail the 100x measurement and pass the 1x measurement. For example, both could report 10.1 as a result, or one could report 10.1 and the other 10.001. Which is more right from a mathematical perspective? I'm purposely avoiding a quality perspective because of its subjective nature, and that wasn't what my original post spoke to.

Also, significant digits drive designs and implementations. If the designs are only good to 3 digits of precision, including rounding of all calculations, then a measured test value that violates the 4th digit of precision (or the 7th) does not impact the design unless the requirements were poorly specified.

I'm happy to move this topic off to another thread. I think it's extremely valuable as there are differing views and practices that lead to confusion. :confused:

Hopefully not ruffling feathers too much! :(
 

Jim Wynne

Leader
Admin
No ruffled feathers here, Joe. First, when it comes to engineering standards, there is no universally accepted source. There are published standards, but they are adopted by agreement between parties. Without at least two parties agreeing to abide by a standard, the standard is meaningless.

Next, unless there is some sort of explicit understanding between parties, numbers mean exactly what they say, nothing more and nothing less. .011 is greater than .01, and if a specification says that .01 is the limit and you measure .011, you've exceeded the limit, unless we have agreed to some other interpretation of boundaries beforehand. If you make $100,000 worth of parts that exceed the stated limit because you assumed something about rounding based on some ill-defined "engineering standard," and the parts don't work in end use, I hope you're hungry, because you're going to eat those parts :D .
 

Tim Folkerts

Trusted Information Resource
Let me add a few more stray thoughts.

1) It is common in college-level science courses to use "significant digits" to indicate an approximate level of precision. A number like "11" implies a precision of something like +/- 1 or +/- 0.5, while "11.0" would imply something like +/- 0.1 or +/- 0.05.

I have seen professors who seem to think "significant digits" is the last word in error analysis, and expect students to exactly follow the "significant digit" rules for all homework. In fact, it is simply a rule of thumb that works pretty well in many circumstances, but it is quite possible to find situations where the rules work quite poorly.

2) In the absence of any clarification, I would follow Jim's interpretation that +/- 0.1 means a deviation of 0.10001 is out of spec, while a deviation of 0.09999 is in spec. Of course, the best plan is to keep all the parts well away from the limits, so there is no question.

3) Economically, it may not be worth arguing about the parts right near the spec. You could easily spend more money determining whether the part is actually +0.09999 or +0.10001 than the part is worth. It also depends on whose responsibility it is: the producer's to assure that it is in spec, or the customer's to show that it is out of spec.

This is where the idea of guardbanding comes in. A conscientious producer might reject any parts at +0.10 because he isn't sure they are good. For internal purposes, the test criterion might be tightened to +/- 0.09. A conscientious customer might accept any parts at +0.10 because he isn't sure they are bad. For internal purposes, the test criterion might be loosened to +/- 0.11.
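Tim's guardbanding examples can be sketched as a single acceptance check (a hypothetical helper of my own, just to make the producer/customer asymmetry concrete):

```python
def accept(measured, nominal, tol, guard=0.0):
    """Pass/fail against a guardbanded tolerance band.
    A positive guard tightens the band (producer view: reject when unsure);
    a negative guard loosens it (customer view: accept when unsure)."""
    band = tol - guard
    return nominal - band <= measured <= nominal + band

# Tim's example around a 10 +/- 0.10 spec, part measured right at +0.10:
print(accept(10.10, 10.0, 0.10))              # True:  passes the raw spec
print(accept(10.10, 10.0, 0.10, guard=0.01))  # False: producer rejects at +/- 0.09
print(accept(10.10, 10.0, 0.10, guard=-0.01)) # True:  customer accepts at +/- 0.11
```

The same borderline part is rejected by the conscientious producer and accepted by the conscientious customer, which is exactly the point: the guard band absorbs the measurement uncertainty near the limit.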

4) Perhaps Taguchi had it right. Rather than an absolute limit for good/bad, a sliding scale might be more appropriate.

At one time, I was trying to figure out an "economic capability index". The loss function for bad parts is specified, and then the "economic capability index" is simply the cost of poor quality associated with the parts. If the parts are exactly at the target, the index is 0. The farther the parts are from ideal, the higher the average loss function. If the average cost of poor quality exceeds some value, then there is a penalty to the producer.

Determining the appropriate loss function would take a bit of thought (but that's what QE's are for) and calculating the results is somewhat involved (but that's what computers are for), but interpreting the results -money! - is simple (I guess that's what managers are for ;) ).
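A minimal sketch of Tim's idea, assuming the classic Taguchi quadratic loss function (the constant k and the function names here are illustrative, not from any standard):

```python
def taguchi_loss(x, target, k):
    """Quadratic loss: cost grows with the squared distance from target,
    rather than jumping from zero to full cost at a hard limit."""
    return k * (x - target) ** 2

def economic_capability_index(measurements, target, k):
    """Average cost of poor quality over a set of parts -- a rough
    stand-in for the 'economic capability index' described above."""
    return sum(taguchi_loss(x, target, k) for x in measurements) / len(measurements)

# Parts exactly on target incur zero loss; off-target parts cost more,
# even if they are still inside the spec limits.
print(economic_capability_index([10.0, 10.0], target=10.0, k=100.0))           # 0.0
print(round(economic_capability_index([10.05, 9.95], target=10.0, k=100.0), 4))  # 0.25
```

The output is in money, which is what makes it easy to interpret: compare the average loss against the penalty threshold instead of arguing over a part at +0.10001.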


Tim F
 

briggs_joe

I was speaking from a purely mathematical viewpoint. From that perspective, I still believe my statements are correct. It is an academic approach.

Now when it comes to business decisions, based on quality, cost and schedule, then there are many other things to consider, which I agree completely with Tim about.

In fact, at our facility, we implement guardbanding as a standard practice to ensure we have good test margin. We are also beginning to implement an outlier methodology that moves test limits based on a six-sigma passband which, for most of these measurements, will be well within the design specification limits. So whether one uses an absolute or a rounding approach to the passband limit evaluation has essentially no effect on product quality.
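One way such data-driven limits can be computed is sketched below, assuming a +/- 3-sigma passband around the process mean, clipped so it never exceeds the design spec (the function and the sample data are my own illustration of the approach, not briggs_joe's actual implementation):

```python
import statistics

def passband_limits(data, lsl, usl):
    """Test limits at mean +/- 3 sigma (a six-sigma-wide band),
    clipped to the design spec limits lsl/usl so the data-driven
    band can only tighten the spec, never loosen it."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    return max(lsl, mu - 3 * sigma), min(usl, mu + 3 * sigma)

# A tight process inside a 10 +/- 0.1 spec: limits come from the data.
lo, hi = passband_limits([9.98, 10.01, 10.00, 9.99, 10.02], 9.9, 10.1)
print(lo, hi)  # roughly 9.95 to 10.05, well inside the spec

# A noisy process: the band is clipped back to the design spec.
print(passband_limits([9.0, 11.0, 10.0], 9.9, 10.1))  # (9.9, 10.1)
```

With limits like these sitting well inside the drawing tolerance, the rounding-versus-absolute debate at the spec edge becomes largely moot, which is the point of the post above.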

I suppose, in the end, we're all attempting to achieve the same goal in much the same way. My greater concern is attempting to be more standardized in approaches and evaluations for those "simpler" things that auditors love to get ticky-tacky about because it's about all they can really understand in the short amount of time they take to review your processes and data.

So, in this particular case, industry would do well to make a common choice: either round always, or round never. Since most academic environments do promote rounding, I believe that makes the better choice from an engineering and business perspective. (Ever have arguments with your customer when they complain that the test report says the measurement is 10.11, the limit is 10.11 with a LL<=x<=UL eval, and it still says fail because the computer is hiding digits? Talk about a waste of time. ;) ) However, never rounding might make more sense in preventing audit issues. Neither one will, in the end, make a significant difference in overall product quality when combined with other elements such as guardbanding and six-sigma SPC.
 