Informational Control Chart Interpretation - General "Rules"

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
But there is still value in control charts, is there not?

Yes. I cannot vouch for Shewhart charts being valuable in such a situation, but the X hi/lo-R chart is very valuable: it clearly illustrates the variation during the process, shows whether that variation becomes significantly different (bad batch of tools, incorrect coolant mix, etc.), and serves as a baseline for process improvement (less breakage, a longer tool wear slope, etc.). It is also evidence that your process is making good parts, which is not a function of Shewhart charts. As Geoff accurately mentions, Shewhart charts have no connection to capability, so they cannot offer that benefit.

The X hi/lo-R chart is related to the specification, due to the nature of the different statistical distributions that apply in correctly charted tool wear applications. It also eliminates the sampling error of recording one value for a dimension (such as one diameter) when the feature has an infinite number of diameters. The variation on Shewhart charts is a combination of sampling error and measurement error, which radically masks the true process variation; the X hi/lo-R chart eliminates that problem.
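As a rough illustration of the data reduction described above, and only that: this sketch assumes the chart records the highest and lowest of several readings taken on each feature, which is an assumption on my part, not a rendering of the full X hi/lo-R method. The numbers and the four-readings-per-part setup are hypothetical.

```python
# Illustrative only: reduce several readings per feature to the
# X hi, X lo, and R values that the chart would plot.

parts = [
    [10.012, 10.015, 10.011, 10.014],  # diameter readings, part 1
    [10.013, 10.017, 10.012, 10.016],  # part 2
    [10.015, 10.019, 10.014, 10.018],  # part 3
]

for i, readings in enumerate(parts, start=1):
    hi, lo = max(readings), min(readings)
    print(f"part {i}: X hi = {hi:.3f}, X lo = {lo:.3f}, R = {hi - lo:.3f}")
```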
 

JuneFoo

Starting to get Involved
Well, this thread is about Control Chart Interpretation, so it would seem to apply in this case.


Sorry, Guy, I am confused about control chart interpretation and searching for help here!

To analyse a control chart, what is the difference between non-random patterns and sensitizing rules? Are they the same? I am getting confused between 7 points in a row on the same side and a 6-point trend!

When and how do I apply the rules? I don't know how to teach my QA inspectors to apply them!

Looking for help! Thanks!
 

Steve Prevette

Deming Disciple
Leader
Super Moderator

In my work with control charts, I keep it easy. A decision must be made by the analyst: are the data stable, predictable, in control, common cause? Or is there a signal, a trend, a special cause?

The list of rules you use to detect a signal is the basis for that decision. Different authors use different combinations of rules; just pick an author and stick with their set. The set I have had good success with (sketched in code below) is:

A point outside the control limits
Two of three points more than two standard deviations from the average, on the same side
Four of five points more than one standard deviation from the average, on the same side
Seven in a row above/below the average
Ten of eleven above/below the average
Seven in a row all increasing or all decreasing

In a nutshell:

Plot your data
Calculate the average and control limits
Evaluate against the set of rules you use
Identify any point or group of points that violates any of the rules
Investigate those points or groups
Apply corrective actions

If no such points or groups currently exist, analyze across all of the data for common issues
Improve the process
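Here is a minimal sketch of what those six checks can look like in code. This is not from any SPC package; the function name, the assumption that avg and sigma have already been estimated from the chart, and the strict inequalities are all illustrative choices.

```python
# A rough sketch of the six rules listed above. "avg" and "sigma" are
# assumed to be the chart's center line and estimated standard
# deviation; how you estimate them (e.g., R-bar/d2) is up to you.

def out_of_control_points(data, avg, sigma):
    """Return the indices of points involved in any rule violation."""
    flagged = set()
    n = len(data)

    # Rule 1: a point outside the 3-sigma control limits
    for i, x in enumerate(data):
        if abs(x - avg) > 3 * sigma:
            flagged.add(i)

    # Rules 2-5 share the form "m of n points beyond k sigma, same side"
    def m_of_n_beyond(m, window, k):
        for i in range(n - window + 1):
            for side in (+1, -1):
                hits = sum(1 for x in data[i:i + window]
                           if side * (x - avg) > k * sigma)
                if hits >= m:
                    flagged.update(range(i, i + window))

    m_of_n_beyond(2, 3, 2)     # two of three beyond two sigma
    m_of_n_beyond(4, 5, 1)     # four of five beyond one sigma
    m_of_n_beyond(7, 7, 0)     # seven in a row on one side of the average
    m_of_n_beyond(10, 11, 0)   # ten of eleven on one side of the average

    # Rule 6: seven in a row all increasing or all decreasing
    for i in range(n - 6):
        diffs = [data[j + 1] - data[j] for j in range(i, i + 6)]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            flagged.update(range(i, i + 7))

    return sorted(flagged)
```

Points that fire a rule are then the candidates for the "identify, investigate, apply corrective action" steps in the list above.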
 

Miner

Forum Moderator
Leader
Admin
The two better-known sets of rules are the original Western Electric rules and the Nelson rules. There are others, including the AIAG rules.

Most have the same set of 8 rules but differ in the number of points required to trigger a rule. The original set, the Western Electric rules, was derived empirically, that is, by trial and error, until they found rules that provided an acceptable balance between false alarms and the cost of failing to react to a process change. Nelson adjusted the number of points required to trigger each rule in order to provide an equal probability of a false alarm across all rules. This is the rule set used by Minitab.

Now for the part that they fail to teach you in any class or seminar. Nelson wrote a Technical Aid in ASQ's Journal of Quality Technology in October 1984 entitled "The Shewhart Control Chart - Tests for Special Causes." In it, he stated that Rules 1 through 4 should typically be used. If the cost of failing to respond to a process shift is significantly high, these can be supplemented by Rules 5 and 6. Finally, he stated that Rules 7 and 8 were intended only as diagnostic tools when first creating the chart: Rule 7 triggers when different process streams are mixed within a subgroup, and Rule 8 triggers when different process streams are mixed between subgroups. These two rules are therefore used to verify rational subgroups.
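For reference, a minimal sketch of those two diagnostic checks, assuming the usual Nelson formulations (Rule 7: fifteen consecutive points within one sigma of the center line; Rule 8: eight consecutive points all beyond one sigma, on either side). The function names are illustrative, not from Minitab or any other package.

```python
# Nelson's diagnostic rules, as commonly stated. "center" and "sigma"
# are the chart's center line and estimated standard deviation.

def rule7_stratification(points, center, sigma, run=15):
    """15 in a row hugging the center line: streams mixed within subgroups."""
    for i in range(len(points) - run + 1):
        if all(abs(x - center) < sigma for x in points[i:i + run]):
            return i  # start index of the offending run
    return None

def rule8_mixture(points, center, sigma, run=8):
    """8 in a row avoiding the center zone: streams mixed between subgroups."""
    for i in range(len(points) - run + 1):
        if all(abs(x - center) > sigma for x in points[i:i + run]):
            return i
    return None
```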
 

Miner

Forum Moderator
Leader
Admin
False Alarm Rates

Another topic that does not get covered in courses and seminars on SPC is the false alarm rate. Yes, most will tell you that the control limits are based on the normal distribution, and as such the probability that a process in a state of control will exceed +/- 3 sigma is 1 - 0.9973 = ~0.003, or roughly 3 false alarms per 1000. That doesn't sound too bad.

Next we get into a discussion about the use of the extended tests (the Western Electric (WECO) rules, the Nelson rules, the AIAG rules, et al.). What does not get covered is the cumulative error rate of applying multiple tests.

Let's use the Nelson rules, since, unlike the other sets, they have an equal false alarm rate for each rule. Nelson recommended use of the first four rules under normal situations. The reliability of one test is 0.9973. The serial reliability of four tests is 0.9973^4, or 0.9892. 1 - 0.9892 = 0.0108, or 11 false alarms per 1000.

Now let's add in two more rules. The serial reliability of six tests is 0.9973^6, or 0.9839. 1 - 0.9839 = 0.0161, or 16 false alarms per 1000.

Now use all 8 rules. The serial reliability of eight tests is 0.9973^8, or 0.9786. 1 - 0.9786 = 0.0214, or 21 false alarms per 1000.

The original 3 false alarms per 1000 has increased seven-fold by using all 8 rules.

The point of this is that you should not automatically apply all 8 rules. KNOW your process. Some processes change very slowly, creating trends; fine, use the trend rule. Another process may never trend but only make sudden shifts; do not use the trend rule for that process. Look at the history of the process and the probable sources of special cause variation, and use this to decide which rules are appropriate.
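The serial reliability arithmetic above is easy to reproduce. A quick sketch (the 0.9973 per-test reliability assumes an in-control, normally distributed process, per the note above):

```python
# Compound the per-test reliability across k independent tests.
per_test = 0.9973  # chance a single 3-sigma-equivalent rule does NOT fire

for k in (1, 4, 6, 8):
    reliability = per_test ** k
    false_alarms = (1 - reliability) * 1000
    print(f"{k} rule(s): reliability {reliability:.4f}, "
          f"~{false_alarms:.0f} false alarms per 1000")

# Prints roughly:
# 1 rule(s): reliability 0.9973, ~3 false alarms per 1000
# 4 rule(s): reliability 0.9892, ~11 false alarms per 1000
# 6 rule(s): reliability 0.9839, ~16 false alarms per 1000
# 8 rule(s): reliability 0.9786, ~21 false alarms per 1000
```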
 

Steve Prevette

Deming Disciple
Leader
Super Moderator
Thanks, Miner, a good point on the application of multiple rules: you increase the chance of a false alarm. Acheson Duncan (the author I follow for SPC) makes that point as well.

One thing to realize about the various rule sets is: does the author assume normality, or do they use the Chebyshev inequality? Shewhart did NOT invoke normality. It appears Nelson did, and your calculations are based upon normality.

If you do NOT assume normality, the false alarm rates are much lower. One tends to use a lower-count criterion (such as 7 in a row on the same side of the average) when assuming Chebyshev, while 9 in a row matches the probability of exceeding three standard deviations from the average when assuming normality.
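A rough sketch of the numbers behind that comparison. The run-rule figures assume each point independently falls on either side of the average with probability 1/2 (true for any symmetric in-control distribution, not just the normal); the Chebyshev figure is the distribution-free upper bound, not an actual rate.

```python
from math import erf, sqrt

# P(|X - mu| > 3 sigma) for a normal distribution
p_normal_3s = 1 - erf(3 / sqrt(2))   # ~0.0027

# Chebyshev guarantees P(|X - mu| > k sigma) <= 1/k^2 for ANY distribution
p_chebyshev_3s = 1 / 3**2            # <= 0.1111

# Runs on one side of the average (counting both sides)
p_run7 = 2 * (1 / 2) ** 7            # ~0.0156
p_run9 = 2 * (1 / 2) ** 9            # ~0.0039, close to the 3-sigma 0.0027

print(p_normal_3s, p_chebyshev_3s, p_run7, p_run9)
```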

I will say that my primary work is in the safety and quality area, and we would generally prefer to err on the side of more false alarms rather than missed detections. And yes, one needs to find the cause of the alarm prior to taking any action.
 

Dinesh Deshpande

Hi Tim,

I am working on a process capability analysis using Minitab and have a question regarding process capability: if the process data do not follow a normal distribution, does that mean the process is not stable? And can I assume it is not in control as well?

Regards

Dinesh Deshpande
Statistical Analyser
 