Informational Control Chart Interpretation - General "Rules"

Tim Folkerts

Trusted Information Resource
I like the idea of using dice. A nice, visceral approach.

After you have established the general idea, have you considered giving someone 8-sided or 10-sided dice? Or perhaps make some dice numbered (1,1,1,6,6,6) or (2,3,4,5,6,7). It would be an easy way to simulate the process going out of control, and you should quickly break some of the rules.
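For what it's worth, a few lines of Python can stand in for the physical dice. This is just a rough sketch of the idea - the sample sizes, the choice of trick die, and the run-of-eight rule checked here are my own arbitrary choices, not part of any particular training exercise:

import random

def totals(faces, n):
    # Sum of two dice with the given faces, thrown n times
    return [random.choice(faces) + random.choice(faces) for _ in range(n)]

fair = [1, 2, 3, 4, 5, 6]
shifted = [2, 3, 4, 5, 6, 7]   # one of the trick dice suggested above

# Establish the centre line and 3-sigma limits from the fair dice
base = totals(fair, 200)
mean = sum(base) / len(base)
sd = (sum((x - mean) ** 2 for x in base) / (len(base) - 1)) ** 0.5
ucl, lcl = mean + 3 * sd, mean - 3 * sd

# Then throw the trick dice against those limits and watch for signals
run_above = 0
for i, x in enumerate(totals(shifted, 50), 1):
    run_above = run_above + 1 if x > mean else 0
    signals = []
    if x > ucl or x < lcl:
        signals.append("beyond 3-sigma")
    if run_above >= 8:
        signals.append("8 in a row above the centre line")
    print(f"throw {i:2d}: {x:2d}  {', '.join(signals)}")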

Tim F
 

Jim Wynne

Leader
Admin
Tim Folkerts said:
I like the idea of using dice. A nice, visceral approach.

After you have established the general idea, have you considered giving someone 8-sided or 10-sided dice? Or perhaps make some dice numbered (1,1,1,6,6,6) or (2,3,4,5,6,7). It would be an easy way to simulate the process going out of control, and you should quickly break some of the rules.

Tim F

The dice analogy is good for building a picture. About 15 years ago I wrote a simple BASIC program that simulated dice-tossing for training, and was able to show graphs of the outcomes after 10, 100, 1000 and 100,000 throws. Of course, the 1000 graph wasn't much different from the 100,000, but it served the purpose of showing that the more data you have, the more accurate your predictions will be.
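These days the same demonstration is a few lines of Python rather than BASIC. A rough equivalent (the text-bar scaling is arbitrary; it just shows the share of throws landing on each total):

import random
from collections import Counter

for n in (10, 100, 1000, 100_000):
    counts = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(n))
    print(f"\n{n} throws:")
    for total in range(2, 13):
        share = counts.get(total, 0) / n
        print(f"{total:3d} | {'#' * round(share * 100)}  {share:.3f}")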
 

e006823


Steve,

Very informative document. Is there a reason that you use the sample standard deviation when calculating the control limits rather than using Rbar and the subgroup constant? Everything I've read has suggested that using the standard deviation results in inflated control limits.

Regards,
 

Steve Prevette

Deming Disciple
Leader
Super Moderator
e006823 said:
Is there a reason that you use the sample standard deviation when calculating the control limits rather than using Rbar and the subgroup constant? Everything I've read has suggested that using the standard deviation results in inflated control limits.

There are a few reasons I have chosen to use the sample standard deviation.

1. The sample standard deviation is the Maximum Likelihood Estimator and is unbiased. It is the most "powerful" of the available ways to estimate the standard deviation.

2. Using the Range requires you to assume Normality; the various d2 values are derived under that assumption.

3. Shewhart compared the methods for estimating the standard deviation and made an argument for using the sample standard deviation in Economic Control of Quality of Manufactured Product.

4. There is some indication in the older literature that the Range was chosen because the standard deviation could be estimated from it with just a slide rule and an adding machine. Now it is actually harder in Excel to calculate using the Range (but not that hard, you just have to set up your formula) than using the sample standard deviation.

5. My initial work at Hanford was in cycle times of work packages. At first I did Xbar - R in subsamples of 5. Management was very confused. They were used to managing by the calendar month, and depending on how many work packages were completed, there were differing numbers of data points on each monthly update. They wanted a monthly increment. So, we (Phil Monroe and I) shifted the charts to plot the average cycle time for the month, and then we took the standard deviation of the monthly averages (a rough comparison of the two routes is sketched just after this list).
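To make the two routes concrete, here is a rough sketch in Python. The data are made up, and A2 = 0.577 is the usual constant for subgroups of five; this is only an illustration of the comparison, not the Hanford Primer's procedure:

import random
import statistics

# Made-up cycle times, grouped into subgroups of 5 (one subgroup per "month")
subgroups = [[random.gauss(20, 4) for _ in range(5)] for _ in range(12)]

means = [statistics.mean(g) for g in subgroups]
grand_mean = statistics.mean(means)

# Route 1: Rbar and the subgroup constant (A2 = 0.577 for subgroups of 5)
rbar = statistics.mean(max(g) - min(g) for g in subgroups)
ucl_r, lcl_r = grand_mean + 0.577 * rbar, grand_mean - 0.577 * rbar

# Route 2: three times the sample standard deviation of the subgroup averages
s = statistics.stdev(means)
ucl_s, lcl_s = grand_mean + 3 * s, grand_mean - 3 * s

print(f"Rbar / A2 limits      : {lcl_r:6.2f} to {ucl_r:6.2f}")
print(f"stdev-of-means limits : {lcl_s:6.2f} to {ucl_s:6.2f}")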

Here are the disadvantages of using sample standard deviation:

1. Almost all of the SPC literature (including Dr. Wheeler) tells you to use the Range.

2. Taking the monthly averages and then taking the standard deviation of those averages loses a lot of information and some of the "power" in the data. On some charts I have experimented with displaying 3-standard-deviation limits based upon the sample standard deviation within the month. Ideally I should also make use of the within-month variation in establishing the standard deviation; Dr. Wheeler definitely points to this disadvantage. But I have yet to find a way to do it that is comprehensible to the people using the charts.

3. If you inadvertently leave an outlier in the data, the standard deviation estimate will be inflated, because the distances from the mean are squared rather than simply summed (the small example below shows the effect).
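A small made-up example of that squaring effect - the numbers are invented, and the moving-range estimate is shown only for contrast:

import statistics

def mr_sigma(data):
    # Sigma estimated from the average moving range (d2 = 1.128 for n = 2)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    return statistics.mean(moving_ranges) / 1.128

clean = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1]
dirty = clean + [15.0]   # the same data with one stray outlier left in

for label, data in (("clean", clean), ("with outlier", dirty)):
    print(f"{label:13s} sample s = {statistics.stdev(data):.2f}   "
          f"moving-range sigma = {mr_sigma(data):.2f}")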

A few final comments:

1. The vast majority of my charts are p, c, or u due to my work with safety and quality data. Not many charts are x charts.

2. On the x charts I do have, at one point about four years ago I calculated what the moving range gave me. In all cases there was no difference in the interpretation of what the data were telling you. That is, in no case did the range-based limits move inwards enough to suddenly tell me I now had a signal (a rough version of that comparison is sketched after this list).

3. The Hanford Primer does "allow" you to use xbar R or moving range. I admit I don't go into much detail about it, but it is easy enough to find.
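The kind of side-by-side check I mean is easy to reproduce. A rough sketch with made-up data (2.66 is the usual XmR constant, 3 / 1.128; none of this is from my actual Hanford charts):

import random
import statistics

x = [random.gauss(50, 5) for _ in range(36)]   # made-up monthly values
mean = statistics.mean(x)

# Limits from three times the sample standard deviation
s = statistics.stdev(x)
s_limits = (mean - 3 * s, mean + 3 * s)

# Limits from the average moving range (XmR style; 2.66 = 3 / 1.128)
mr = statistics.mean(abs(b - a) for a, b in zip(x, x[1:]))
mr_limits = (mean - 2.66 * mr, mean + 2.66 * mr)

for name, (lcl, ucl) in (("sample stdev", s_limits), ("moving range", mr_limits)):
    outside = sum(1 for v in x if v < lcl or v > ucl)
    print(f"{name:13s} limits {lcl:6.1f} to {ucl:6.1f}   points outside: {outside}")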
 

M Greenaway

Sorry, how do you get a bell-shaped curve on the throw of two dice?

I guess you just total the dice and record the number of times each total is rolled - this would not be a normal distribution. Each combination is open to the same chance probability as the next, so why would 7s occur more often than any other number?
 

Jim Wynne

Leader
Admin
M Greenaway said:
Each combination is open to the same chance probability as the next, so why would 7s occur more often than any other number?
Because there are more combinations that can result in 7 than in any other total:
4+3
5+2
6+1

Even though each toss is a random event, there are internal constraints that control probability. The purpose of the demonstration (as I saw it) was just to demonstrate the predictability of the process, something well known to casino operators :D . The moral of the story for me was that the goal in manufacturing processes (and the purpose of SPC) should be to identify and eliminate variation, thus enhancing the ability to predict process output by controlling sources of variation, as opposed to measuring parts ad infinitum.
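Enumerating the combinations makes the point directly. A few lines of Python (mine, just for illustration) count how many of the 36 equally likely ordered outcomes give each total - 7 gets six of them, more than any other total:

from collections import Counter

# Count how many of the 36 equally likely ordered outcomes give each total
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
for total in range(2, 13):
    print(f"{total:3d}: {counts[total]:2d} of 36  {'#' * counts[total]}")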
 

SPC_Newbie

Does anyone have access to the document that is pointed to here: http://www.hanford.gov/safety/vpp/trend.htm (dead 404 link, unlinked)

It appears to be inactive
 