Informational Control Chart Interpretation - General "Rules"

Steve Prevette

Deming Disciple
Leader
Super Moderator
Yes, it went inactive about a year after I moved from the Hanford site to the Savannah River Site.

Not to worry, all is being kept at http://www.efcog.org/wg/esh_es/Statistical_Process_Control/index.htm (link now dead, returns a 404).

Since this is a site that covers all Department of Energy sites, this should be as permanent a repository as I can find.
 

Coleman Donnelly

I have found this thread to be very helpful and informative as we are currently trying to better understand what a "stable process" means.

It appears that here a stable process is one that fits an X-bar & R chart, under the assumption that for the process to be stable it must follow a normal distribution.

What if, however, the process is not normal, and there are dominant factors in the process that produce trending (or break other rules)? How can we determine and show that a process is stable in these situations, where those factors cannot be removed as a special cause and instead have to be controlled as part of the process?
 

Steve Prevette

Deming Disciple
Leader
Super Moderator
There has been a lot of material by various authors about "what about other-than-normal distributions" and SPC. I'll just state: review the original works by Dr. Shewhart. He developed SPC based on the Tchebychev Inequality, which is independent of distribution ("non-parametric"), and he tested SPC against the normal curve, a square distribution, and a triangular distribution.
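For what it's worth, here is a rough simulation of my own (a sketch, not anything taken from Shewhart's book) showing why that works: the Tchebychev Inequality guarantees at least 1 - 1/9, about 89%, of points inside 3-sigma limits for ANY distribution, and the three distributions Shewhart tested all come out well above that bound.

# Rough simulation: coverage of 3-sigma limits for normal,
# rectangular ("square"), and triangular distributions.
# Tchebychev guarantees at least 1 - 1/k^2 within k sigma of the mean.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

distributions = {
    "normal":      rng.normal(0.0, 1.0, n),
    "rectangular": rng.uniform(-1.0, 1.0, n),
    "triangular":  rng.triangular(-1.0, 0.0, 1.0, n),
}

print("Tchebychev bound for k = 3:", 1 - 1 / 3**2)   # about 0.889
for name, x in distributions.items():
    mean, sigma = x.mean(), x.std()
    inside = np.mean(np.abs(x - mean) <= 3 * sigma)
    print(f"{name:12s} fraction within 3-sigma limits: {inside:.4f}")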
 

Coleman Donnelly

Can you recommend any reading that specifically deals with Shewhart's SPC based on the Tchebychev Inequality?
 

Steve Prevette

Deming Disciple
Leader
Super Moderator
Can you recommend any reading that specifically deals with Shewhart's SPC based on the Tchebychev Inequality?


Yes, "Economic Control of Quality of Manufactured Product" by Dr. Shewhart. A 50th anniversary edition (now 30 years old) was published by ASQ and I believe is still available. It is well worth going back to the "horse's mouth" and reading the ORIGINAL concept.
 

Coleman Donnelly

Thank you very much for all of the responses. I will be ordering the book shortly, and I have just finished reading the article and all the related articles that were attached by Dr. Don Wheeler...

So my question is - when does it become appropriate to transform the data?

My current plan is to chart the data and determine if the process is stable. If it is, then next I will need to generate Cp and Cpk values for my process to identify if the process is capable. If my data are not normal, at this point do I run the Anderson-Darling (AD) test and determine if my p-value is greater than 0.05, OR should I run all available transformations of my data, find the highest p-value, and use that transformed data to calculate my capability index?
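(For concreteness, here is a minimal sketch of what that AD check could look like; SciPy is an assumption on my part, it is not named anywhere in this thread, and the data are made up.)

# Hypothetical illustration of an Anderson-Darling normality check.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.lognormal(mean=0.0, sigma=0.5, size=50)   # deliberately non-normal

result = stats.anderson(data, dist='norm')
# SciPy reports the AD statistic plus critical values at fixed
# significance levels rather than a p-value.
crit_5pct = result.critical_values[list(result.significance_level).index(5.0)]
print("AD statistic:", result.statistic)
print("5% critical value:", crit_5pct)
print("Reject normality at the 5% level?", result.statistic > crit_5pct)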

Does it make sense to apply the Central Limit Theorem to make my data more normal for control purposes, or does this again distort the voice of the process?
 

Steve Prevette

Deming Disciple
Leader
Super Moderator
So my question is - when does it become appropriate to transform the data?

If you believe Dr. Wheeler and Dr. Shewhart - NEVER. Keep the data as they are; they are easier to interpret, and it is harder to inadvertently distort them.

Of course, it is hard to say NEVER. Even Dr. Wheeler has a technique for analyzing infrequent events, where rather than counting the number of events per time interval, you plot the quantity 365 / (days between events). By plotting this rate, you are actually doing close to an exponential transformation, which would be appropriate for time-between-events data.
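As a small illustration of that rate calculation (the event dates below are made up for the example):

# Rare-event rate: instead of counting events per period,
# plot 365 / (days between successive events).
from datetime import date

events = [date(2011, 1, 14), date(2011, 3, 2), date(2011, 3, 28),
          date(2011, 7, 9), date(2011, 11, 30)]

for earlier, later in zip(events, events[1:]):
    days_between = (later - earlier).days
    annualized_rate = 365 / days_between   # events per year, based on this gap
    print(f"{later}  gap = {days_between:3d} days  rate = {annualized_rate:.2f}/yr")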

My current plan is to chart the data and determine if the process is stable.

Good, always an appropriate first step.

If it is, then next I will need to generate Cp and Cpk values for my process to identify if the process is capable.

I'm not a big fan of those values - one should be able to look at the control chart (especially the limits) and compare them to specification and customer feedback to see if improvement is needed. A Cp of 2 is NOT a "magic number". Boeing goes to a Cp of 3 (nine sigma) on some critical airplane components.
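For reference, a quick sketch of the arithmetic behind those indices (the measurements and specification limits below are made up): a Cp of 2 corresponds to spec limits at plus-or-minus six sigma, and a Cp of 3 to plus-or-minus nine sigma.

# Illustrative Cp / Cpk calculation on made-up data.
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(loc=10.0, scale=0.05, size=200)   # hypothetical measurements
lsl, usl = 9.7, 10.3                                # hypothetical spec limits

mean = data.mean()
sigma = data.std(ddof=1)

cp  = (usl - lsl) / (6 * sigma)                     # spec width vs. six sigma
cpk = min(usl - mean, mean - lsl) / (3 * sigma)     # accounts for centering
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")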

If my data are not normal, at this point do I run the AD test and determine if my p-value is greater than 0.05, OR should I run all available transformations of my data, find the highest p-value, and use that transformed data to calculate my capability index?

My opinion is that the gain would not be worth the pain.

Does it make sense to apply the Central Limit Theorem to make my data more normal for control purposes, or does this again distort the voice of the process?

I think in asking the question you already know the answer :) Yes, it would be my opinion that it distorts the voice of the process. Of course, you may then ask about X-bar & R control charting. The use of the X-bar is NOT to invoke the CLT, but to determine whether you have within-group versus between-group variation.
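A minimal sketch of that X-bar & R calculation (made-up data; A2, D3, D4 are the standard tabled constants for subgroups of five):

# X-bar & R chart limits for subgroups of size 5. The point is to
# compare between-subgroup variation (spread of the subgroup averages)
# against within-subgroup variation (captured by the average range),
# not to invoke the CLT.
import numpy as np

rng = np.random.default_rng(5)
subgroups = rng.normal(50.0, 2.0, size=(25, 5))    # 25 subgroups of 5 (made up)

xbars  = subgroups.mean(axis=1)
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
xbar_bar, r_bar = xbars.mean(), ranges.mean()

A2, D3, D4 = 0.577, 0.0, 2.114                     # tabled constants for n = 5

print("X-bar chart (LCL, CL, UCL):",
      xbar_bar - A2 * r_bar, xbar_bar, xbar_bar + A2 * r_bar)
print("R chart    (LCL, CL, UCL):", D3 * r_bar, r_bar, D4 * r_bar)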

By the way, my parents and brother live in the Cleveland area (Macedonia and Twinsburg, respectively), so I make it up your direction on occasion.
 