SPC (Statistical Process Control) Overview

KCIPOH

Hello Steve,

Can you explain the meaning of "the baseline is innocent until proven guilty"?

How do we say it is proven guilty?

I am confused and would appreciate your explanation. :confused:

Thank You

Regards
Daniel
 

Steve Prevette

Deming Disciple
Leader
Super Moderator
Ah yes. The quote comes from a course I took from Davis Balestracci.

"A baseline is innocent until proven guilty"

One of the most confusing topics to newcomers to SPC is - when should I shift the baseline average and its associated UCL and LCL to follow a change in the data?

Some people say - never! I haven't given permission for the process to change, I don't accept the change! However, if the job of SPC is to predict future performance, and the data have shifted such that the baseline no longer predicts future performance, one eventually needs to follow the indication.

Some people say - I just implemented a change! I should rebaseline now! Well - what if the change was ineffective? You may end up "tampering" with the chart (see the Funnel Experiment), and it will not be effective at prediction either.

Davis made the point - ONLY consider rebaselining a chart if FIRST you have at least one "trend signal" / "out of control condition". Then you may consider going to a new baseline. But generally one should also know WHY the process data shifted, and have some indication that it is a permanent shift.
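
For the curious, here is a minimal Python sketch of that rule - my own illustration, not from Davis's course. The 2.66 factor is the standard individuals-chart constant for moving-range-based limits; the run length of 8 is one common convention (some use 7 or 9):

```python
from statistics import mean

def xmr_limits(data):
    """Individuals (XmR) chart limits: centerline +/- 2.66 * average moving range."""
    mr = [abs(a - b) for a, b in zip(data[1:], data[:-1])]
    center = mean(data)
    width = 2.66 * mean(mr)
    return center - width, center, center + width

def has_signal(data, lcl, center, ucl, run_length=8):
    """True if any point falls outside the limits, or a run of `run_length`
    consecutive points sits on one side of the centerline."""
    if any(x < lcl or x > ucl for x in data):
        return True
    run, side = 0, 0
    for x in data:
        s = 1 if x > center else -1 if x < center else 0
        run = run + 1 if (s == side and s != 0) else (1 if s != 0 else 0)
        side = s
        if run >= run_length:
            return True
    return False

def maybe_rebaseline(baseline, new_data, cause_known_and_permanent):
    """Only consider new limits if a signal fired AND the shift is understood."""
    lcl, center, ucl = xmr_limits(baseline)
    if has_signal(new_data, lcl, center, ucl) and cause_known_and_permanent:
        return xmr_limits(new_data)   # candidate new baseline
    return lcl, center, ucl           # keep the "innocent" baseline
```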

On the web site, the presentation on Life Cycle of a Trend (and the paper is posted here on the Cove someplace) goes through this in more detail, with examples.
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
But generally one should also know WHY the process data shifted, and have some indication that it is a permanent shift.

I agree, and that is why I always recommend preparing a total variance equation ahead of time, so that the sources of variation have been considered. When a shift or trend or other change (that is unexpected) occurs, rather than running around like a chicken with your head cut off, you look through your total variance equation or CNX chart or fishbone diagram - whichever you prefer - and look for the possible causes. Consider this advanced variation list development similar to PFMEA - where you take the time to think about your process ahead of time rather than just diving into it.

If you determine and understand the cause, and it fits within your concept of the process, you may then accept a chart shift or recalculation of limits. Your initial data may not have included long term variation, such as raw material lot variation, etc. It will only be seen - and dealt with - in time.
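
A minimal sketch of what such a "total variance equation" can look like in Python. The component names and values here are hypothetical, and the key assumption is that the sources are independent, so their variances (not their standard deviations) add:

```python
import math

# Hypothetical variance components (units^2) identified ahead of time,
# e.g. from a CNX chart or fishbone session. Names and values are
# illustrative only.
variance_components = {
    "gage repeatability": 0.0004,
    "within-lot material": 0.0009,
    "lot-to-lot material": 0.0025,
    "tool wear":           0.0016,
}

# For independent sources, variances add; standard deviations do not.
total_variance = sum(variance_components.values())
total_sigma = math.sqrt(total_variance)

for name, v in sorted(variance_components.items(), key=lambda kv: -kv[1]):
    print(f"{name:22s} {v / total_variance:6.1%} of total variance")
print(f"total sigma = {total_sigma:.4f}")
```

When an unexpected shift appears, ranking the components like this gives you an ordered list of suspects instead of a blank page.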
 

Jim Wynne

Leader
Admin
I agree, and that is why I always recommend preparing a total variance equation ahead of time, so that the sources of variation have been considered. When a shift or trend or other change (that is unexpected) occurs, rather than running around like a chicken with your head cut off, you look through your total variance equation or CNX chart or fishbone diagram - whichever you prefer - and look for the possible causes. Consider this advanced variation list development similar to PFMEA - where you take the time to think about your process ahead of time rather than just diving into it.

If you determine and understand the cause, and it fits within your concept of the process, you may then accept a chart shift or recalculation of limits. Your initial data may not have included long term variation, such as raw material lot variation, etc. It will only be seen - and dealt with - in time.

This is the essence of it - sometimes the mean changes, but if you just recalculate control limits without understanding why it changed, the proverbial baby is thrown out with the bath water. This is also the fundamental folly of the Six Sigma 1.5-sigma shift. We are led to believe that the process mean can drift a significant amount over time without anyone noticing that something has happened.
 
artichoke

This is also the fundamental folly of the Six Sigma 1.5-sigma shift. We are led to believe that the process mean can drift a significant amount over time without anyone noticing that something has happened.

The folly of six sigma is really much worse. It's interesting how these things are like a game of "Chinese whispers". The original claim was a "proof" by Mikel Harry that all processes drift by 1.5 sigma in 24 hours. Harry based his original "proof" on errors in the height of stacks of discs, which of course bears no relation whatsoever to real processes. He later said that the shift was "empirical" and that the proof was not needed. In 2003 Harry pulled a new "proof" out of the ether on a completely different basis and this time called it a "correction". My papers show it was just as much nonsense as the original. Harry's offsider Reigle Stewart changed the "proof" again to be a "dynamic mean offset". All of these "proofs" are easily shown to be invalid.

Despite the ridiculous basis for 6 sigma, almost no one has bothered to check the facts behind the scam, although one has to do considerable digging to find the original papers. As a result, billions of dollars have been wasted on a methodology built on rubbish.
 

Steve Prevette

Deming Disciple
Leader
Super Moderator
It occurred to me that perhaps the question was really about "innocent until proven guilty". That is an American legal phrase, to my knowledge. It simply means we assume an accused criminal is innocent until proven guilty (rather than guilty until proven innocent).

In this SPC parlance - if we have a baseline/UCL/LCL that we have established, we assume it is good - until it is proven guilty by receiving an "out of control" signal.

On 1.5 sigma - it's long been my theory that the real "problem" being seen was that the observed failure rate at 6 sigma from the average was not the failure rate predicted by the normal distribution. An alternate theory, rather than the 1.5 sigma shift, would have been to acknowledge that the data likely weren't normally distributed and that the observed tail was a bit "thicker" than the normal curve. As has been stated countless times here on the Cove, SPC does not depend upon normality. If only Mikel Harry et al had heard of the Tchebychev Inequality.
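
For anyone who wants to see the gap, here is a small Python comparison of the exact normal tail probability with the distribution-free Chebyshev bound. At 6 sigma the bound is roughly seven orders of magnitude looser than the normal prediction - plenty of room for a "thick" tail without inventing a mean shift:

```python
import math

def normal_two_sided_tail(k):
    """P(|X - mu| >= k*sigma) if X is exactly normal."""
    return math.erfc(k / math.sqrt(2))

def chebyshev_bound(k):
    """Distribution-free upper bound on P(|X - mu| >= k*sigma)."""
    return 1.0 / (k * k)

for k in (3, 4.5, 6):
    print(f"k={k}: normal {normal_two_sided_tail(k):.3e}, "
          f"Chebyshev bound {chebyshev_bound(k):.3e}")
```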
 
artichoke

An alternate theory, rather than the 1.5 sigma shift, would have been to acknowledge that the data likely weren't normally distributed and that the observed tail was a bit "thicker" than the normal curve. As has been stated countless times here on the Cove, SPC does not depend upon normality. If only Mikel Harry et al had heard of the Tchebychev Inequality.

Yes, one never knows what the data distribution is, and one does not need to. Thicker tails, however, are not always the case - consider time-based distributions (time to answer the phone in a call centre, for example). These have one fat tail while the other is truncated. Shewhart charts still work well nonetheless.

Wheeler's "Normality and the Process Behavior Chart" is a great read. See p. 88 and his comments on Chebychev vs actual distributions.
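
A quick simulation sketch of that call-centre case - exponential data as a stand-in for answer times; the distribution, rate, and seed are arbitrary choices of mine. It prints the fraction of points beyond the upper XmR limit, which comes out around a few percent: higher than the 0.135% a normal tail would predict, but nowhere near the Chebyshev ceiling, so the chart remains usable:

```python
import random
from statistics import mean

random.seed(1)

# Heavily skewed "time to answer" data: one fat tail, the other truncated at zero.
data = [random.expovariate(1.0) for _ in range(10_000)]

mr = [abs(a - b) for a, b in zip(data[1:], data[:-1])]
center = mean(data)
width = 2.66 * mean(mr)
ucl, lcl = center + width, max(center - width, 0.0)

beyond = sum(x > ucl for x in data) / len(data)
print(f"UCL = {ucl:.2f}, fraction beyond UCL = {beyond:.3%}")
```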
 
pmg-20130220

SPC (Statistical Process Control) is a method of monitoring a process during its operation in order to control the quality of the products while they are being produced, rather than relying on inspection to find problems after the fact. It involves gathering information about the product, or the process itself, on a near real-time basis so that the operator can take action on the process. This is done in order to identify special causes of variation and other non-normal processing conditions, thus bringing the process under statistical control and reducing variation.
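
As a concrete illustration - the values and the limit formula below are a generic individuals-chart sketch, not tied to any particular SPC package:

```python
from statistics import mean

# Baseline measurements collected while the process was running normally
# (values are made up for illustration).
baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 10.0, 10.2]

mr = [abs(a - b) for a, b in zip(baseline[1:], baseline[:-1])]
center = mean(baseline)
width = 2.66 * mean(mr)          # individuals-chart limit width
lcl, ucl = center - width, center + width

def check(x):
    """Called for each new measurement as it arrives."""
    if x < lcl or x > ucl:
        print(f"{x}: special-cause signal - investigate the process")
    else:
        print(f"{x}: common-cause variation - leave the process alone")

for x in (10.1, 9.9, 11.9):      # the last point simulates a special cause
    check(x)
```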
 
learner sun

I am new to this forum.
I have a question: do we need to fix the control limits after 25 subgroups of data, or do we always let the SPC software automatically calculate the control limits?

Note: I am using SPC software for process control/monitoring.
 

Bev D

Heretical Statistician
Leader
Super Moderator
Welcome :bigwave:

You should NOT continually recalculate control limits. This defeats the entire purpose of control charts. Resetting can create limits that accommodate trends and small shifts.

You should set the limits on a stable baseline period and only recalculate when a known improvement has been implemented and the improvement is evident in the chart.
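
A small Python sketch of why freezing matters (made-up numbers, XmR-style limits): a sustained shift fires signals against frozen baseline limits, but recalculating over all the data pulls the center line up with the shift and the same points look "in control":

```python
from statistics import mean

def limits(data):
    """XmR-style limits: center +/- 2.66 * mean moving range."""
    mr = [abs(a - b) for a, b in zip(data[1:], data[:-1])]
    return mean(data), 2.66 * mean(mr)

baseline = [10.0, 10.2, 9.9, 10.1, 9.8, 10.0, 10.1, 9.9, 10.2, 10.0]
shifted = [x + 0.7 for x in baseline]   # a sustained process shift

# Frozen limits: computed once from the stable baseline, then held fixed.
center, width = limits(baseline)
print(sum(abs(x - center) > width for x in shifted), "signals with frozen limits")

# Continual recalculation: the shifted data drags the center line up with it.
center2, width2 = limits(baseline + shifted)
print(sum(abs(x - center2) > width2 for x in shifted), "signals after recalculating")
```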
 