Attribute Gage R&R parts

Burnett

Registered
We’re doing an attribute GR&R on a box that is supposed to distinguish good parts from bad parts. Unfortunately, we have no way to determine the true goodness or badness of the parts other than the box itself. For the purpose of the gage R&R, I’m suggesting that what matters is only that the box consistently reports the “bad” parts as bad and the “good” parts as good; if it sometimes reports suspected “bad” parts as “good” and vice versa, then the box fails the gage R&R, regardless of the true goodness or badness of the part. In other words, the box’s consistency needs to be resolved independently of the true condition of the parts, which can be determined later once the consistency question is settled. Is this sound reasoning?
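
To make the idea concrete, here is a minimal sketch of the kind of consistency check I have in mind, assuming each part is run through the box several times and only the box’s own pass/fail verdicts are recorded. The part names, trial counts, and verdicts below are hypothetical; the point is just that no reference standard is needed to score repeatability.

```python
# Minimal repeatability check with no reference standard: a part is
# "consistent" only if the box returns the same verdict on every trial.
# All part names and verdicts below are made-up illustration data.
from collections import Counter

# Box verdicts per part over repeated trials: True = pass, False = fail.
trials = {
    "cable_01": [True, True, True],
    "cable_02": [True, False, True],   # inconsistent -> repeatability problem
    "cable_03": [False, False, False],
}

def within_part_agreement(results):
    """Fraction of trials that match the part's majority verdict."""
    counts = Counter(results)
    return counts.most_common(1)[0][1] / len(results)

for part, results in trials.items():
    agree = within_part_agreement(results)
    status = "consistent" if agree == 1.0 else "INCONSISTENT"
    print(f"{part}: agreement={agree:.2f} ({status})")

# Overall repeatability: share of parts with 100% self-agreement.
repeatable = sum(within_part_agreement(r) == 1.0 for r in trials.values())
print(f"parts fully consistent: {repeatable}/{len(trials)}")
```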
 

optomist1

A Sea of Statistics
Super Moderator
Need more details... among them, how was the "goodness vs. badness" (aka Go/No-Go) criterion established? The more details, the better the response. What type of part or parts?
 

Bev D

Heretical Statistician
Leader
Super Moderator
While it’s helpful to know what the “box” does, you are correct that you need to resolve the repeatability (consistency) of the good/bad calls. Unfortunately, this does increase your sample size, since the sample must be representative of the failure rate.

If you can describe the ‘defect’ or ‘failure’ and more about the box we might be able to help you with determining accuracy.
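
To illustrate the sample-size point with rough binomial arithmetic (the 5% failure rate and sample sizes below are just placeholders, not a recommendation):

```python
# How likely is a sample to contain at least k bad parts, if the process
# failure rate is p? Plain binomial arithmetic; the numbers are hypothetical.
from math import comb

def prob_at_least(k, n, p):
    """P(at least k 'bad' parts in a sample of n, given failure rate p)."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

p = 0.05  # assumed 5% failure rate, for illustration only
for n in (20, 50, 100, 200):
    print(f"n={n:4d}: P(>=5 bad parts) = {prob_at_least(5, n, p):.3f}")
```

At a 5% failure rate, even 100 randomly drawn parts give you only slightly better than even odds of containing five truly failing parts, which is why the sample has to reflect the failure rate.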
 

Burnett

Registered
Thank you for the responses. The parts are cables that connect a camera sensor to an image processing device. The failure manifests as a very specific image artifact that our team has agreed represents a failed result. The “box” under test is the image processor itself. The problem is that the one-box/many-cable combination is not repeatable, and we’re not sure how much of that is contributed by the cables and how much by the box itself. We are using 10 suspected good cables and 10 suspected bad cables, but my point is that for the purpose of demonstrating whether the process has good R&R, it doesn’t necessarily matter: we can say that the process is not repeatable regardless of which component(s) are contributing.
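
To put a chance-corrected number on the repeatability claim, I’m thinking of something like a Fleiss’ kappa across repeated trials of the same box on the 20 cables. The trial count and verdicts below are invented just to show the calculation; a low kappa would support the “not repeatable” conclusion without ever needing a true reference for any cable.

```python
# Fleiss' kappa across repeated trials of the same device: a chance-corrected
# measure of how consistently the box classifies each cable. The 3-trial
# verdicts below are hypothetical (1 = pass, 0 = fail).
N_TRIALS = 3

good = [[1, 1, 1] for _ in range(10)]             # 10 suspected-good cables
bad = [[0, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1], [1, 1, 0],
       [0, 0, 0], [0, 1, 1], [0, 0, 0], [1, 0, 1], [0, 0, 0]]  # 10 suspected-bad
verdicts = good + bad

def fleiss_kappa(ratings, categories=(0, 1)):
    """ratings: list of per-subject lists of category labels (equal length)."""
    n = len(ratings[0])                      # ratings per subject (trials)
    N = len(ratings)                         # number of subjects (cables)
    # counts[i][j]: number of ratings of subject i in category j
    counts = [[row.count(c) for c in categories] for row in ratings]
    p_j = [sum(col) / (N * n) for col in zip(*counts)]           # category share
    P_i = [(sum(x * x for x in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

print(f"Fleiss' kappa over {N_TRIALS} trials: {fleiss_kappa(verdicts):.2f}")
```

With these made-up verdicts the kappa comes out around 0.56; commonly cited attribute-MSA rules of thumb treat values below roughly 0.75 as inadequate agreement, which is the kind of evidence I want for “the process is not repeatable.”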
 

Miner

Forum Moderator
Leader
Admin
This is a long shot, but your description of the problem and the use of a large number of cables reminds me of an intermittent quality issue I dealt with. A supplier had made an undisclosed change to an electronic device used in our product. This change made the device sensitive to electrical noise, and movement of the cables generated enough electrical noise to cause the device to fault.

Has EMI (Electro-Magnetic Interference) testing been performed on your test equipment?
 

Bev D

Heretical Statistician
Leader
Super Moderator
Miner is correct: if you can’t consistently fail and pass units, then you are most likely dealing with an intermittent failure, and rejecting or passing cables will be of no use. I’ve dealt with many intermittent problems, and they are often the cause of non-repeatable functional test failures such as yours. You may be in Problem Solving land, not MSA land.
 

Burnett

Registered
Thank you! Yes, it’s an intermittent issue for sure, but as you point out, I’m not at that phase of the investigation yet. I’m simply using the GR&R to officially “prove” to the team that the process is not repeatable, so we can align on it being a real problem and agree to the next step of root-causing which component(s) are producing the intermittency.
 