Hello,
I'm pretty new to the field of MSA/GR&R and was assigned to do a study on our automated test equipment.
First, let me explain the scenario:
In my field, at the end of the manufacturing process every part is electrically tested by automated test equipment (ATE). This ATE consists of many voltage/current sources and voltage/current meters. Each part is automatically placed in the test station by a so-called handler (with multiple test stations), the ATE performs measurements (breakdown voltages, leakage currents, gain, etc.) and gives a BIN code back to the handler, which then knows whether the part is pass or fail. The ATE can give back additional BIN codes, such as "passed the first 5 tests but failed the last", but for the sake of simplicity let's only look at pass/fail. That being said, the ATE does not only do attribute measurements: it measures the actual value and then decides pass or fail according to a given spec.
Since we are measuring semiconductors, there is variation in the parts themselves that cannot be neglected. Even if we choose chips from the same wafer, the variation across the wafer is significant.
Now my thoughts about the task:
If we look at the ATE as a black box which only gives back pass or fail - I would suggest doing an attribute study with 50 parts and checking whether the ATE gives back the same results over a given time frame (say, testing them each shift for one week)
-> this would be the "easiest", but from my point of view I'm making the assumption that the inside of the ATE is "empty" and the ATE can only say pass/fail
If we look at the ATE (black box) and handler as one measuring system. The handler has 3 appraisers (the test stations) and the ATE gives back pass/fail - same as the one above, but with the three test stations as appraisers
-> better, because we check whether all test stations give back the same pass/fail result
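For these attribute-study variants, the evaluation boils down to agreement statistics: does each station repeat its own verdict, and do the stations agree with each other? A minimal sketch in Python; the parts, station names, and pass/fail outcomes below are entirely made up for illustration:

```python
# Hypothetical attribute-study results: 3 parts shown, 3 test stations
# (acting as the "appraisers"), 2 trials each; 1 = pass, 0 = fail.
results = {
    1: {"A": [1, 1], "B": [1, 1], "C": [1, 1]},
    2: {"A": [0, 0], "B": [0, 1], "C": [0, 0]},  # station B disagrees with itself
    3: {"A": [1, 1], "B": [1, 1], "C": [0, 0]},  # station C disagrees with A and B
}

# Within-station agreement: did each station repeat its own verdict?
for station in ("A", "B", "C"):
    consistent = sum(len(set(part[station])) == 1 for part in results.values())
    print(f"station {station}: {100 * consistent / len(results):.0f}% self-consistent")

# Between-station agreement: did every station and trial give the same verdict?
unanimous = sum(len({v for trials in part.values() for v in trials}) == 1
                for part in results.values())
print(f"all stations agree on {100 * unanimous / len(results):.0f}% of parts")
```

With known-reference parts (parts whose true pass/fail status is established independently) the same loop can also score each station against the reference, which is what a full attribute agreement analysis would do.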
If we look at the ATE as a measuring system consisting of many small measurement systems. We would have to do a study for all voltage/current sources and meters and check that they give consistent measurements
-> here comes the problem with the part itself. When measuring the gain, the part heats up, which then changes the gain (physics). This can be "adjusted" by using shorter measuring times etc. Also, measuring leakage currents in the nA range is difficult to do in a short time.
-> one solution would be to use high-precision resistors which are stable across a wide range, but this bothers me a bit because I'm not using production parts. What works for the resistors does not automatically mean that it will work for the semiconductors, and I'm also not taking the handling process of the handler into account.
If we look at the ATE as a measuring system consisting of many small measurement systems, plus the handler with one test station. We would load the handler with 50 parts from one batch, start the measurement process, and record all data from the ATE. After all parts have been measured, we would repeat the run three more times. With the data we would do an ANOVA and look at where the variance is coming from
-> we will still have the same problem with the measurements, but now we are really using the machine as intended, with production parts.
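For this single-station layout, the variance split can be done with a plain one-way ANOVA: parts are the factor, the four runs are the repeats, and the method-of-moments estimates separate repeatability from part-to-part variation. A sketch with simulated data, assuming one continuous characteristic; the effect sizes (100 nA mean, 5 nA part spread, 0.5 nA repeatability) are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 50 parts, each measured 4 times on one test station.
# Rows = parts, columns = repeat runs (e.g. a leakage current in nA).
n_parts, n_reps = 50, 4
true_part_values = rng.normal(100.0, 5.0, size=n_parts)   # part-to-part variation
data = true_part_values[:, None] + rng.normal(0.0, 0.5, size=(n_parts, n_reps))

# One-way ANOVA: parts are the factor, repeats are the replicates.
grand_mean = data.mean()
part_means = data.mean(axis=1)

ss_part = n_reps * ((part_means - grand_mean) ** 2).sum()
ss_error = ((data - part_means[:, None]) ** 2).sum()

ms_part = ss_part / (n_parts - 1)
ms_error = ss_error / (n_parts * (n_reps - 1))

# Method-of-moments variance components:
var_repeatability = ms_error
var_part = max((ms_part - ms_error) / n_reps, 0.0)

total_var = var_repeatability + var_part
pct_grr = 100.0 * np.sqrt(var_repeatability / total_var)  # repeatability-only %GRR

print(f"repeatability sigma: {np.sqrt(var_repeatability):.3f}")
print(f"part-to-part sigma:  {np.sqrt(var_part):.3f}")
print(f"%GRR: {pct_grr:.1f}%")
```

The same decomposition would be run per test parameter (each breakdown voltage, leakage current, gain, etc.), since each is its own measurement system inside the ATE.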
If we look at the ATE as a measuring system consisting of many small measurement systems, plus the handler with three test stations. Same as before, but we would also look at the differences between the test stations.
-> This would be my choice, because: we use production parts, the machine runs as it normally would, and all test stations are in use.
--> the downside here is the technical difficulties we would have to solve, e.g. the parts cannot be marked, so I can't see which one is in which station (the parts are small QFNs). We could trick the handler etc., but that is a different discussion.
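For this crossed layout (parts x test stations x repeats), the standard ANOVA-method gauge R&R treats the stations like operators and splits the variance into repeatability, station-to-station reproducibility, part*station interaction, and part-to-part variation. A sketch with simulated data; the study size and all effect magnitudes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical crossed study: p parts x s test stations x r repeats per cell.
p, s, r = 10, 3, 3
part_eff = rng.normal(0.0, 5.0, size=p)       # part-to-part variation
station_eff = rng.normal(0.0, 0.8, size=s)    # systematic station offsets
data = (100.0
        + part_eff[:, None, None]
        + station_eff[None, :, None]
        + rng.normal(0.0, 0.5, size=(p, s, r)))  # repeatability noise

grand = data.mean()
part_means = data.mean(axis=(1, 2))
station_means = data.mean(axis=(0, 2))
cell_means = data.mean(axis=2)

ss_part = s * r * ((part_means - grand) ** 2).sum()
ss_station = p * r * ((station_means - grand) ** 2).sum()
ss_inter = r * ((cell_means - part_means[:, None]
                 - station_means[None, :] + grand) ** 2).sum()
ss_error = ((data - cell_means[:, :, None]) ** 2).sum()

ms_part = ss_part / (p - 1)
ms_station = ss_station / (s - 1)
ms_inter = ss_inter / ((p - 1) * (s - 1))
ms_error = ss_error / (p * s * (r - 1))

# Method-of-moments variance components (crossed random-effects model):
var_repeat = ms_error
var_inter = max((ms_inter - ms_error) / r, 0.0)
var_station = max((ms_station - ms_inter) / (p * r), 0.0)
var_part = max((ms_part - ms_inter) / (s * r), 0.0)

var_grr = var_repeat + var_station + var_inter  # gauge R&R
var_total = var_grr + var_part
print(f"%GRR: {100.0 * np.sqrt(var_grr / var_total):.1f}%")
```

The part identity must be tracked through the handler for this to work, which is exactly the QFN-marking problem mentioned above; without part tracking, the crossed model cannot be fitted and only the pooled single-station analysis remains available.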
Overall, I would not do the first two, because an attribute study is for go/no-go gauges. The third would be okay, but it would not take all aspects of the measuring system into account. The fourth or fifth would be my choice.
But what is your opinion on this subject?