I am looking at a sample size chart that one of my clients (a medical device company) made for validation testing based on confidence/reliability requirements, and I'm trying to understand how the values were calculated. For attribute data, and for continuous data with a one-sided specification (zero failures allowed), they have the following table, which I can see was calculated from the binomial distribution:
| confidence \ reliability | 80% | 90% | 95% | 99% |
|---|---|---|---|---|
| 80% | 8 | 16 | 32 | 161 |
| 90% | 11 | 22 | 45 | 230 |
| 95% | 14 | 29 | 59 | 299 |
| 99% | 21 | 44 | 90 | 459 |
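For context, these one-sided/attribute numbers are consistent with the standard zero-failure "success-run" sample size from the binomial distribution, n = ln(1 - C) / ln(R) rounded up. Here is a small Python sketch (my own check, not the client's code) that reproduces the first table under that assumption:

```python
import math

def n_one_sided(confidence, reliability):
    """Smallest n such that, with zero failures observed,
    reliability**n <= 1 - confidence (the success-run formula)."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

for c in (0.80, 0.90, 0.95, 0.99):
    print([n_one_sided(c, r) for r in (0.80, 0.90, 0.95, 0.99)])
# [8, 16, 32, 161]
# [11, 22, 45, 230]
# [14, 29, 59, 299]
# [21, 44, 90, 459]
```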
However, what I can't figure out is how they calculated the suggested minimum sample sizes for continuous data with a two-sided specification (also zero failures):
| confidence \ reliability | 80% | 90% | 95% | 99% |
|---|---|---|---|---|
| 80% | 14 | 29 | 59 | 299 |
| 90% | 18 | 38 | 77 | 388 |
| 95% | 22 | 46 | 93 | 473 |
| 99% | 31 | 64 | 130 | 662 |
Does anyone know what equation or statistical rationale this second table is based on?