Hello, my apologies if this has been discussed but I'm just not finding it based on my search terms.
We had an observation (not a non-conformance) in an audit that our QMS tool testing was not robust enough and that we should set up a sampling plan using ISO 2859-1.
We make software as a medical device (SaMD), and all our QMS tools are software: either tools for creating and managing the device code, or tools for project management and documentation.
I have figured out how to determine what our sample size would be based on average usage of the tools, but it still seems an odd way to test software. For example, in a project planning tool I could add five tasks (as sample records) to test functionality, but that doesn't test whether I could create a project to hold them. And creating a project vs. creating a task would have different pass/fail criteria. It seems like overkill to determine sample sizes for each type of item in a software tool.
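For context, here's roughly how I'm deriving the sample size today. This is only a sketch of my current approach, assuming general inspection level II and single sampling under normal inspection, with the "lot" defined as the average number of records (e.g. tasks) created in a tool over the review period; the lot-size bands and sample sizes below are the usual table values, so check them against your own copy of the standard.

```python
# Sketch: derive an ISO 2859-1 style sample size from "average usage" of a tool.
# Assumes general inspection level II, single sampling, normal inspection.
# Lot size = average number of records (e.g. tasks) created per review period.
# Table values should be confirmed against the standard itself.

# (lot size upper bound, code letter, sample size)
PLAN = [
    (8, "A", 2),
    (15, "B", 3),
    (25, "C", 5),
    (50, "D", 8),
    (90, "E", 13),
    (150, "F", 20),
    (280, "G", 32),
    (500, "H", 50),
    (1200, "J", 80),
    (3200, "K", 125),
    (10000, "L", 200),
    (35000, "M", 315),
    (150000, "N", 500),
    (500000, "P", 800),
]

def sample_size(lot_size: int) -> tuple[str, int]:
    """Return (code letter, sample size) for a given lot size."""
    for upper, letter, n in PLAN:
        if lot_size <= upper:
            return letter, n
    return "Q", 1250  # lots over 500 000

# Example: roughly 400 tasks created per quarter in the project planning tool
letter, n = sample_size(400)
print(f"Code letter {letter}: sample {n} records")  # Code letter H: sample 50 records
```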
I am not opposed to more robust testing (ours is a bit thin), but either it gets more complicated (testing the hierarchy of item types that could be entered into a project planning tool), or, if we keep it simple (a straight sample size based on individual tasks), we end up testing only task-specific features and possibly never exercising the rest.
Additionally, we are not developing our own tools; these are all off-the-shelf (OTS) items.
Does anyone have any recommendation on how to use ISO 2859-1 for software QMS tools, or a way to document what we did so the auditor can see we looked at it but determined there was a "better" way to decide what and how to test?