ISO 2859-1 for QMS tool validation

SSchoepel

Involved In Discussions
Hello, my apologies if this has been discussed but I'm just not finding it based on my search terms.

We had an observation (not a non-conformance) in an audit that our QMS tool testing was not robust enough and that we should set up a sampling plan using ISO 2859-1.

We make software as medical devices (SaMDs), and all our QMS tools are themselves software - either tools for creating and managing the device code, or tools for project management and documentation.

I have figured out how to determine what our sample size would be based on average usage of the tools, but it still seems an odd way to test software. For example, in a project planning tool I could add five tasks (as sample records) to test functionality, but that doesn't test whether I could create a project to hold them. And creating a project vs. creating a task would have different pass/fail criteria. It seems like overkill to determine sample sizes for the various types of items in a software tool.
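For context, the sample-size lookup I'm describing boils down to a table lookup against the standard. Here is a minimal Python sketch, assuming general inspection level II and normal inspection; the lot-size ranges, code letters, and sample sizes are transcribed from commonly published copies of the standard's Tables 1 and 2-A, so verify them against a licensed copy before relying on them:

```python
# Hedged sketch of an ISO 2859-1 single-sampling sample-size lookup
# (general inspection level II, normal inspection).
# Table values below are assumed transcriptions -- verify against
# your licensed copy of the standard.

# (upper lot-size bound, sample size code letter, sample size)
LEVEL_II_NORMAL = [
    (8, "A", 2), (15, "B", 3), (25, "C", 5), (50, "D", 8),
    (90, "E", 13), (150, "F", 20), (280, "G", 32), (500, "H", 50),
    (1200, "J", 80), (3200, "K", 125), (10000, "L", 200),
    (35000, "M", 315), (150000, "N", 500), (500000, "P", 800),
]

def sample_size(lot_size: int) -> tuple[str, int]:
    """Return (code letter, sample size) for a given lot size."""
    if lot_size < 2:
        raise ValueError("lot size must be at least 2")
    for upper, letter, n in LEVEL_II_NORMAL:
        if lot_size <= upper:
            return letter, n
    return "Q", 1250  # lots over 500 000

# e.g. treating ~1000 task records created per month as the "lot"
letter, n = sample_size(1000)
print(letter, n)  # J 80
```

Even with the lookup automated, the conceptual problem remains: the "lot" of records a tool holds isn't a production lot, and sampling records says little about untested features.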

I am not opposed to more robust testing (what we have is a bit thin), but either it gets complicated (testing hierarchies of the types of items that could be entered into a project planning tool), or, if we keep it uncomplicated (a straight sample size based on individual tasks), we test only task-specific features and possibly miss other features entirely.

Additionally, we do not develop these tools ourselves; they are all off-the-shelf (OTS) items.

Does anyone have a recommendation on how to use ISO 2859-1 for software QMS tools, or a way to characterize what we do so the auditor is satisfied that we looked at the standard but determined there was a "better" way to decide what and how to test?
 

yodon

Leader
Super Moderator
Wow... I'm scratching my head over sampling of software. I guess some auditors have a hammer and everything looks like a nail.

Since this is non-product software validation, I suggest you take a look at the FDA (draft) guidance on computer software assurance. I think it provides a practical foundation for such activities - especially leveraging the fact that these are probably commonly and widely used commercial applications. Higher-risk applications get greater attention. That's proper risk-based thinking. Not sample size silliness.

For compilers and the like, we assert that the output (the binaries) is tested, which provides more assurance of properly functioning tools than any validation effort would.

If you have a master plan and stick to your plan (and proactively consider the effectiveness of the tools), the auditor should be happy.

Your statement that "all our tools are software items" concerned me a bit. Normally (per 62304), the term "software item" would be in the context of product software, not support software. I don't know if that may be causing confusion.
 

SSchoepel

Involved In Discussions
Thank you. That was my thought after wading into that standard. I think I'll write up some justification in the OFI record from the audit as to why it's not appropriate and move on.

We do have our validation process set up per that guidance, but I'll take a look to see whether there's an expansion I can make to it, just to show that we changed the testing. I also have clauses in there stating that compilers/build tools are validated by the fact that their output passes testing.

Regarding your software item reference, I'm talking about QMS software, not our device/product software, and that was clear to the auditor. He has audited us before, and this is the first time he's had an issue with the QMS tool validation.
 