Evolution of SW functionalities based on Machine-Learning algorithms - qualification of changes (significant/non-significant according to MDCG 2020-3)

Galac

Involved In Discussions
Hello,

Our software is constantly evolving to improve performance, reduce risk and offer new features.

Our software incorporates functionalities based on Machine-Learning algorithms. Depending on user feedback or other PMS inputs, we are often called upon to improve these algorithms (by retraining them on new data or with a new model), or to adapt existing algorithms to other similar applications (e.g. a different anatomical zone: segmentation of an aorta image versus segmentation of a bone structure such as the spine).
In practice, the ML principles and methods remain the same each time; only the training and validation data and the models change.

When I use MDCG 2020-3, I find it hard to assess the significance of these changes.

Do you have any suggestions for qualifying these changes as significant or non-significant, and also justifying them?
Thank you.
 

shimonv

Trusted Information Resource
The EU regulatory authorities are not up to speed with ML practices. The guidance, being guidance, defines an algorithm change as a significant change. I suggest you define an ML change as significant when it has an impact on the hazard analysis table (new or modified risk). Otherwise, every ML change will require NB review...
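
To make the rule concrete, here is a rough Python sketch (all the names and fields are hypothetical, just to illustrate the decision logic, not anything from MDCG 2020-3):

```python
from dataclasses import dataclass

@dataclass
class MLChange:
    """Hypothetical record of one ML algorithm change (illustrative only)."""
    description: str
    new_risks: list        # hazards added to the risk analysis
    modified_risks: list   # existing hazards whose rating changed

def is_significant(change: MLChange) -> bool:
    """Risk-based rule: treat the change as significant (NB review)
    only if it adds or modifies entries in the hazard analysis table."""
    return bool(change.new_risks or change.modified_risks)

# Example: retraining on new data, same intended use, no risk impact.
retrain = MLChange("Retrained aorta segmentation model on new PMS data", [], [])
print(is_significant(retrain))  # False -> document as non-significant
```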
 

Galac

Involved In Discussions
Dear Shimonv,
Thank you for quickly sharing your thoughts. I agree with your risk-based approach and will follow that path.
...I'm afraid our NB will remain glued to the MDCG guidance.
We will then be (once more) completely dependent on their level of understanding and interpretation (this promises to be another long debate with our NB).
 

shimonv

Trusted Information Resource
If I may add, Notified Bodies typically lack understanding of software development processes.
So take advantage of that when you document your decisions in the engineering change order.

Shimon
 

dgrainger

Trusted Information Resource
At the moment, any change to the algorithm would be considered a significant change and will need NB review. :(
 

yodon

Leader
Super Moderator
You might want to have a look at two documents the FDA published: Good Machine Learning Practice for Medical Device Development: Guiding Principles and the guidance Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions. The latter suggests having a (documented) change control plan and gives some direction on doing impact assessments (which could be used to justify whether or not regulatory notification was required).

As others have indicated, the regulatory bodies are lagging a bit on expertise here, but having a documented plan and impact assessment could go a long way toward supporting your decisions. They still may not like it, but you will have a good paper trail.
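
Just to illustrate the kind of paper trail that could support such an impact assessment (the field names below are my own invention, not taken from the FDA guidance):

```python
# Hypothetical structure for one entry in a predetermined change
# control plan; field names are illustrative only.
pccp_entry = {
    "change_id": "CHG-042",
    "description": "Retrain spine segmentation model with new PMS data",
    "change_type": "retraining",    # vs. "new architecture", "new indication"
    "within_pccp_envelope": True,   # was this change pre-specified in the plan?
    "impact_assessment": {
        "intended_use_changed": False,
        "performance_acceptance": "Dice >= 0.92 on the locked test set",
        "risk_analysis_updated": False,
    },
    "regulatory_notification_required": False,
}
print(pccp_entry["regulatory_notification_required"])  # False
```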
 

Ed Panek

QA RA Small Med Dev Company
Leader
Super Moderator
For the FDA, AI seems to have caught them a bit flat-footed. They understand that AI improves on its own, but some explanation is needed. For example, your release is benchmarked against 10 problems. It solves 8/10; on problems A and B it fails.

A new software algorithm is developed that solves A and B but now fails C. You release it, but the FDA struggles to understand why C passed before and now fails, even though the software is better overall, with a 90% score vs. 80%.
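
One way to frame that explanation (my own rough sketch, not anything the FDA prescribes) is to diff per-problem results between releases instead of reporting only the aggregate score:

```python
# Hypothetical per-problem pass/fail results for two releases.
old = {"a": False, "b": False, "c": True, "d": True, "e": True,
       "f": True, "g": True, "h": True, "i": True, "j": True}  # 8/10
new = {"a": True, "b": True, "c": False, "d": True, "e": True,
       "f": True, "g": True, "h": True, "i": True, "j": True}  # 9/10

# Problems newly solved vs. newly broken by the release.
fixed = [p for p in old if not old[p] and new[p]]
regressed = [p for p in old if old[p] and not new[p]]

print(f"score: {sum(old.values())}/10 -> {sum(new.values())}/10")
print(f"fixed: {fixed}, regressed: {regressed}")
# Each regression ("c" here) needs its own explanation in the
# submission, even though the aggregate score improved.
```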
 