Hello,
Our software is constantly evolving to improve performance, reduce risk and offer new features.
Our software includes functionality based on machine-learning algorithms. Depending on user feedback or other PMS inputs, we are often called upon to improve these algorithms (by retraining them with new data or a new model), or to adapt algorithms already developed for similar applications (e.g. a different anatomical zone, such as segmentation of an aorta image versus segmentation of a bone structure like the spine).
In practice, the ML principles and methods remain the same each time; only the training and validation data and the models change.
When I use MDCG 2020-3, I find it hard to assess the significance of these changes.
Do you have any suggestions for qualifying these changes as significant or non-significant, and for justifying that qualification?
Thank you.