Examples of inherent safety by design

sagai

Quite Involved in Discussions
Guys,
My utterly subjective opinion is coming ...

We can do an awful lot of things; however, we cannot mitigate software risk by software at all.
It is the nonsensical nature of this medical industry's approach that there is a category like Software as a Medical Device (or Medical Device Software in the EU) in isolation from the hardware.
Not a surprise, though: have a look at how many of the regulatory guys are lawyers and how many of them have an engineering background.

Another thing ...
On this ISO 14971 approach: strictly speaking, the main normative text calls for a HARA (hazard analysis and risk assessment). Millions of times, people believe that it equals an FMEA of some kind. Nonsense, to put it politely.

A similar thing starts to evolve here when we focus on device failure rather than on the potential harm to patients from using a device (this piece of software, which I doubt would do anything without hardware to run on) that is 100% functional.

So what can we do, then, with this software thing ...

There is the risk-benefit analysis, which the domain medical expert can ponder for quite a while: is this piece of software still beneficial to use for medical purposes, with all its limitations, with the miserable nature of all of its use cases on patients, et cetera?

Cheers
Saby
 

Watchcat

Trusted Information Resource
Not a surprise, though: have a look at how many of the regulatory guys are lawyers and how many of them have an engineering background.
So now I am wondering who you mean by "regulatory guys." The regulators?

The person who has led the SaMD effort at FDA (and is the FDA member of the IMDRF SaMD working group) is Bakul Patel, who has a BS in electronics and communication, and an MS in electronic systems engineering. Also an MBA, but no JD.

This announcement posted by Bakul last year gives you an idea of what kinds of backgrounds FDA thinks are needed for "digital health":
https://www.linkedin.com/pulse/fda-seeking-digital-health-advisor-team-bakul-patel/
I'm pretty sure no lawyers need apply. :p

It is the nonsensical nature of this medical industry's approach that there is a category like Software as a Medical Device (or Medical Device Software in the EU) in isolation from the hardware.

If you mean that it is nonsense for "the regulatory guys" to separate what many previously referred to as "standalone" software with a medical purpose from what I think (?) many still call "embedded" software, part of me agrees, and part of me does not. What I think is the same is...software is software is software. What is different is the hardware on which it runs (because no software really stands alone). For the one, it is the hardware it was designed to run on. For the other...it seems to be "whatever." I'm not a software or hardware expert, but for some reason, that "whatever" just doesn't sound good to me.

What I think is very different is those who develop the one versus the other. I am terrified by the SaMD crowd, and if I were FDA, I'd be building an army to defend the healthcare fort from them. Maybe that is exactly what FDA is doing. Or maybe it's just trying to ride the money wave, same as the SaMD developers.
 

Ronen E

Problem Solver
Moderator
Let's not beat around the bush.
Of course SW is useless without any HW to run on.
My own take is that the "SaMD" concept (or "standalone SW") was invented only to avoid the minefield of trying to apply the current regulatory paradigm to OTS HW such as PCs and mainstream operating systems. At face value it's much easier to "just regulate the SW" and make do with the fig leaf of "whatever HW the SW developer deems acceptable." A much safer, more reliable approach would have been to allow only specified SW-HW combinations, but of course that would have rendered 90% (my guess) of the applications economically unviable for manufacturers, users, or both.
 

robert.beck

Involved In Discussions
What do you consider to be "the regulatory field"? Regulators? People who work in regulated industry?
I meant those whose primary expertise is knowledge of the regulations. This is a broad group of professionals: it includes regulatory consultants, whether solo or part of a larger consulting organization; people whose job is to supply internal regulatory advice; and those who work for FDA or a Notified Body and interact with medical device companies as auditors or reviewers. Others who work in a regulated industry may be hostile or indifferent to the regulations; this group consists of managers, engineers, product developers, and especially marketeers. In my experience, most of the professional software developers working for medical device companies have no clue about the regulations or, sadly, about best practices. They see best practices as a government conspiracy to control their work and insistently refuse to even read the regulations to see if there is anything to be gleaned from them. This is in contradistinction to reality, which is that many FDA guidance documents are more like white papers summarizing best practices.
 

Watchcat

Trusted Information Resource
When speaking about the profession to regulatory colleagues, I often refer to "our perpetual identity crisis." When I started out, nobody wanted to be "Regulatory"...a bunch of boring bureaucrats doing administrative paperwork in suits. I miss those days. Now everybody wants to be "Regulatory."

With our perpetual identity crisis unsolved, I cannot say that your definition is incorrect, nor can I give you an authoritative definition, but, IMO, those whose primary expertise is knowledge of the regulations are in the compliance field. Those who understand the regulations....and the regulators...are in the regulatory field. Compliance justifies everything with "because it's required."

As a regulatory professional, what is required is the least of my interests. It is the last thing to consider, after I've worked out with the company experts what is needed and why (risk analysis). Then I look at it from the regulator's perspective and ask if the regulator is likely to agree with the company experts (not based on what is required, but on whether the company's explanation is both persuasive and meets the regulator's (usually political CYA) needs). Even then, if anyone is going to bring up what is "required," it's going to be the regulator, not me. If it does come up, then it falls to me to work that out with the regulator in a manner that best addresses the interests of the company.

Also, IMO, the regulations "require" pretty darn little, if you don't try to use them as an instruction manual to spare yourself the effort of having to think.
 

robert.beck

Involved In Discussions
Probably more of an uninformed risk, like most patients, since information can be hard to come by. I don't know who used the term "slam dunk," but I'd steer clear of any medical professional who used this term to refer to the anticipated outcome of a surgical procedure.

I agree, if in the US, it's almost certain to be legal damage control.

Yes, you're correct. The whole thing came on so quickly that I did not have time to do any research. After the first surgery failed, I started reading quality journal articles on the subject (JAMA, etc.) and found there were some things I could have done that might have increased the odds of success, but there is still uncertainty because of the idiopathic nature of the condition. No one mentioned this except in an offhand, casual way. The surgeon himself did not discuss success or failure, merely mentioning that my vision might not be as good as it was. "Slam dunk" is my assessment of the overall, non-verbal sense of confidence exuded by everyone involved. Physicians tend to treat their patients as uninformed and not very bright, which is true much of the time.

When I drove my children to school, they assumed the car worked properly, that I knew how to drive safely, and that they were seated in the car properly. Over the course of 18 years, I was stopped twice by police officers for driving over the speed limit by a few miles, trying to get to school on time. In one case the officer told me the car seat was being used incorrectly; in the other, my daughter started crying and the officer kept apologizing. This is getting off the topic of my original question, which was NOT to discuss the shortcomings of the American medical system or any other system.

The basic question I posed on this thread is: the regulations require that the probability of a software error occurrence be assessed at 100%. This is a constraint that must be followed if a company wants to be compliant with IEC 62304 and the US QSR, both of which are now applicable under MDSAP. Unfortunately, this is not a good regulation, but it's there and must be dealt with. The practical question is how to mitigate software risk, in a document that will be reviewed by an auditor who is knowledgeable about these regulations, without using detection (easily done by alarms, but no longer allowed per ISO 14971) or probability (not allowed by FDA and IEC 62304). It's a business question: how do you avoid a nonconformity that is pointless and where fixing it won't improve the product? I posed one idea earlier (9 PM yesterday), but no one has commented on that directly. Now I have another idea about this:

1. Compute risk exposure, or level, or whatever you wish to call it, using this formula: probability of paying the full cost of the harm × severity of the harm (a worked sketch follows this list). Source: modified from the Risk Management chapter of the CSQE Handbook, 2009.
2. This results in a dollar amount attached to the risk, making it quantitative and easier to evaluate.
3. Categorize the risk based on its expected cost.
4. The advantage of this method is that probability is brought into the assessment. The disadvantage is that economic considerations are also brought in (not permitted by ISO 14971:2012). One counterargument is that the risk is being reduced as far as possible, in a better way than just using severity. The better counterargument is that the risk-benefit analysis for this risk can explain how this fits into using inherently safe design to reduce risk to an acceptable level.
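
Here is a minimal Python sketch of the arithmetic in item 1, under assumed numbers: the hazards, probabilities, dollar costs, and category thresholds below are invented for illustration, not taken from the CSQE Handbook or any standard.

```python
# Illustrative only: expected-cost view of software risk.
# Risk exposure = P(paying the full cost of the harm) x cost of the harm.

hazards = [
    # (description, P(paying full cost of harm), full cost of harm in $)
    ("wrong dose displayed",       0.02, 500_000),
    ("delayed alarm log entry",    0.10,   5_000),
    ("UI freeze, therapy restart", 0.05,  50_000),
]

def risk_exposure(probability: float, cost_usd: float) -> float:
    """Expected cost of the risk in dollars."""
    return probability * cost_usd

def categorize(exposure_usd: float) -> str:
    """Assumed example thresholds; a real scheme would be company-specific."""
    if exposure_usd >= 10_000:
        return "high"
    if exposure_usd >= 1_000:
        return "medium"
    return "low"

for name, p, cost in hazards:
    e = risk_exposure(p, cost)
    print(f"{name}: exposure ${e:,.0f} -> {categorize(e)}")
```

Note how the dollar scale spreads the three invented hazards across all three categories, which is the point of item 2: the ranking becomes quantitative rather than severity-only.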

Comments are sought on this approach to risk mitigation. Comments on the medical system or the quality of this regulation are not sought. Thank you.
 

robert.beck

Involved In Discussions
When speaking about the profession to regulatory colleagues, I often refer to "our perpetual identity crisis." When I started out, nobody wanted to be "Regulatory"...a bunch of boring bureaucrats doing administrative paperwork in suits. I miss those days. Now everybody wants to be "Regulatory."

With our perpetual identity crisis unsolved, I cannot say that your definition is incorrect, nor can I give you an authoritative definition, but, IMO, those whose primary expertise is knowledge of the regulations are in the compliance field. Those who understand the regulations....and the regulators...are in the regulatory field. Compliance justifies everything with "because it's required."

As a regulatory professional, what is required is the least of my interests. It is the last thing to consider, after I've worked out with the company experts what is needed and why (risk analysis). Then I look at it from the regulator's perspective and ask if the regulator is likely to agree with the company experts (not based on what is required, but on whether the company's explanation is both persuasive and meets the regulator's (usually political CYA) needs). Even then, if anyone is going to bring up what is "required," it's going to be the regulator, not me. If it does come up, then it falls to me to work that out with the regulator in a manner that best addresses the interests of the company.

Also, IMO, the regulations "require" pretty darn little, if you don't try to use them as an instruction manual to spare yourself the effort of having to think.
You raise some interesting points, which I'm not prepared to go into today, but I will think about "compliance" vs. "regulatory" in the context of business politics and managers striving to climb the corporate ladder. At first blush, I disagree with your last sentence. The regulations require time to understand how little they actually require, and most people hate thinking, so the better-written regulations and guidance documents are good instruction manuals. In terms of compliance, as the "regulatory" person, I am on the front line where the rubber meets the road. Behind me is a huge infrastructure of lazy people, somewhat incompetent people, many people with a completely different skill set than mine, and people who are moving up or out of the organization.
 

Peter Selvey

Leader
Super Moderator
Re comment #45, which concludes that failure of software has to be assumed to be 100% (let's call it SWF100): it's an understandable misconception, but it's not correct and really needs to be squashed at every chance (sorry, Robert!).

In IEC 62304 and the FDA guidance, the key reference to SWF100 relates to classification, which decides the level of design controls. In general, it's also normal practice to use this kind of worst-case scenario to get a feel for what could go wrong. It's a kind of "pre-screen" approach that focuses on the severity of a system failure, one you would normally use with any complex system (hardware, mechanical, or software).

However, once you move to the next phase, which includes deciding if and what kind of risk control is necessary and, critically, judging whether that risk control is effective, it is neither practical nor expected to continue to use SWF100. Full stop. No competent regulator or authority would ever ask for that. You can guarantee it (100%). Why? Because there are many high-risk devices that rely on software for both control and protection, and any rule that required SWF100 would make these devices both impractical and illegal. Which is nonsense, and the regulators know it.
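
To make the two phases concrete, here is a hedged Python sketch of that split. The A/B/C class names follow IEC 62304, but the severity labels, probabilities, and function names are my own illustration, not taken from the standard.

```python
# Illustrative only: the two-phase use of "assume failure" (SWF100).
# Phase 1 (classification): assume the software fails and classify on
# worst-case severity alone. Phase 2 (risk control effectiveness):
# probability may legitimately enter once an independent control exists.

def safety_class(worst_case_severity: str) -> str:
    """Phase 1 pre-screen in the spirit of IEC 62304: severity of harm
    assuming the software fails, with no credit given for probability."""
    return {
        "no_injury": "A",
        "non_serious_injury": "B",
        "serious_injury_or_death": "C",
    }[worst_case_severity]

def residual_risk(p_harm_given_fault: float, p_control_fails: float) -> float:
    """Phase 2: with an independent risk control in place, harm requires
    both the fault AND the control to fail -- SWF100 is not carried here."""
    return p_harm_given_fault * p_control_fails

print(safety_class("serious_injury_or_death"))  # -> C
print(residual_risk(1.0, 1e-4))                 # -> 0.0001
```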

Guidance documents (including informative annexes in standards) often refer to SWF100, but the hidden context is this early getting-to-know-your-system phase. Unfortunately, since they don't say this explicitly, it leads to the frequent incorrect interpretation that SWF100 is required at all stages of the analysis (in all likelihood because the authors did not really think it through).

Comment #45 also refers to an incorrect interpretation of Annex ZA: this annex is informative, but its key point is to make clear that in Europe, minimizing risk is the criterion, not acceptable risk from ISO 14971. In other words, if a manufacturer follows EN ISO 14971 perfectly and determines the risk is acceptable, Europe can still deem the device illegal, even though EN ISO 14971 is a harmonized standard. It's a nuance, but an important one, especially from a legal perspective. Unfortunately, the annex goes on to give justifications where the acceptable-risk model has been abused (e.g. overuse of warnings/cautions), which in turn have been misinterpreted as normative requirements that warnings/cautions are no longer acceptable risk controls. In that regard, it's good to keep in mind that the annex is only informative. Its main point is to provide a background legal argument that compliance with ISO 14971 is not the end of the story as far as Europe is concerned.

This post is already too long, so I won't delve (too much) into the subject of how to decide the failure rate of software. Suffice it to say, no field has a good handle on this. The examples given (that a faulty line of code will always fail if executed) are so oversimplified as to be meaningless. In reality, software failure rates are on par with, and perhaps even lower than, hardware failure rates.

The problem is not the failure rates but our natural inclination to treat software as perfect until proven otherwise. In hardware we have an innate sense that things can go wrong, and not just from random faults but from systematic faults as well (I suspect systematic faults dominate hardware issues, too). I have also found that hardware has a natural "cleaning process" between the prototype stage (with its messy wiring, fixes, trial and error) and real production (laying out the schematics, laying out the PCB); this cleaning process helps to improve reliability and also gives a second chance to think about potential issues. Software does not have this: software written with good intent but still containing all the mess from trial and error can easily end up going straight to production. Ideally, software should be written twice (just like hardware): once just to get the bloody thing up and running, and again to make the solid, reliable version. And no matter what controls are applied, never rely on a single system (hardware, software, or mechanical) for high-severity stuff. Always employ two independent systems. It's not that hard, really ...
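
For the "two independent systems" point, here is a minimal Python sketch of the pattern. All names and numbers (compute_pump_rate, ProtectionMonitor, the 500 ml/h limit) are invented for illustration, not from any real device: the idea is only that the protection path shares no logic with the control path, so a single systematic fault cannot defeat both at once.

```python
# Illustrative only: a primary control path plus an independent
# protection monitor, per the "never rely on a single system" principle.

MAX_SAFE_RATE_ML_PER_H = 500.0  # assumed hard safety limit

def compute_pump_rate(prescribed_ml_per_h: float) -> float:
    """Primary control path: may contain undiscovered systematic faults."""
    return prescribed_ml_per_h  # imagine complex dose logic here

class ProtectionMonitor:
    """Independent check: shares no code or logic with the control path."""
    def check(self, measured_rate: float) -> bool:
        return measured_rate <= MAX_SAFE_RATE_ML_PER_H

def run_cycle(prescribed: float) -> str:
    rate = compute_pump_rate(prescribed)
    if not ProtectionMonitor().check(rate):
        return "TRIP: protection system stopped the pump"
    return f"pump running at {rate} ml/h"

print(run_cycle(120.0))   # normal operation
print(run_cycle(9000.0))  # control-path fault caught by the monitor
```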
 

Marcelo

Inactive Registered Visitor
As I mentioned somewhere else, I've tried several times to have the classification removed from IEC 62304 because, in my opinion, it creates more problems (these discussions) than it solves (the "leveled" activities and paperwork required for "low risk" software). In my opinion, the best practice is: forget the classification and do everything (and for "low risk" software, you would certainly need to do a little "more").
 