Examples of inherent safety by design

Marcelo

Inactive Registered Visitor
The intent here is to ensure you reduce risks to a level that is "As Low As Possible" (ALAP). There's no such thing as "As Low As Reasonably Practicable" (ALARP) in this release of the standard.
The intention is that, if you disregard the introduction of the directive (which clearly states:
Whereas the essential requirements and other requirements set out in the Annexes to this Directive, including any reference to ‘minimizing’ or ‘reducing’ risk must be interpreted and applied in such a way as to take account of technology and practice existing at the time of design and of technical and economical considerations compatible with a high level of protection of health and safety;),

it's impossible (without infinite resources) to apply the concept of "As Low As Possible" alone :p
 

Marcelo

Inactive Registered Visitor
For historical context on the problem, see the mention of Edwards v. National Coal Board [1949] here - (broken link removed).
 

sagai

Quite Involved in Discussions
Interesting subject ...

I tend to say that a design is inherently safe if it uses multiple types of energy to achieve safety...
Otherwise, the design is stuck within the limitations of that single energy type.
Thus, more or less, I would say there is no inherently safe design within a single energy type ...

Cheers
 

Tidge

Trusted Information Resource
The problem is that we are not quite sure what kind of risk mitigation we can claim under this approach. The arguments within the team are:

Arguments in favour of regarding this measure as ‘inherent safety’:
-Preventing noisy/useless data from entering the algorithms that calculate the clinical information that the device is intended to provide is inherently safe, since no clinical information is presented on the screen and physicians would not make clinical decisions based on the information presented by the device.

Arguments disfavouring ‘inherent safety’:
-This measure is implemented through a software algorithm, and software should not be used to mitigate risks.
-If no information is presented to the user, then the decision-making process can be affected, resulting in a new risk.
How would you classify this approach? Would you say this is ‘inherent safety’ or just a ‘protective measure’?

I think it has been clarified that software absolutely can be used to mitigate risks.

One 'rule-of-thumb' that I suggest to teams performing risk analysis involving software is this: if the software risk control requires code to run, then it is almost certainly a protective measure in the design (PMD). If the software risk control does not require code to run, it is a candidate for inherent safety by design (IBD). In the example given above, the use of an 'algorithm' implies a PMD choice.

Possible examples of software risk controls that could be considered IBD:
1) 'Partitioning' software elements so that they do not use shared resources.
2) Specific choices of variable structures that avoid ambiguities (e.g. boolean vs. text strings).

Note that these are just examples of potential risk controls offered without any context of the potential risk(s) they may be controlling.
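To make example (2) concrete, here is a minimal Python sketch (all names are invented for the illustration): representing a state as an enumerated type instead of free text removes the ambiguity by construction, which is what makes this kind of choice a candidate for inherent safety by design rather than a protective measure.

```python
from enum import Enum

class AlarmState(Enum):
    ACTIVE = "active"
    SILENCED = "silenced"
    OFF = "off"

def render_alarm(state: AlarmState) -> str:
    # Unambiguous dispatch: a mistyped string ("ON", "on", "On ", "enabled"...)
    # cannot reach this function as a valid AlarmState in the first place.
    if state is AlarmState.ACTIVE:
        return "ALARM"
    if state is AlarmState.SILENCED:
        return "alarm (silenced)"
    return "ok"

print(render_alarm(AlarmState.ACTIVE))  # ALARM
```

No code has to run at the point of use to "check" the value; the design itself rules out the ambiguous states.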
 

sagai

Quite Involved in Discussions
I do not think software on its own can mitigate anything.
Any example to the contrary?

I cannot stop a fire with more fire ... it needs air, earth, water or something of another kind to do so.

Also, hoping not to be too pedantic, what does this refer to then?
IEC 62304:2006/AMD1:2015
"Probability of software failure shall be assumed 1"
"Only risk control measures external to the software system shall be considered"

Cheers
Saby
 

Tidge

Trusted Information Resource
I do not think software on its own can mitigate anything.
Any example to the contrary?

I cannot stop a fire with more fire ... it needs air, earth, water or something of another kind to do so.

Suppose the fire hazard leads to harm because the medical device includes heater coils and the user sets the length of time the coils stay on. Software could be implemented in the design to monitor the heat profile and shut down the coils when operating conditions are likely to start a fire.
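A minimal sketch of that heater example, with every name (the sensor and shutdown interfaces, the temperature limit) invented for the illustration; a real device would pair a software measure like this with an independent, non-software cutoff.

```python
MAX_SAFE_TEMP_C = 120.0  # hypothetical safe limit

def monitor_step(read_coil_temperature, shutdown_coils) -> bool:
    """One pass of the software protective measure.

    Returns True if it intervened by shutting the coils down.
    """
    if read_coil_temperature() >= MAX_SAFE_TEMP_C:
        shutdown_coils()
        return True
    return False

# Quick check with stubbed hardware:
events = []
monitor_step(lambda: 150.0, lambda: events.append("shutdown"))
print(events)  # ['shutdown']
```

Because this control only works while the code is running, it would be classed as a protective measure, not inherent safety by design.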

Also, hoping not to be too pedantic, what does this refer to then?
IEC 62304:2006/AMD1:2015
"Probability of software failure shall be assumed 1"
"Only risk control measures external to the software system shall be considered"

This is meant to emphasize the difference between estimates of failure for physical elements as opposed to software elements. For example: Given a century (or more) of standardized manufacturing methods, material choices and design elements it is possible to make a qualitative estimate of the probability of failure for a specific gear. Unlike gears (or power supplies, or threaded fasteners, or materials) specific software design solutions are (typically) not standardized, nor have they been subject to the rigorous analysis that would allow any meaningful estimate of the 'probability of failure'.

In 'plain English' the result is: If you don't know that it will work or how it might fail, you have to test it.
 

Marcelo

Inactive Registered Visitor
"Probability of software failure shall be assumed 1"
"Only risk control measures external to the software system shall be considered"

This is meant to emphasize the difference between estimates of failure for physical elements as opposed to software elements.

In fact, what we wrote in IEC 62304 relates to the fact that it's difficult to estimate software failures, and thus it's not good practice to rely on software; in this case, you should always estimate (unless there's a veryyy good rationale for why this is not the case) the software failure as always happening.

This was rewritten in the amendment to make it clear that the software failure is ONE event in the sequence of events, and it is not itself P1, P2 or the risk (which means that P1, P2 and the risk need not also be 1: if you have measures outside the software, the probability of P1, P2 or the risk can be less than one).
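To make that arithmetic concrete, here is an illustrative sequence-of-events calculation (every number except the first is made up): the software failure probability is assumed to be 1, yet an external risk control keeps the resulting probability of harm well below 1.

```python
# The software fault is ONE event in the sequence leading to harm.
p_software_failure = 1.0     # assumed per IEC 62304
p_hw_interlock_fails = 0.01  # hypothetical independent hardware interlock

# P1: probability that the hazardous situation occurs.
p1 = p_software_failure * p_hw_interlock_fails

# P2: probability of harm given the hazardous situation (made up).
p2 = 0.5

p_harm = p1 * p2
print(p_harm)  # 0.005
```

The assumed certainty of the software failure does not force P1, P2 or the overall probability of harm to be 1, because the other events in the sequence carry their own probabilities.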
 

Tidge

Trusted Information Resource
This was rewritten in the amendment to make it clear that the software failure is ONE event in the sequence of events, and it is not itself P1, P2 or the risk (which means that P1, P2 and the risk need not also be 1: if you have measures outside the software, the probability of P1, P2 or the risk can be less than one).

Could you clarify this last part? The last few posts have been discussing probability of failure, which presumably is bounded by zero and one.
 

Marcelo

Inactive Registered Visitor
Could you clarify this last part? The last few posts have been discussing probability of failure, which presumably is bounded by zero and one.
Yes, the probability of failure is bounded by zero and one, but the probability of failure is not the same as the probability of the risk - which is also bounded by zero and one (and this is what people usually confuse).

You can see my second comment on this thread: IEC 62304 Section 4.3(a) - 100% probability of failure for a more detailed explanation.
 

sagai

Quite Involved in Discussions
Looks like some motion is going on here, great! :)

Let me put some more firewood on here ;)

Testing proves that there are errors in the software; it does not prove the correctness of the software.

When there is an error in the software that testing did not reveal, that error will come out with 100% probability when its conditions are met.
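A toy illustration of this point (the dosing function and its defect are invented for the example): a test suite that never exercises the triggering condition passes, yet the failure is certain, not probabilistic, whenever that condition occurs.

```python
def dose_per_kg(total_dose: float, weight_kg: float) -> float:
    return total_dose / weight_kg  # latent defect: weight_kg == 0 unhandled

# A test suite that never tries weight_kg == 0 passes happily...
assert dose_per_kg(100.0, 50.0) == 2.0

# ...but once the condition occurs, the failure happens every time:
try:
    dose_per_kg(100.0, 0.0)
except ZeroDivisionError:
    print("defect triggered, as it always will under this condition")
```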

Regards
Saby
 