Quality Gates for software development

DarrellH

Registered
Looking for some advice please.

Agile s/w dev environment

In the simplest terms: our s/w is subject to huge amounts of testing during the dev process, yet we still see escapes into the wild.
The product is mission critical and may sit on shelves for a number of years before deployment, so the issues are not highlighted until long after the devs wrote and tested the code.
We measure the escapes to focus improvement activity, but this always results in yet more testing.
How do we identify issues earlier in the dev stage? Testing is only as reliable as the test design, so more testing is not the solution.

We need to deploy quality gates throughout the dev process

As I write this, I'm thinking about using DFMEA before sprints, as part of the sprint planning activity. Could a generic DFMEA be built up in a database to drive the testing strategy, developing over time by feeding reverse-FMEA lessons learned back into the generic FMEA? Am I making sense here? Could that work? Has anyone done this, and what did they learn?
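As a rough sketch of the kind of "living" generic DFMEA I have in mind (the failure mode, rankings, and update rule below are all made up for illustration - your own occurrence/detection scales would come from your FMEA standard):

```python
from dataclasses import dataclass


@dataclass
class FailureMode:
    """One line item in the generic DFMEA."""
    description: str
    severity: int    # 1 (minor) .. 10 (catastrophic)
    occurrence: int  # 1 (rare)  .. 10 (frequent)
    detection: int   # 1 (easy to catch) .. 10 (hard to catch)
    escapes: int = 0  # field escapes traced back to this mode

    @property
    def rpn(self) -> int:
        """Risk Priority Number = severity x occurrence x detection."""
        return self.severity * self.occurrence * self.detection


class GenericDFMEA:
    """A living DFMEA: escape data feeds back into the rankings."""

    def __init__(self) -> None:
        self.modes: dict[str, FailureMode] = {}

    def add_mode(self, key: str, mode: FailureMode) -> None:
        self.modes[key] = mode

    def record_escape(self, key: str) -> None:
        """Reverse-FMEA step: an escape shows occurrence and detection
        were ranked too optimistically, so bump both (capped at 10)."""
        m = self.modes[key]
        m.escapes += 1
        m.occurrence = min(10, m.occurrence + 1)
        m.detection = min(10, m.detection + 1)

    def sprint_checklist(self, top_n: int = 3) -> list[str]:
        """Highest-RPN modes become review items at sprint planning."""
        ranked = sorted(self.modes.values(), key=lambda m: m.rpn, reverse=True)
        return [m.description for m in ranked[:top_n]]


# Illustrative usage: seed the generic DFMEA, then feed an escape back in.
fmea = GenericDFMEA()
fmea.add_mode("overflow", FailureMode("Integer overflow in mission timer",
                                      severity=9, occurrence=2, detection=4))
fmea.record_escape("overflow")  # occurrence -> 3, detection -> 5
```

The point of the sketch is the feedback loop: each escape raises the ranking of its failure mode, so the next sprint's checklist automatically leads with the modes that have actually bitten you.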
 

yodon

Leader
Super Moderator
You sound like you have some data. What are the failure types, and what is the earliest point at which each could have been caught?

As I'm sure you're aware, you can't test quality in, so you need to figure out what you can do upstream to prevent the problems.
 

Tidge

Trusted Information Resource
As I write this, I'm thinking about using DFMEA before sprints, as part of the sprint planning activity. Could a generic DFMEA be built up in a database to drive the testing strategy, developing over time by feeding reverse-FMEA lessons learned back into the generic FMEA? Am I making sense here? Could that work? Has anyone done this, and what did they learn?

In some ways, all professionals who seek to improve learn by "re-fighting the last battle", so in principle having a common source of experienced defects is quite natural. A full-fledged DFMEA is probably not required; rather, I'd suggest incorporating something like a checklist of known problems (which would live in this 'generic' DFMEA) as part of each sprint phase. The trick is to not shackle the sprint team.
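One lightweight way to avoid shackling the team is to tag each checklist item by area, so a sprint only sees the items that overlap the code it actually touches. The items and tags here are invented purely as an illustration:

```python
# Hypothetical checklist entries drawn from a 'generic' DFMEA,
# each tagged with the areas of the product it applies to.
CHECKLIST = [
    {"item": "Bounds-check all buffer writes", "tags": {"parser", "io"}},
    {"item": "Timeout on every blocking call", "tags": {"network"}},
    {"item": "Validate units on sensor inputs", "tags": {"sensors"}},
]


def checklist_for_sprint(sprint_tags: set[str]) -> list[str]:
    """Return only the checklist items relevant to this sprint's scope."""
    return [entry["item"] for entry in CHECKLIST
            if entry["tags"] & sprint_tags]
```

A sprint touching networking and UI code, for example, would be asked about timeouts and nothing else - the rest of the checklist stays out of the team's way.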

In my own experiences, the output of "agile sprints" is often indistinguishable from "lines of code, written."(*1) For me it is rare that a sprint starts with a clear, focused objective with a well-understood amount of work inside a defined architecture, and ends with testing of the work done inside the unit and at the (previously identified) interfaces. Presumably not everything that would end up in this "DFMEA" would be applicable to every sprint. My attitude is to treat each sprint like a mini-waterfall model focused on the unit of work. My (quality) objective is to keep people from wanting to waste time by investigating units that have already been developed and tested.

(*1) One natural, understandable, simplistic response from a sprint team that I strongly dislike: "I am still writing (unit) code." I can only think of two times in decades of work where a sprint team reported the reason for delay as "I am still testing the (unit) code."
 