Sampling strategies/techniques for software QA

basker_1957

Registered
Hi,

Are there any published industry standards (or guidelines) on sampling strategies for software QA? A representative use case is measuring compliance with a code review process: how do I determine how many code review records I need to examine to be able to say that compliance with the review process is x% with a margin of error of y%? I've generally used a sample size of 10-15% (which seems to be a widely used figure), but I'd like to be able to cite a standard or guideline that would give this approach more validity.
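
For concreteness, the only textbook approach I know of is the normal-approximation sample size for a proportion, with a finite population correction. Here's a rough sketch of that calculation (the 95% z-value and the example numbers are my own placeholders, not anything taken from a standard):

```python
import math

def sample_size_for_proportion(population, margin_of_error, p=0.5, z=1.96):
    """Records to sample so the observed compliance rate is within
    +/- margin_of_error at ~95% confidence (z=1.96 is an assumption).
    p=0.5 is the worst case when the true rate is unknown."""
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Finite population correction: shrink n when drawing from a
    # small pool of review records.
    n_adj = n / (1 + (n - 1) / population)
    return math.ceil(n_adj)

# e.g. 500 review records, +/-5% margin of error
print(sample_size_for_proportion(500, 0.05))  # -> 218 records
```

Run against, say, 500 records at a +/-5% margin of error, this asks for 218 records, which is well above the 10-15% rule of thumb; hence my question about what the standards actually recommend.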

Thanks in advance!
 

yodon

Leader
Super Moderator
I've never heard of sampling for software QA. Everywhere I've been and everything I've seen, all the data are used.

I'm curious how many data points you have that make you feel sampling is needed.

I'm also curious how you would use the compliance-to-code-review percentage (irrespective of sampling). Will it be used in conjunction with other metrics?
 

JenniD

Registered
It sounds like you're conducting an internal audit of a process issue, rather than using sampling to inspect software outputs?

If you're sampling a process, I would suggest sticking to the record sampling methods outlined in audit standards, irrespective of whether the records relate to the software domain. Maybe a better question: is there no way to enforce conformance to the review process, i.e., you're not allowed to merge code without a review by someone with the right authority? (See the sketch below.)
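
If the code lives on a hosted platform, enforcement is usually a branch-protection setting (e.g., GitHub's "require pull request reviews before merging"), and then you can check 100% of the records after the fact instead of sampling. A minimal sketch using GitHub's standard REST endpoints; the owner/repo names and token handling are placeholders:

```python
import requests

# Hypothetical repo and token; the endpoints are GitHub's standard REST API.
OWNER, REPO = "acme", "widget"
HEADERS = {"Authorization": "Bearer <token>"}
API = f"https://api.github.com/repos/{OWNER}/{REPO}"

def merged_prs_without_approval(per_page=100):
    """Return merged PR numbers that have no APPROVED review.
    (Sketch only: fetches a single page of closed PRs.)"""
    offenders = []
    prs = requests.get(f"{API}/pulls", headers=HEADERS,
                       params={"state": "closed", "per_page": per_page}).json()
    for pr in prs:
        if not pr.get("merged_at"):
            continue  # closed without being merged
        reviews = requests.get(f"{API}/pulls/{pr['number']}/reviews",
                               headers=HEADERS).json()
        if not any(r["state"] == "APPROVED" for r in reviews):
            offenders.append(pr["number"])
    return offenders
```

With something like this in place, "compliance" stops being an estimate with a margin of error and becomes a complete count.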
 

Tidge

Trusted Information Resource
Specific to code reviews of (product) software against an established coding standard: I have always had each code review assess every identified deviation from the coding standard.

I have always done this specifically so that we have the raw data for evaluating either or both of the following:
  1. The technical ability of our (individual, collective) programmers to follow the coding standard
  2. The appropriateness of discrete elements of the coding standard
EDIT: I should add that much of my recent focus has been specifically on (product) software for medical devices. In this field we track all anomalies from the software development process to support the launch of the device.

In practice, I have only ever seen such data lead to revisions of the coding standard. In my judgement, those revisions have always been in the area of 'pet peeves' rather than actual improvements to the standard. (I always think about the difference between a 'computer scientist' and a 'computer artist'.) For example, we had a coding standard with a strict requirement on "blank spaces vs. TABs" that was cluttering up the anomalies list for code reviews; that criterion eventually got dropped.

If the question is strictly an (internal) audit assessing "how well are we following our established code review process?", any standard process-audit approach can be followed: scope/objectives, plans, criteria, records (from both source records and the systems those records feed into, like NCR, CAPA, and deviations), and audit reports.

Depending on the scope and criteria, you may need technical resources. Assessing the effectiveness of the code reviews themselves can be tricky, because the 'denominator' (for lack of a better word) in establishing study designs can be extremely confusing. Even a simple metric like 'lines of code' (LOC) can muddy the studies. For example: a lot of code can be generated automatically and is unlikely to violate the coding standard, so a sampling plan that includes such code would be highly unlikely to expose defective code reviews. (See the sketch below.)
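
To make the denominator point concrete: a sampling frame that blindly includes generated files will mostly draw reviews of code that can't violate the standard. A minimal sketch of filtering the frame first; the path patterns and record structure are made up for illustration:

```python
import fnmatch
import random

# Hypothetical patterns for auto-generated code; adjust to your codebase.
GENERATED = ["*_pb2.py", "gen/*", "*.designer.cs"]

def is_generated(path):
    return any(fnmatch.fnmatch(path, pat) for pat in GENERATED)

def sampling_frame(review_records):
    """Keep only review records that touch at least one hand-written
    file, so the sample can actually expose defective reviews."""
    return [r for r in review_records
            if not all(is_generated(f) for f in r["files"])]

def draw_sample(review_records, n, seed=0):
    frame = sampling_frame(review_records)
    random.seed(seed)  # reproducible draw for the audit record
    return random.sample(frame, min(n, len(frame)))
```

The point is simply that the frame, not just the sample size, determines whether the audit can find anything.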
 