Calibration Intervals (Frequency) derived from Variables Data

BradM

Leader
Admin
Re: Calibration Intervals derived from Variables Data

Sorry BradM. A short interval meaning more $$ is not a win-win. Step back and look at the big picture.

Company A voltmeter has a recommended calibration interval of 1 year. Company B voltmeter has a recommended calibration interval of 2 years. Both meters have the same tolerance and can be used for our application. Guess which one I’m going to buy. So while company A makes short interval dollars they are losing market share to company B.

This is where my calibration interval analysis is going. By identifying the current calibration interval design limitations, we can redesign to improve our product towards longer calibration interval and put the competition out of business. Well at least make them uncomfortable.

Excellent job. And agreed :agree1:

Realize my previous post was Devil's Advocacy, and a bit tongue-in-cheek. Note my statement about setting objective evaluation aside. I was implying that most companies out there (IMO) are approaching it this way. I believe many do not have rational approaches to establishing frequency intervals. Also, more of the population responds positively to price and stated accuracy. Few have the knowledge (or the tools, or the desire) to critically analyze the accuracy, calibration methodology, etc. As Marc suggested, there are so many variables to the same instrument, it would be difficult to do. Unless... a robust model can be utilized.
 

John Nabors - 2009

Re: Calibration Intervals derived from Variables Data

I came up with a system several years ago that is completely arbitrary but has served me well. If an instrument has needed adjustment within two calibration intervals I reduce the interval by 50%. If it passes through 4 intervals without requiring adjustment I increase it by 50% and then review it after four more calibration intervals to see if I can extend it further.

Not very scientific, but it has worked for me.
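For anyone who wants to try it, here is a rough Python sketch of that rule as I read it (my own variable names; it assumes a per-calibration record of whether adjustment was needed):

```python
def adjust_interval(interval_weeks, needed_adjustment):
    """Sketch of the rule above: shorten 50% if adjustment was needed within
    the last two calibrations, lengthen 50% after four clean calibrations.
    `needed_adjustment` is a list of booleans, oldest first, newest last."""
    if any(needed_adjustment[-2:]):                 # adjusted within last 2 intervals
        return interval_weeks * 0.5
    if len(needed_adjustment) >= 4 and not any(needed_adjustment[-4:]):
        return interval_weeks * 1.5                 # 4 clean intervals in a row
    return interval_weeks                           # otherwise leave it alone

# Example: 52-week interval, last four calibrations needed no adjustment -> 78 weeks
print(adjust_interval(52, [False, False, False, False]))
```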
 

BradM

Leader
Admin
Re: Calibration Intervals derived from History and Use

I think for most non-critical (lives do not depend upon it) applications, which is the case in many companies, measurement equipment calibration cycle time should be looked at in terms of calibration history. My 'rule of thumb': If the instrument keeps coming back in calibration without adjustment, lengthen frequency. If adjustment is necessary but the instrument is within its tolerance, the frequency is probably about right. And, of course, if it comes back having needed adjustment AND was out of tolerance, shorten the cycle. Note that my 'rule of thumb' is general. For example, if a review of the calibration history for the device shows that it was stable until a certain point in time (coming back in calibration without adjustment, or minimal adjustment is necessary but the instrument is within its tolerance), one should be looking at the integrity of the instrument (for example, is it wearing out?).
I think that is an excellent system, and would serve most fairly well. Basically you have three decision criteria: 1) Failed calibration, 2) passed calibration (but with adjustment), 3) passed calibration (no adjustment). 1 and 3 are fairly consistent; 2 is my interest. Does the vendor always adjust, never adjust unless OOT, adjust at 50%? If I had a little more of a robust system to take the percentage and estimate a frequency, that would be useful.
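One rough way to turn that percentage into a frequency (a sketch only, assuming a simple exponential reliability model rather than anyone's official method): if end-of-period reliability decays as R(t) = exp(-λt), then the interval that hits a target reliability is the current interval scaled by ln(R_target)/ln(R_observed).

```python
import math

def rescale_interval(current_weeks, observed_reliability, target_reliability=0.90):
    """Assuming reliability decays exponentially, R(t) = exp(-lambda*t),
    the interval that yields the target end-of-period reliability is the
    current interval scaled by ln(R_target) / ln(R_observed)."""
    if not (0 < observed_reliability < 1 and 0 < target_reliability < 1):
        raise ValueError("reliabilities must be strictly between 0 and 1")
    return current_weeks * math.log(target_reliability) / math.log(observed_reliability)

# Example: 52-week interval, 80% observed in-tolerance, 90% target -> about 25 weeks
print(round(rescale_interval(52, 0.80, 0.90)))
```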

Now, let's take another scenario I'd like feedback on. A company has 20 digital multi-meters. There are 5 in the calibration laboratory and each is used approximately 5 times a day, 2 are kept by product engineers whose use is not able to be tracked, 13 are used on the line and each is used at 15-minute intervals on each of 2 shifts, and 5 of the 13 are also used on a 3rd shift (same 15-minute interval scenario). Not relying on calibration history, how would one set a calibration cycle for each?

Good one. Why would you not want to at least consider calibration history? Is it not available? I’m saying whether you go off your rule of thumb, or the most sophisticated system available, any realistic forecast should start with historical performance (IMO).
 

rdragons

Re: Calibration Intervals derived from Variables Data

ALGORITHMIC METHODS
Other methods utilize simple to complex decision algorithms to adjust calibration intervals in response to in-tolerance or out-of-tolerance conditions observed during calibration. Typically, these approaches consist of instructions to lengthen or shorten calibration intervals in response to current or recent observations. Because of their nature, these methods are labeled algorithmic methods. Algorithmic methods have achieved wide acceptance due to their simplicity and low cost of implementation. However, most algorithmic methods suffer from several drawbacks.

The following list is fairly representative:

1. With most algorithmic methods, interval changes are in response to small numbers (usually one or two) of observed in-tolerance or out-of-tolerance conditions. It can be easily shown that any given in-tolerance or out-of-tolerance condition is a random occurrence. Adjusting an interval in response to small numbers of calibration results is, accordingly, equivalent to attempting to control a process by adjusting to random fluctuations. Such practices are inherently futile.

2. Algorithmic methods make no attempt to model underlying uncertainty growth mechanisms. Consequently, if an interval change is required, the appropriate magnitude of the change cannot be readily determined.

3. Algorithmic methods cannot be readily tailored to prescribed reliability targets that are commensurate with quality objectives. The level of reliability attainable with a given algorithmic method can be discovered only by trial and error or by simulation.

4. If an interval is attained that is consistent with a desired level of reliability, the results of the next calibration or next few calibrations will likely cause a change away from the correct interval. To see that this is so, consider cases where reliability targets are high, e.g., 90%. For a 90% target, if the interval is correct for an item, there is a 0.9 probability that it will be observed in-tolerance at any given calibration. Likewise, there is a 0.81 probability that it will be observed in-tolerance at two successive calibrations. With most algorithmic methods, such observations will cause an adjustment away from the item’s current interval. Thus, algorithmic methods tend to cause a change away from a correct interval in response to events that are highly probable if the interval is correct.

5. With algorithmic methods, although a correct interval cannot be maintained, a time-averaged steady-state measurement reliability can be achieved. The typical time required ranges from fifteen to sixty years.

6. With algorithmic methods, interval changes are ordinarily computed manually by calibrating technicians, rather than established via automated methods. Accordingly, operating costs can be high.
Quote from Dr. Castrup
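As a quick check on item 4 of the quote, the arithmetic for successive in-tolerance results when the interval is already correct for a given reliability target is just a power:

```python
# Probability of n successive in-tolerance results when the interval is
# already correct for a reliability target R (item 4 in the quote above).
R = 0.90
for n in (1, 2, 3):
    print(n, round(R ** n, 3))   # 0.9, 0.81, 0.729
```

So the observations most likely to occur when the interval is right are exactly the ones that trigger a change under most simple algorithms.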

In the metrology world it is the user of the product who has the responsibility to establish the calibration interval based on their usage, which conflicts with the fact that the manufacturer of the product has the largest database of information available to establish calibration intervals.

I think the dilemma is “instrument” vs. “instruments”. If I have one instrument and want to adjust its cal interval, the only method that will work is an algorithmic method. “John Nabors” has a good one, but once the average is found, a cycle of 4 good cals and two OOT cals is a reliability of 66.7% (4 of 6), which means it is returned out of cal 33.3% of the time. There just isn’t enough data to work with.
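To put a number on “not enough data”, here is a sketch of an exact binomial confidence interval on 4 in-tolerance results out of 6 (assumes SciPy is available):

```python
from scipy.stats import beta

def clopper_pearson(k, n, conf=0.95):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    alpha = 1 - conf
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# 4 in-tolerance out of 6 calibrations: point estimate 66.7%, but the 95%
# interval is roughly 22% to 96% -- far too wide to set an interval from.
print(clopper_pearson(4, 6))
```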

I have a large group of instruments, and if you refer back to the last graph in plot03 you will note the average is 125 weeks, the dogs are 69 weeks, and the gems are 180 weeks. I don’t currently know if this is random with increasing uncertainty over time or if there really are gem instruments that will go for 180 weeks. If it’s random, “John Nabors’” algorithmic method will be constantly searching a 69 to 180 week window. If there really are gems in the group and “John Nabors” owns one, the algorithmic method may save some money.
 

rdragons

Re: Calibration Intervals derived from Variables Data

The variables calibration interval analysis does work, but it’s not as good as RP-1 S2. Set to 99% confidence bands, you can get an estimated threshold for when a variable will go out of tolerance. It suffers from the same resolution issue that occurs with the calculation of Cpk: if the resolution of the measurement is such that it can’t estimate a standard deviation, then its prediction is questionable. It also suffers greatly from the effects of outliers, which can shift the prediction by as much as four months. The residual sum of squares is used to select the best-order fit, and in many cases the fits just look wrong, so many liberties were taken with outliers and order of fit. It also suffers from the effects of non-normal distributions. But it does a good job of filtering MTE random fails to get at the underlying trend over time, and you can make a spreadsheet listing variables with weeks to the OOT threshold, which does provide a feel for which variables will go OOT beyond the current calibration interval.
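For what it's worth, here is a bare-bones stand-in for the kind of fit described above (my own sketch, not the actual variables method or RP-1): fit polynomial drift against time, pick an order by residual sum of squares per degree of freedom (a crude penalty; raw RSS always favors the highest order), then walk forward to where the fit plus an approximate 99% band crosses the tolerance limit. The data and tolerance are made up.

```python
import numpy as np

def weeks_to_threshold(weeks, readings, tol_limit, max_order=3, z=2.576):
    """Fit drift vs. time, then return the first week at which the fitted
    drift plus an approximate 99% band (z * residual std dev) reaches the
    tolerance limit; None if it stays inside the look-ahead window."""
    weeks = np.asarray(weeks, float)
    readings = np.asarray(readings, float)
    best = None
    for order in range(1, max_order + 1):
        coeffs = np.polyfit(weeks, readings, order)
        rss = float(np.sum((np.polyval(coeffs, weeks) - readings) ** 2))
        dof = max(len(weeks) - (order + 1), 1)
        if best is None or rss / dof < best[0]:
            best = (rss / dof, coeffs)            # residual variance, fit
    resid_var, coeffs = best
    s = np.sqrt(resid_var)                        # residual standard deviation
    for t in np.arange(weeks[-1], weeks[-1] + 520.0):   # look ahead ~10 years
        if abs(np.polyval(coeffs, t)) + z * s >= tol_limit:
            return float(t)
    return None

# Hypothetical drift: readings every 10 weeks creeping toward a +/-0.5 tolerance
t = np.arange(0, 100, 10)
y = 0.002 * t + np.random.default_rng(1).normal(0, 0.01, t.size)
print(weeks_to_threshold(t, y, tol_limit=0.5))
```

Outliers and non-normal residuals will distort this just as described above; it is only meant to show the mechanics.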

It's report time, and the next question: in a calibration interval analysis, should broken equipment returned for repair by a customer be considered a failed incoming calibration? There seems to be some confusion between the reliability of broken equipment and the reliability of calibration OOT. I tend to think of them as separate issues; is there an industry standard?
 

Ruebenn

Dear Distinguished people of the forum,

I am a makeshift calibration engineer (from the instrument service and repair dept) and I am still green to all these calibration terms, definitions, and so on.
I was redesignated to the RF/Microwave calibration team and find myself facing the arduous test of being audited by the 17025 accreditation body.
We are scheduled to be audited by the 17025 body this July.
We are accredited in AC/DC measurement and now we are trying for RF/Microwave measurement as well.
It is an extension to our capabilities... does that mean that I can use the same quality manual and laboratory manual for the RF calibration as well?
I am trying to get used to our standards in the RF/Microwave lab, but I am unclear when it comes to the MU calculation and the quality manuals that go along with it.
I require some inputs on coming up with the MU calculations for power measurements... levels in dBm and in ppm as well.
If you have any materials pertaining to RF/Microwave MU budgets, please do let me know.
Appreciate the help.

Rgds
Ruben
 

Hershal

Metrologist-Auditor
Trusted Information Resource
Depending on the frequency and what type of conduit you use, there could be many factors.....some of the common ones are:

Uncertainty on your equipment, as reported by the accredited lab that cal'd them

VSWR (Voltage Standing Wave Ratio), or how much reflection you have (see the sketch after this list)

RH (Relative Humidity), below 20% you may have influences from static charges in the ambient environment

Surface loss of the conduit

There will be other factors also.....but these are common ones.
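To put rough numbers on the VSWR item, here is a minimal sketch (my own example values, not a complete budget) of how mismatch uncertainty is often estimated from the reflection coefficients and then combined with other components by root-sum-square:

```python
import math

def gamma(vswr):
    """Reflection coefficient magnitude from VSWR."""
    return (vswr - 1.0) / (vswr + 1.0)

def mismatch_std_uncertainty_db(vswr_source, vswr_load):
    """Mismatch limits in dB are 20*log10(1 +/- Gs*Gl); dividing the larger
    limit by sqrt(2) treats it as a U-shaped distribution and gives a
    standard uncertainty (a common convention in RF power budgets)."""
    g = gamma(vswr_source) * gamma(vswr_load)
    limit_db = max(abs(20 * math.log10(1 + g)), abs(20 * math.log10(1 - g)))
    return limit_db / math.sqrt(2)

# Made-up budget in dB: reference sensor cal uncertainty (quoted at k=2, so /2),
# mismatch between a 1.25:1 source and a 1.40:1 sensor, and a type A
# repeatability term; combine by root-sum-square and expand with k=2.
components = [0.040 / 2, mismatch_std_uncertainty_db(1.25, 1.40), 0.010]
u_c = math.sqrt(sum(u ** 2 for u in components))
print("expanded uncertainty (k=2):", round(2 * u_c, 3), "dB")
```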

Hope this helps.

Hershal
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
I like the work stated. You run the risk with dimensional contact measuring devices of missing special causes - such as dropped gages - if you run all the way to the period that approaches the tolerance within (1-a) confidence level. If the period is 5 years, and you drop the gage in 30 days, you have 4 years and 11 months of suspect product. Nasty risk. The linear regression is a good approach for dimensional contact measuring devices. It works because the fundamental variation should be wear - a uniform distribution. The measurement error about that wear rate will be a normal distribution - but if the calibration standard and technique are appropriate, it should be statistically insignificant relative to the wear rate for the technique to be accurate.
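A small simulation of that point, with made-up numbers: linear wear plus normally distributed measurement noise, fitted with ordinary least squares, to check that the noise really is small next to the wear accumulated over the period.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated gage checks: 0.4 um of wear per month plus 0.2 um of normally
# distributed measurement error (all values invented for illustration).
months = np.arange(0, 25)
size_um = 100.0 - 0.4 * months + rng.normal(0, 0.2, months.size)

slope, intercept = np.polyfit(months, size_um, 1)     # wear rate, starting size
residual_sd = np.std(size_um - (slope * months + intercept), ddof=2)

print(f"estimated wear rate: {slope:.2f} um/month")
print(f"measurement noise:   {residual_sd:.2f} um vs. {abs(slope) * 24:.1f} um of wear over 24 months")
```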
 

rdragons

You run the risk with dimensional contact measuring devices of missing special causes - such as dropped gages - if you run all the way to the period that approaches the tolerance within (1-a) confidence level. If the period is 5 years, and you drop the gage in 30 days, you have 4 years and 11 months of suspect product. Nasty risk.

If your company adjusts calibration intervals to reduce the risk caused by dropped gages, it means that you are always producing suspect product, because you cannot predict when the gage is dropped. No risk here; you’re 100% guaranteed to always ship suspect product, sometimes more, sometimes less. Put everything on a 1-day cal interval to avoid the risk, and if it’s dropped an hour after calibration you’ve still shipped 23 hours of suspect product.

The calibration interval is for “typical equipment usage”.
It is ultimately the “user’s” responsibility to know if their instrument is out of tolerance.

As a manufacturer we sell instruments to customers, we are not responsible for customer dropped product and run no risk. Same for inside the company, production floor personnel have the responsibility to know their instrument is out of tolerance, not Metrology. Metrology runs no risk.

I could fix your implied scenario by adding a second final inspection to the process, performed with new personnel and a separate instrument. But this fix does not address the possibility that there would then be two teams playing catch over the noon hour with the gages. Or sabotage, in which case both gages will be abused in less than 30 days. This fix is called Tampering - taking action based on the belief that a common cause is a special cause. The tendency to take action often leads to action without reason, which causes more problems than it fixes. Dr. Deming stated that most variation (97% plus) was common cause variation, not due to special causes. Tampering can also be considered a form of variation.

Our “typical equipment usage” does not include dropped gages. One might want to consider: incompetent employee, insufficient training, inappropriate procedures, absence of clearly defined standing operating procedures, inexperience, and sabotage. All are classified as “Common Cause”, not special cause (Wikipedia).

I am very pleased to be able to say that I have witnessed our production personnel delivering gages to Metrology to have them “checked” because they were dropped.

Managing common cause risk is a management function not a calibration interval function.
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
I am glad you agree the risk exists and needs to be managed. I never said it had to be managed by the calibration period - but I will say if it is not properly managed, then your calibration period will be useless - no matter how long it is. :cool:
 