Calibration Intervals (Frequency) derived from Variables Data


rdragons

Re: Calibration Intervals derived from Variables Data

Found it. My MRbar/d2 alternative to Castrup's math works. I believe my solution to the growth of standard deviation with time is better validated than the one proposed in Castrup's white paper. d2 for a sample size of two is 1.128379, and since the table doesn't go any lower, someone once told me to use 1.128379 for a sample size of one. WRONG! WRONG! WRONG!

I had to go back to basics. MRbar is the average of a one-sided normal distribution. To convert MRbar to a standard deviation, divide by 0.67449 (d2 for a sample size of one). Then multiply that result by 1.6449 to get the boundary for a one-tail 95% confidence.
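
In code form, the conversion looks like this (a sketch with made-up readings, not the actual data from the attachment):

```python
import numpy as np

# Hypothetical as-found drift readings (illustrative numbers only)
readings = np.array([0.12, 0.15, 0.11, 0.18, 0.14, 0.20, 0.16])

mr = np.abs(np.diff(readings))   # moving ranges between successive readings
mrbar = mr.mean()                # MRbar: average moving range

sigma = mrbar / 0.67449          # the conversion above: MRbar -> standard deviation
boundary = 1.6449 * sigma        # one-tail 95% confidence boundary

print(f"MRbar = {mrbar:.4f}  sigma = {sigma:.4f}  95% boundary = {boundary:.4f}")
```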

The first big graph shows the MR (Moving Range) points with a smoothing function (ksmooth, 15 points). That's the curve weaving up and down; it's a moving average. A line function returns the slope and intercept for the average. The ksmooth and line results agree at the two heavy clusters of data. I'm very pleased with this result. line × 1.6449/0.67449 is the boundary for 95% confidence as a function of time.

There are 17 points outside the estimated confidence boundary: 17/267 = 6.4%. A normal distribution would give 5%, but this is real data, and the next graph shows some skew on the lower side. Once again I am pleased with the agreement.
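
Roughly, the whole procedure in Python (simulated data standing in for the real variables data; the kernel bandwidth plays the role of the 15-point ksmooth window):

```python
import numpy as np

# Simulated stand-in data: t = elapsed days, mr = moving ranges at those times
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 3 * 365, 267))
mr = 0.05 + 1e-4 * t + np.abs(rng.normal(0, 0.03, t.size))

def ksmooth(x, y, bandwidth):
    """Nadaraya-Watson smoother with a normal kernel (a ksmooth analogue)."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

smooth = ksmooth(t, mr, bandwidth=30.0)   # the curve weaving up and down

# 'line' analogue: least-squares slope and intercept for average MR vs time
slope, intercept = np.polyfit(t, mr, 1)
line = intercept + slope * t

boundary = line * 1.6449 / 0.67449        # one-tail 95% boundary vs time

outside = int((mr > boundary).sum())
print(f"{outside} of {mr.size} points outside = {outside / mr.size:.1%}")
```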

Now that I have accomplished this for a normal distribution, I realize I can derive factors to convert MRbar to chosen Reliability Targets for non-normal distributions.
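
One way such factors could be derived is by simulation: sample the chosen error distribution, compute its average moving range, and take the ratio of the desired Reliability Target quantile to MRbar. A sketch (note this uses the raw successive-difference MRbar, so the normal case comes out near 1.46 rather than the 1.6449/0.67449 factor above; the convention has to match how the template computes MRbar):

```python
import numpy as np

rng = np.random.default_rng(42)

def reliability_factor(sampler, target=0.95, n=200_000):
    """Factor such that MRbar * factor estimates the one-tail
    reliability boundary for the sampled error distribution."""
    x = sampler(n)
    mrbar = np.abs(np.diff(x)).mean()        # simulated average moving range
    return np.quantile(x, target) / mrbar    # target quantile per unit of MRbar

normal_factor = reliability_factor(lambda n: rng.normal(0.0, 1.0, n))
skewed_factor = reliability_factor(lambda n: rng.lognormal(0.0, 0.5, n))
print(f"normal: {normal_factor:.3f}  lognormal: {skewed_factor:.3f}")
```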

The last graph is the model selected by residual sum of squares, with one-tail 95% Reliability Target boundaries. The origin is corrected for Type B Expanded Uncertainty.
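
The model selection step, sketched with hypothetical candidate forms and simulated data (the actual models in Plot03 may differ):

```python
import numpy as np

# Simulated drift-vs-time data (hypothetical)
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(1.0, 1000.0, 267))
y = 0.02 * np.sqrt(t) + rng.normal(0.0, 0.05, t.size)

# Candidate drift models as design matrices for least squares
candidates = {
    "linear":      np.column_stack([t, np.ones_like(t)]),
    "square root": np.column_stack([np.sqrt(t), np.ones_like(t)]),
    "logarithmic": np.column_stack([np.log(t), np.ones_like(t)]),
}

best_name, best_rss = None, np.inf
for name, X in candidates.items():
    coef, resid, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(resid[0])                    # residual sum of squares
    print(f"{name:12s} RSS = {rss:.4f}")
    if rss < best_rss:
        best_name, best_rss = name, rss
print("selected model:", best_name)
```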

Validity?

Calibration Intervals from Variables Data: this one-variable analysis has 5 fail data points out of 267, or 1.87%.

S2 Intervals Analysis: I have looked at 93 calibrations, with 2 failed by the above variable, or 2.2%.

I now have templates for predicting calibration intervals from S2 and variables data. I have 177 more calibrations and 312 more variables to look at, and I am curious how well the two methods will agree. Tomorrow: calibration crunching.

I am still finding improvements that can be made to this template. Can you understand the last page? Does it have the pertinent information on it? I think I will add sample size. If you have any suggestions, please comment.
 

Attachments

  • Plot03.doc
    131.5 KB

Hershal

Metrologist-Auditor
Trusted Information Resource
Re: Calibration Intervals derived from Variables Data

Just out of curiosity, in what circumstances is such precision in determining calibration intervals necessary?

Money.

The military, and the Navy/Marine Corps in particular, have been able to save tens of millions of dollars over the last few decades by continuously studying that.

Hershal
 

Marc

Fully vaccinated are you?
Leader
Re: Calibration Intervals derived from Variables Data

Yes, I understand the money aspect. I was more looking for specific situations where a lot of money would be saved. For example, I wouldn't think a metal stamping house using a CMM typically would need that level of precision to determine calibration frequency for the CMM.
 

BradM

Leader
Admin
Re: Calibration Intervals derived from Variables Data

Good question, Marc. This is my take:

First, I am assuming that you have a calibration program with access to the appropriate historical numbers. Now, say I have six hundred (or more) instruments a year that I have to send out. We all understand the costs associated with sending instruments out. So I have the applicable information to use such a tool.

So, if I have a defensible tool to help me establish a more reliable calibration interval than, say, six months or one year, it would be feasible for me to use such a tool. I have compounded my savings: lower costs associated with sending instruments out, and lower costs associated with calibration failure. I can use this not only to decrease the frequency, but also to increase it.
 

rdragons

Re: Calibration Intervals derived from Variables Data

Marc: Read the last paragraph.

https://news.minnesota.publicradio.org/features/2003/08/28_zdechlikm_wellstonesettle/

or, about the VOR meter:

https://news.minnesota.publicradio.org/features/2003/03/03_zdechlikm_wellstone/

That VOR signal was calibrated by a technician with some company's instrument. You get a feel for the money that can be involved over and above the yearly metrology department's budget. Marc, this is getting scary. Your question prompted a search on '"out of tolerance" litigation' and I found the above articles. Both products I have been asked to do this analysis on are capable of calibrating VOR. How about lives instead of money?

Hershal: If there are analysis methodologies similar to S2 or variables analysis available from the Navy, where do I find them? You talked about downloading.

I still request feedback on the suitability of Plot03.
 

Marc

Fully vaccinated are you?
Leader
Re: Calibration Intervals derived from Variables Data

Marc: Read the last paragraph....about VOR meter

Point well taken. As a 'former' pilot I am well aware of aspects such as this. However, the case you cite doesn't address calibration frequency. I know lots of places that end up using equipment when it is known to be late for calibration, for example, and at times equipment that is KNOWN to be out of tolerance. The article you linked to says "Federal officials tested that VOR signal and found it was slightly out of tolerance after the crash." It does not address the calibration frequency of the instrument used.

How about lives instead of money?

You can't nail me on that. If you're doing aluminum casting of an alternator case and you're off a bit, the statistical probability of an out-of-tolerance condition causing loss of life is (I would bet) zero.

Good question, Marc. This is my take....

Yup. I guess what I was getting at is there are situations where such a detailed analysis isn't necessary. In other situations such a level of analysis is not just important, it is critical.

I'm not trying to scare anyone. I bring this up because over the years I have worked with so many different companies, and there are so many unique situations. Because so many people come here from all over the world, from big corporations and governments to 4-person mom-and-pop shops, I wanted to inject a bit of 'common sense' here. A contrast, if you will, so that smaller shops don't take this as a 'Must Do' situation: everyone should look to their specific company and assess what they need.

I had one client, for example, that had 4 measurement devices. One was a scale to weigh pallets for shipping purposes. I have had other clients, like the old Motorola semiconductor sector, which had thousands. One client made elevator cabs, and there were almost no critical measurements (+/- 1/4 inch was the typical tolerance). In short, I'm throwing some balance into the discussion, I think.

I do want to say this is a most excellent discussion thread. It's the kind that makes this forum exciting for me. Hershal's participation is especially appreciated because he is one of those rare expert metrologists: he has seen so much, and is so knowledgeable, that I really value his input.

As an FYI, I 'cut my calibration teeth' in military manufacturing of various avionic and nuclear submarine assemblies (from communication, control and navigation LRUs {line replaceable units} to hermetic bulkhead connectors) at what was then Cincinnati Electronics. The calibration laboratory at CE had very precise standards, for obvious reasons. For example, resistance standards were kept in liquid baths in a room where you had to go through three sets of doors just to get in. Having said that, it was a different world when I jumped into the 'commercial' world and got involved with companies whose tolerances were so much bigger than those required in the environment where I learned about calibration and calibration systems.

I'm not trying to say calibration frequency isn't important. I'm just trying to inject thought about the multitude of scenarios.
 

rdragons

Re: Calibration Intervals derived from Variables Data

Kansas has been in Texas all week witnessing testing.

Marc: To elaborate on BradM's post, it's the cost of quality. It's a bathtub curve.

With calibration intervals too short, manpower and out-of-service time create additional cost.

With calibration intervals too long, the risk of shipping nonconforming product increases. The lower cost level is customer satisfaction and rework; the upper cost level is litigation, both contract and liability. Additional cost, just a different flavor.

We want to be in the bottom of the tub. ...did I type that?
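
A toy version of that bathtub with invented cost numbers, just to show the shape of the trade-off:

```python
import numpy as np

# Invented annual costs as a function of calibration interval (months)
interval = np.linspace(1.0, 36.0, 200)
cal_cost = 1200.0 * (12.0 / interval)      # manpower + out-of-service time
risk_cost = 50.0 * np.exp(interval / 8.0)  # nonconforming product, rework, litigation
total = cal_cost + risk_cost               # the bathtub

best = interval[np.argmin(total)]
print(f"bottom of the tub at roughly {best:.1f} months (toy numbers)")
```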

Follow-up on the Wellstone crash: the final report cleared the VOR, citing pilot error. So I'm not so scared any more.

Alternator... would that be Horton Emergency Vehicles, John Molinari, Bobby Labonte, or just plain Nissan?

(broken link removed)

(broken link removed)

(broken link removed)

(broken link removed)

Sorry Marc, couldn't resist. But you made your point. We don't care if a few barbecue covers are an inch too long. And a pilot is supposed to be able to recognize a broken VOR.

Back to basics...

Does anyone out there use any form of variables calibration interval analysis?

Hershal, where are you? I still want to know about downloading.
 

BradM

Leader
Admin
Re: Calibration Intervals derived from Variables Data

I guess what I was getting at is there are situations where such a detailed analysis isn't necessary. In other situations such a level of analysis is not just important, it is critical.

Sorry, Marc. Did not see your response on this one. It must have been during my "where are my notifications going?" phase.

I totally agree with you :yes: about what I perceive to be the sentiments of your follow-up post, and I share the appreciation for Hershal (and all the professional Covers) who take the time daily to share their expertise here.

I think experience and time teach the professional about calibration frequency. Even a beginner, after having instruments calibrated over a few cycles, can begin to see whether the intervals are established properly (if you're looking).

I got excited (and am still excited) about this thread because it gives some life to actually looking at your calibration program and managing it. So many times, people get something calibrated, file it, and move on. They never look at their certificate, review the work being done, or determine whether the proper frequency is in place, whether they have the right tool, etc. These forums are jam-packed with confused individuals who legitimately ask two fundamental questions: 1) what is my tolerance, and 2) what should my calibration interval be?

OK, for #1, most of the time we say the Mfg. tolerance. Since I'm talking to my cohorts, I ask: "How many of you have confidence in the Mfg. specification?" If it's Fluke, I trust it implicitly. If it's XXXXXXXX, I have no confidence, and I establish my own. Then there are the many instruments in between. There is no oversight whatsoever as to how companies ascertain their stated accuracies or recommend intervals. I know, that's a broad brush. But like I said, when I look at Fluke's analysis, you know they know what they're doing. Others?? You have to dig (sometimes I have to call) just to find whether any stated tolerance for the equipment has even been established.

By following established uncertainty analysis procedures, you can determine this. This is why I appreciate Hershal's torch-carrying on ISO 17025 and referring people to legitimate, accredited labs that take pride in their work and do it right.

As for #2, in my experience, this one is a little more tricky. Say four of us bought out a calibration department of a corporation. What would be our intervals? Set aside objective analysis for a second. I bet it would be pretty short, right? We're not being deceptive or unethical. We are a business to make money (short interval means more money$$) and we're erring on the customer's safety side by not letting it go too long and being O.O.T. Sounds like a win-win, yes?

I just think it would be neat to have a tool to run some numbers through should I desire. If I could extend some and shorten others, giving me some $$$ saved to demonstrate to management, that has some promise. But to your point, it would probably not be useful to many others who manage a small group of instruments.

But.... that goes to the database thread: while Access is the greatest thing in the world to me, people who really work with databases prefer much more robust packages.
 

rdragons

Re: Calibration Intervals derived from Variables Data

As for #2, in my experience, this one is a little more tricky. Say four of us bought out a calibration department of a corporation. What would be our intervals? Set aside objective analysis for a second. I bet it would be pretty short, right? We're not being deceptive or unethical. We are a business to make money (short interval means more money$$) and we're erring on the customer's safety side by not letting it go too long and being O.O.T. Sounds like a win-win, yes?

Sorry BradM. A short interval meaning more $$ is not win-win. Step back and look at the big picture.

Company A voltmeter has a recommended calibration interval of 1 year. Company B voltmeter has a recommended calibration interval of 2 years. Both meters have the same tolerance and can be used for our application. Guess which one I’m going to buy. So while company A makes short interval dollars they are losing market share to company B.

This is where my calibration interval analysis is going. By identifying the current calibration interval design limitations, we can redesign to improve our product toward a longer calibration interval and put the competition out of business. Well, at least make them uncomfortable.
 

Marc

Fully vaccinated are you?
Leader
Re: Calibration Intervals derived from Variables Data

By identifying the current calibration interval design limitations, we can redesign to improve our product toward a longer calibration interval and put the competition out of business. Well, at least make them uncomfortable.

I agree, but then again... Manufacturer claims (even warranty aspects) may not tell the whole story. One would have to take both meters and put them in identical (or very similar) use scenarios and see how well they hold their calibration.

In most situations I would look for what I felt was the best 'built' device or instrument, set a 'reasonable' calibration interval, and check the results each time it is calibrated. If you get a new meter that is used daily in a very rough environment, I wouldn't trust the manufacturer's recommendation; I'd look to other multimeters I already had and start there. Obviously this wouldn't apply in a startup with no history, but those are the exception.

I think for most non-critical (lives do not depend upon it) applications, which is the case in many companies, measurement equipment calibration cycle time should be looked at in terms of calibration history. My 'rule of thumb': If the instrument keeps coming back in calibration without adjustment, lengthen the interval. If adjustment is necessary but the instrument is within its tolerance, the frequency is probably about right. And, of course, if it comes back having needed adjustment AND was out of tolerance, shorten the cycle. Note that my 'rule of thumb' is general. For example, if a review of the calibration history for the device shows that it was stable only until a certain point in time (coming back in calibration without adjustment, or needing minimal adjustment while within tolerance), one should be looking at the integrity of the instrument (for example, is it wearing out?).
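
That rule of thumb could be sketched as a simple decision function (the adjustment factors are illustrative guesses, not fixed numbers I'd insist on):

```python
def adjust_interval(months, in_tolerance, adjusted):
    """Rule of thumb: lengthen if it keeps passing untouched, hold if
    adjusted but within tolerance, shorten if out of tolerance."""
    if in_tolerance and not adjusted:
        return months * 1.5   # keeps coming back in cal without adjustment
    if in_tolerance:
        return months         # adjusted, but within tolerance: about right
    return months * 0.5       # needed adjustment AND was out of tolerance

print(adjust_interval(12, in_tolerance=True, adjusted=False))  # -> 18.0
```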

Now, let's take another scenario I'd like feedback on. A company has 20 digital multimeters. There are 5 in the calibration laboratory, each used approximately 5 times a day; 2 are kept by product engineers whose use cannot be tracked; 13 are used on the line, each at 15-minute intervals on each of 2 shifts; and 5 of the 13 are also used on a 3rd shift (same 15-minute interval scenario). Without relying on calibration history, how would one set a calibration cycle for each?
 