
Averages Are Deceiving… Besides, Who Wants to Be Average?

by Ron Moore

Averages are funny things. Someone once explained averages this way: Take one foot and put it in a bucket of scalding water; take the other foot and put it in a bucket of ice water. On average, you ought to be comfortable. With that caution in mind, let's consider three very different examples of the use, or perhaps misuse, of averages.

Example One. Mean time between failures (MTBF) is an often-used term for characterizing the average time between failures in a given set of equipment. As the bucket anecdote suggests, it's also not particularly meaningful on its own. Consider the data in Figure 1, in which 30 identical components were run to failure and their lives measured.

Figure 1: Lives of 30 identical components run to failure

From the data, the MTBF is calculated as 90 days. It is also evident that about half the components fail before reaching 90 days and the other half fail after. So, how useful is knowing that the MTBF is 90? Tracking whether the MTBF is increasing or decreasing has some value, since it tells you whether things are getting better or worse. But it's not particularly useful, for example, in setting preventive maintenance (PM) intervals. If you set up a replacement PM for these components and used the MTBF as the replacement interval, you would be sorely mistaken: about half the components would fail before being replaced, and the other half would be replaced prematurely, with useful life remaining. Neither is a desirable outcome.
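To make the point concrete, here is a minimal sketch in Python. Since the Figure 1 values are not reproduced here, it uses 30 hypothetical lives drawn from an exponential distribution with a 90-day mean as a stand-in:

```python
# Sketch: why the MTBF alone is a poor basis for a replacement PM.
# The 30 lives below are hypothetical stand-ins for the Figure 1 data,
# drawn from an exponential distribution with a 90-day mean life.
import random
import statistics

random.seed(1)  # fixed seed so the sketch is repeatable
lives_days = sorted(random.expovariate(1 / 90) for _ in range(30))

mtbf = statistics.mean(lives_days)
median_life = statistics.median(lives_days)
failed_early = sum(1 for t in lives_days if t < mtbf)

print(f"MTBF (mean life): {mtbf:.0f} days")
print(f"Median life:      {median_life:.0f} days")
print(f"{failed_early} of {len(lives_days)} components fail before the MTBF")
# A replacement PM set at the MTBF is too late for the early failures
# and throws away remaining life on everything else.
```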

Reorganizing the data in order of shortest to longest life produces the graph in Figure 2.

Is this reorganization of the data useful? It's the same data, but now it shows that a failure occurs every five to 10 days, more or less. Would that be useful? Probably, since plans can be made for the parts needed and for when a certain percent of the equipment would be down and not available for production. How will you know which one of these components will fail next? After reflecting on this, you would likely conclude that you should do condition monitoring to detect which of the components has defects that would cause it to fail next. What sort of condition monitoring? That would depend on the failure modes associated with the equipment. How often? Well, something less than five to 10 days, so there is enough time to manage the consequence of the impending failure and to plan and schedule the work to mitigate the failure.
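Continuing the same hypothetical data from the sketch above, the Figure 2 view amounts to sorting the lives and looking at the gap between each failure and the next:

```python
# Sketch of the Figure 2 reorganization: the same hypothetical lives,
# sorted, and the gap between successive failures.
import random
import statistics

random.seed(1)
lives_days = sorted(random.expovariate(1 / 90) for _ in range(30))

gaps = [later - earlier for earlier, later in zip(lives_days, lives_days[1:])]

print(f"Median gap between successive failures: {statistics.median(gaps):.1f} days")
print(f"Shortest / longest gap: {min(gaps):.1f} / {max(gaps):.1f} days")
# If all 30 components go into service together, these gaps approximate
# how often the next failure arrives -- useful for staging spares and
# setting the frequency of condition-monitoring rounds.
```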

After a time, you would also try to be proactive and eliminate the defects that are causing the failures in the first place. At that point the MTBF would increase, but you would likely still have a random failure pattern; some 80 to 90 percent of equipment failures follow a random pattern. Incidentally, according to Peter Todd, a predictive maintenance expert with SIRF Roundtables in Australia, Figures 1 and 2 represent an exponential failure distribution and imply a constant conditional probability of failure. That is, with a random failure pattern, the probability of survival, and of failure, over the next interval is the same for any given component, regardless of its age. According to reliability consultant Paul Barringer, this means there is a constant, instantaneous failure rate: the die-off rate is the same for every surviving, unfailed member of the population. In other words, an old part is as good as a new part when the conditional probability of failure is constant. The way to manage a random failure pattern with a constant conditional probability of failure is condition monitoring. MTBF, or average data, doesn't have much meaning in this situation, other than to tell you whether things are getting better or worse.
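For readers who want the math behind Todd's and Barringer's point, "an old part is as good as a new part" is the memoryless property of the exponential distribution. A short sketch, writing λ for the constant failure rate:

```latex
% Exponential life: survival function and (constant) hazard rate
R(t) = e^{-\lambda t}, \qquad
h(t) = \frac{f(t)}{R(t)} = \frac{\lambda e^{-\lambda t}}{e^{-\lambda t}} = \lambda

% Memorylessness: a part that has already survived to age s has the
% same chance of lasting another t as a brand-new part
P(T > s + t \mid T > s) = \frac{R(s+t)}{R(s)}
                        = \frac{e^{-\lambda(s+t)}}{e^{-\lambda s}}
                        = e^{-\lambda t} = P(T > t)
```

Because age tells you nothing about remaining life, age-based replacement buys nothing here; watching for the defect itself, through condition monitoring, is the tool that works.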

Example Two. In operational excellence workshops, participants are asked to self-assess their practices in relevant areas, such as operating practices, maintenance practices, and organizational culture, leadership and alignment. Each assessment has 10 questions, each scored from zero to 10, for a maximum score of 100. Participants use their individual experience and judgment to give a subjective score for each practice area.

Over many years of conducting these self-assessments, the average score for a group of 20 people in a workshop has consistently come in near 55, more or less. Is this useful? Not particularly. If your group scored in this range, you would tend to think you're average. The interesting part comes when a given group's score is lower, for example, 45. Are they slightly below average? Experience shows they are not; they are typically well below average. At the other end of the spectrum, suppose a group scores 65. Are they merely above average, or about 50 percent better than the group that scored 45? Experience shows they are indeed above average, but also far better than the group that scored a 45, and often more than twice as good when a more comprehensive review of the details of their practices is done. That is, the group scoring 65 is more than twice as good in its practices as the group scoring 45.

While about 75 percent of those surveyed score near 55, about 10 percent score near 65, another 10 percent score near 45, and a few percent score near 35 or 75. Incidentally, the difference between a 35 and a 25 is nil; these companies are just awful. Likewise, the difference between a 75 and an 85 is nil; these companies are really good, few though they are.

So, what's happening? First, it's a subjective assessment, based on the experience and values of individuals, that is then averaged. For a group average of 55 on a given assessment, the range of individual scores might be 35 to 75, or more, depending on each individual's experience and frame of reference. The net effect is that average scores compress toward the center of the scale, at 55.

Interestingly, individuals in operations that are really poor tend to say to themselves, "Well, we're doing a little bit of this practice, so I think I'll give it a four." They measure up from zero, giving themselves credit for doing something, even though they know they're not doing it well. Individuals in operations that are really quite good, on the other hand, see all the things they are not yet doing and discount down from a 10 to get their score. The two groups have different frames of reference: the better the operation, the more opportunity its people see to get even better; the worse the operation, the more its people want to demonstrate that they're doing something. Hence, the scores compress toward the center, or average, and mediocrity reigns. This is not a good thing. Averages are not useful in this situation if you expect to survive and prosper in your business. More importantly, measurements should be made against the ideal, or perfection, in order to identify all the opportunities available to improve the business, and then to make business decisions relative to the next opportunity.

Example Three. The executives of two companies conducted employee surveys, dryly called the "Are You Happy?" surveys. The questions typically related to employee satisfaction on a number of issues, such as pay, opportunities and benefits. One common question was to the effect of: "As an employee, would you rate yourself against your peers as below average, average, or above average?" A large majority, about 80 percent, rated themselves as above average compared to their peers.

While this is a statistical impossibility, it is telling. A large majority of people do not consider themselves average in their work performance and yet, referring to Example Two, at the same time they consider their company to be average. If they're individually above average, why isn't the company above average?

The answer is simple. The systems in place within the companies are mediocre. Mediocrity has become their standard for excellence. It's the people who are excellent. If each person believes he or she is above average, then the systems should challenge them and facilitate excellence in their performance, thus fostering company excellence. As Hajime Ohba, a Toyota guru in manufacturing, said, "We get brilliant results from average people managing brilliant processes. Our competitors get average results from brilliant people managing broken processes." It's not the people that are mediocre, it's the company's processes. Change your processes to get better results. And measure your company against perfection, not averages.


Ron Moore

Ron Moore is the Managing Partner for The RM Group, Inc., in Knoxville, TN. He is the author of “Making Common Sense Common Practice – Models for Operational Excellence,” “What Tool? When? – A Management Guide for Selecting the Right Improvement Tools,” “Where Do We Start Our Improvement Program?,” “A Common Sense Approach to Defect Elimination,” “Business Fables & Foibles,” and “Our Transplant Journey: A Caregiver’s Story,” as well as more than 70 journal articles.

Ron holds a BSME, MSME, MBA, PE, and CMRP. He can be reached at RonsRMGp@aol.com.
