
Business Case for Data Integrity

Let's be honest. To many people, no business subject is duller than "data." Nevertheless, data integrity is written about in business journals more often than many seemingly more interesting topics. Furthermore, survey after survey reveals a growing concern among business executives about their ability to take advantage of the reams of data being collected.

Your intuition may tell you that there are large benefits associated with bringing integrity to your business data. We must admit, however, that intuition is not enough to garner the proper level of senior management support and resources to improve the data. You need a convincing business case, and developing one can prove challenging for several reasons. First and foremost, the business case for data integrity is so vast, so far-reaching, and so pervasive in every aspect of business that knowing where to start, and how much of the story to tell, is a daunting proposition. We think the best approach is to frame the case in broad terms, citing specific facts and some quantitative examples that support the intuition that the business case for data integrity is huge. Armed with this information, you should then be able to personalize the case for data integrity in your firm or plant.

Information Overload
Consider this: The average installed data storage capacity at Fortune 1000 corporations has grown from 198 terabytes to 680 terabytes in less than two years. That is more than a threefold increase (roughly 240% growth), and capacity continues to double every ten months! That statistic puts into objective terms what we all instinctively know about our data: we have huge quantities of it, and we are accumulating more every day.

Searching for Data
What else do we know about our data? In an article in Information Week (January 2007), writer Marianne Kolbasuk McGee reported that the average middle manager spends about two hours a day looking for needed data. The article does not say how often the search ends successfully, but we can assume that at least some of that time is wasted. Why? There are several reasons.

First, the volume of data is too large, and most of it is not needed. To arrive at the needed data, one has to sift through reams of irrelevant or unnecessary material.

Second, the quality of the data, or data integrity, is generally poor. Much of the data is inaccurate, out of date, inconsistent, incomplete, poorly formatted, or subject to interpretation. Therefore, even when you do arrive at the needed data, can you trust it? If you have to hesitate before answering, you're undoubtedly spending time deciding whether the data you finally found (assuming you actually found it) is trustworthy enough to rely on for the task at hand.

There are other reasons, but these two alone are compelling. Let's try to quantify these phenomena. The U.S. Department of Labor's Bureau of Labor Statistics indicates that in May 2006 approximately 142 million workers were in the U.S. workforce. Assume conservatively that only 10% of those workers are middle managers, and that only 25% of the two hours per day spent searching for data is wasted (many studies indicate the actual percentage is higher). That is 14.2 million managers wasting half an hour per day; over roughly 230 working days a year, it adds up to about 1.633 billion hours (that's right ... billion!) wasted annually in the United States alone, equivalent to about 785,000 man-years annually!

To put these figures in financial terms, suppose $40 per hour is the average loaded cost rate for middle managers. Then $65,320,000,000 is wasted every year: that's $65.32 billion annually, just in the United States! Imagine what this number becomes when calculated worldwide!
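If you want to adapt these figures to your own organization, the arithmetic is simple enough to capture in a few lines. Here is a minimal sketch in Python; the worker count, waste fraction, working days, and cost rate below are the article's assumptions, so substitute your own.

```python
# Back-of-the-envelope estimate of hours and dollars wasted searching for data.
# All inputs are the article's assumptions; substitute your own organization's figures.

def wasted_search_cost(workers, wasted_hours_per_day, loaded_rate_per_hour,
                       working_days_per_year=230, hours_per_man_year=2080):
    """Return (hours wasted per year, man-years, dollars wasted per year)."""
    hours = workers * wasted_hours_per_day * working_days_per_year
    man_years = hours / hours_per_man_year
    dollars = hours * loaded_rate_per_hour
    return hours, man_years, dollars

# Middle managers: 10% of the 142-million U.S. workforce, 0.5 wasted hours/day, $40/hour.
hours, man_years, dollars = wasted_search_cost(142_000_000 * 0.10, 0.5, 40)
print(f"{hours:,.0f} hours/year = {man_years:,.0f} man-years = ${dollars / 1e9:.2f} billion")
# -> 1,633,000,000 hours/year = 785,096 man-years = $65.32 billion
```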

Can we put those 1.633 billion freed-up hours per year (about 785,000 man-years) to good use? Most certainly!

Retiring Baby Boomers
Assuming we can fix the data integrity problem nationwide and free up these hours, some of the retiring workers won't have to be replaced, and companies' cost structures will go down. According to the U.S. Department of Labor's Bureau of Labor Statistics, approximately 22.8 million people aged 55 and older are in the U.S. workforce today, approximately 16% of the entire workforce.

Assume conservatively that the number of workers in this category does not increase, and that the 22.8 million will retire evenly over the next ten years. In short, approximately 2.3 million workers will retire each year in the United States (the actual estimated number is higher). The freed-up hours related to data integrity, about 785,000 man-years, could account for roughly a third of that. Thus, one third of those retiring workers would not have to be replaced, assuming we solve the data integrity problem.

Again, our assumptions in this example are conservative. It is entirely possible that simply fixing the data integrity problems could go a long way toward solving the aging and retiring workforce challenge in the United States and elsewhere.

This analysis deals strictly with an efficiency gain. We have not yet talked about the effectiveness of our efforts or, to put it another way, the impact of the "Brain Drain" on the knowledge residing inside the corporation.

The Brain Drain
More than 80% of U.S. manufacturers face a shortage of qualified craft workers. This shortage stems from the retiring workforce phenomenon and from the fact that fewer new workers are entering the skilled trades, or even technical degree programs. As a result, the pipeline of replacement workers is neither large enough nor skilled enough to replace the retirees.

This challenge should put the onus on the management of our industrial companies to figure out how to leverage a potentially smaller workforce by eliminating wasted activities. It should also challenge them, perhaps more importantly, to institutionalize the knowledge currently in workers' heads by capturing it in the company's systems and data sources. Wouldn't meeting this challenge head-on facilitate and accelerate the accumulation of skills and knowledge by new, less-skilled workers?

In addition, the institutionalization of knowledge and information could allow the same work to be accomplished satisfactorily by less-skilled workers. In other words, it is possible that we won't have to replace the retiring workers in-kind. The combination of better systems, automation, information, procedures, guidelines, and training media with less-skilled workers could represent a game-changing shift in how we go about doing the work of our manufacturing and industrial companies! That shift could permanently and favorably impact the cost of doing business.

A Business Case Example
Many studies in the maintenance and reliability (or physical asset management) field, including several conducted by Management Resources Group, Inc., have consistently estimated that 30 to 45 minutes per maintenance worker per day are wasted searching for spare parts because of poor catalog data integrity. Spare parts represent just one narrow area of the many aspects of physical asset management, but they provide a helpful example. (Incidentally, according to research presented in the Maintenance Planning and Scheduling Handbook by Richard (Doc) Palmer, the average industrial maintenance worker's unproductive time is 5 hours and 45 minutes per day, meaning productive time averages only 28% of an 8-hour shift! Not all of that unproductive time is related to data integrity, but some of it certainly is.)

If you are not familiar with this aspect of physical asset management, be aware that inventory catalog descriptions are generally not formatted consistently or in a way that facilitates rapid searching for the needed spare part. Searchers often become frustrated because they cannot easily find the part in question; sometimes the search doesn't result in a successful find at all, let alone a rapid one. Typical problems include: the system indicates that the needed part is in stock, but the bin turns out to be empty; the indicated bin location is wrong; or the searcher spells a search term differently from the myriad ways it appears in the catalog material master records (e.g., Bearing, BRG, Brg).
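To see why inconsistent descriptions defeat search, consider a minimal sketch. The catalog records and the abbreviation map below are invented for this example; real material masters are far messier.

```python
# Hypothetical illustration: inconsistent catalog descriptions defeat naive search.
# The records and the abbreviation map are invented for this example.
records = [
    "BRG, BALL, 40MM BORE",
    "Bearing ball 40 mm",
    "BALL BRG. 40MM",
]

ABBREVIATIONS = {"brg": "bearing"}

def normalize(text):
    """Lowercase, strip punctuation, and expand known abbreviations."""
    words = text.replace(",", " ").replace(".", " ").lower().split()
    return {ABBREVIATIONS.get(w, w) for w in words}

# A naive substring search for "bearing" finds only one of the three records...
print(sum("bearing" in r.lower() for r in records))      # 1
# ...while normalized matching finds all three.
print(sum("bearing" in normalize(r) for r in records))   # 3
```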

Referring to the U.S. Department of Labor's Bureau of Labor Statistics' May 2006 Occupational Employment and Wage Estimates, it is estimated that the United States has approximately 5.45 million industrial maintenance workers today. The same data indicates that the mean hourly wage rate for these workers is approximately $20. A loaded cost including fringe benefits would be approximately $26 per hour.

If each of these workers wastes a conservative 30 minutes per day searching for spare parts, then (again at roughly 230 working days per year) we are wasting 626,750,000 hours per year in the United States. That's over 300,000 man-years, or more than 5% of the industrial maintenance workforce. At the mean loaded cost per hour, that equates to roughly $16.3 billion annually!
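Using the wasted_search_cost sketch from earlier, the maintenance scenario is a one-line substitution (the worker count, waste, and rate are again the article's assumptions):

```python
# Industrial maintenance workers: 5.45 million workers, 0.5 wasted hours/day, $26/hour.
hours, man_years, dollars = wasted_search_cost(5_450_000, 0.5, 26)
print(f"{hours:,.0f} hours/year = {man_years:,.0f} man-years = ${dollars / 1e9:.2f} billion")
# -> 626,750,000 hours/year = 301,322 man-years = $16.30 billion
```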

Are we suggesting that the primary manifestation of these potential gains is a reduction in head count? Not necessarily, although the natural attrition generated by the Baby Boomers' retirements will present opportunities to reduce head count without having to lay off any workers.

In addition, you gain the real opportunity to redeploy the freed-up resources to more value-added activities that will drive higher equipment reliability and lower maintenance costs. The consensus of the expert community in asset management is that most industrial plants rely too heavily on time-based preventive maintenance (PM) procedures as a primary maintenance strategy. Based on the results of thousands of PM optimization initiatives, approximately 60% of existing preventive maintenance activities are inappropriate strategies for the assets in question. Thus, a very large portion of the maintenance workforce is engaged in low-value or zero-value work. Analysis of equipment failure behavior, using proven tools like Reliability Centered Maintenance (RCM) and Failure Modes and Effects Analysis (FMEA), shows that the vast majority of assets in a typical industrial complex, about 89%, do not follow a predictable time-based failure pattern. Only about 11% of assets do, as Figure 1 shows.

[Figure 1: Equipment failure patterns, plotting probability of failure against time]

The failure curves depicted in Figure 1 are accepted and proven knowledge dating back to studies that began in the 1960s. Keep in mind that these curves plot the probability of failure against time. What they tell us is that it is impossible to predict failure for 89% of the assets in a plant on the basis of time. That does not mean we cannot predict failure for these classes of assets; it simply means that we cannot do so on the basis of time.

If the failure behavior of a specific class of asset shows that the asset fails randomly with respect to time, how can we accurately define an interval for preventive, or time-based, maintenance? We can't! Yet that is exactly what we have tried to do for the past fifty years. Typically, we have guessed at what a correct and safe preventive maintenance interval should be, based on the asset's historical failure behavior.

Consider an asset that over a 5-year period ran for 1 year before its first failure, then after repair ran for 6 months before its next failure, then 3 months, then 18 months, then 5 months, then 16 months. What time interval would we set for preventive maintenance on this asset if we wanted to prevent failure? If the asset is critical to operations, we'd have to take a risk-averse approach and do something to it every 3 months, matching its shortest observed run. Based on the actual 5-year failure history, that means roughly 20 PM interventions over a period that produced only six failures; we would have done preventive maintenance far too often.
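The arithmetic of this example is worth making explicit. A quick sketch, using the run lengths from the text:

```python
# The asset's observed runs between failures over five years, in months.
runs_months = [12, 6, 3, 18, 5, 16]
pm_interval = min(runs_months)  # risk-averse choice: match the shortest run -> 3 months

total_months = sum(runs_months)               # 60 months = 5 years
pms_performed = total_months // pm_interval   # time-based interventions scheduled
failures = len(runs_months)                   # actual failures observed

print(f"{pms_performed} PMs scheduled vs {failures} failures over {total_months} months")
# -> 20 PMs scheduled vs 6 failures over 60 months
```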

Not only did the machine not need a PM during many of those runs, but as we can see from Figure 1, we may have introduced defects that actually induced failures that otherwise would not have occurred. This phenomenon is referred to in the reliability profession as Infant Mortality. Many people have probably heard the phrase "if it ain't broke, don't fix it." Well, this adage has more merit than you would think.

[Figure 2: Evidence dating back to the 1960s supporting the elimination of many existing PMs]

As you can see in Figure 2, there is a significant basis of proof, dating back to the 1960s, to support eliminating many existing PMs. Doing so would free up significant manpower, which could be used to hedge against the loss of knowledge from retiring Baby Boomers, or be redeployed to other, more value-added tasks required to enhance asset performance.

Most equipment does not follow a time-based failure pattern. Should we therefore do no maintenance at all on the 89% of assets and simply wait for them to fail? Absolutely not. While we cannot predict failure for these assets on the basis of time, we most certainly can predict failure on the basis of condition, using a variety of sensitive technologies and tools designed to detect early warnings of impending failure. These technologies and tools are commonly referred to as Predictive Maintenance and Condition Monitoring. Examples include vibration analysis, infrared thermography, oil analysis, and ultrasonic inspection; there are others that we don't need to go into here.
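As a minimal illustration of condition-based detection, consider the sketch below. The readings and the simple two-sigma rule are invented for this example; real programs set alarm limits from established standards and machine history, not a toy threshold.

```python
# Hypothetical illustration: flag an asset when its vibration trend drifts above baseline.
# Readings and the 2-sigma limit are invented; real alarm limits come from standards
# and machine history.
from statistics import mean, stdev

baseline = [2.1, 2.0, 2.2, 2.1, 2.0, 2.2]   # mm/s RMS during healthy operation
limit = mean(baseline) + 2 * stdev(baseline)

latest_readings = [2.2, 2.6, 3.1]             # trending upward
for r in latest_readings:
    status = "ALERT: schedule inspection" if r > limit else "ok"
    print(f"{r:.1f} mm/s -> {status}")
```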

The trick to preventive maintenance (PM) optimization, i.e. reduction, and to proper deployment of predictive maintenance tools is first to know how to categorize the assets, using analysis methods designed to understand the likely and costly failure modes. Then, with that knowledge, review the existing preventive maintenance procedures and eliminate those that either do not address a failure mode or are applied to asset types that don't follow any time-based pattern. Once these steps are taken, the appropriate maintenance strategies must be deployed. This optimization of the maintenance program invariably yields a significant reduction in work, with an attendant reduction in labor and spare parts usage. In turn, these results drive significant cost savings and enhanced asset performance.
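A minimal sketch of the review step just described, assuming a simple task list (the task records and their fields are invented for this example):

```python
# Hypothetical illustration of the PM review: keep a time-based task only if it
# addresses a known failure mode AND the asset's failure pattern is actually time-based.
# The task records and fields are invented for this example.
pm_tasks = [
    {"task": "Replace pump bearing every 3 months", "addresses_failure_mode": True,  "time_based_pattern": False},
    {"task": "Lubricate gearbox quarterly",         "addresses_failure_mode": True,  "time_based_pattern": True},
    {"task": "Annual motor teardown",               "addresses_failure_mode": False, "time_based_pattern": False},
]

keep = [t["task"] for t in pm_tasks
        if t["addresses_failure_mode"] and t["time_based_pattern"]]
review = [t["task"] for t in pm_tasks if t["task"] not in keep]

print("Keep as time-based PM:", keep)
print("Eliminate or convert to condition-based:", review)
```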

You may be asking yourself at this stage, "What does all this have to do with data integrity?" Well, how can you possibly accomplish this optimization if your foundational data sources lack integrity and quality, i.e. are incomplete, inconsistent, and inaccurate? If you don't have an accurate and complete equipment list, for example, you lack a fundamental prerequisite for unlocking these technical benefits. Without asset data integrity you cannot accomplish the optimization described here, particularly if you want to do so both efficiently and effectively.

Consistency or Lack Thereof
Most corporations have allowed the individual plants in their asset fleets significant autonomy in the choice and use of systems, the formatting of foundational master data in those systems, maintenance strategies, and so on. Today it is typical for multiple plants in one corporation to have similar, if not identical, assets that are described differently from plant to plant. The maintenance strategies deployed for these assets also vary dramatically from plant to plant, and wide variation in maintenance strategy across a fleet of like assets produces a corresponding variation in operating performance: some assets operate reliably, whereas others of similar or identical class do not.

Based on our knowledge of best practices, why would we allow this in any company? Wouldn't we want to use sound analytical methods to classify our assets, analyze their failure modes, and apply somewhat consistent maintenance strategies across the enterprise (taking into consideration that some differences are warranted given operating context, etc.)? It seems logical and makes common sense to want to do so. But how can we undertake these steps efficiently if our assets are not described with a consistent taxonomy across the enterprise? Once again, we can't.

For those who may not be familiar with the term "taxonomy," it refers to the system of classification that guides the consistent formatting and nomenclature used to describe whatever is being classified. A consistent taxonomy allows you to identify like assets across the fleet and then measure and eliminate the variation between them. An inconsistent taxonomy seriously impairs your ability to optimize your asset maintenance strategies and achieve consistent, reliable operation across the fleet. At the most basic level, this is a data integrity issue that must be solved in order to tap into the potential cost savings and improved asset performance waiting to be unlocked. Without data integrity, a significant entitlement of business benefits is locked away and unattainable.
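To make the taxonomy point concrete, here is a minimal sketch, with invented plant records, of how a consistent classification key lets you line up like assets across a fleet and see the variation:

```python
# Hypothetical illustration: a consistent taxonomy key groups like assets across
# plants and exposes performance variation. The records are invented for this example.
from collections import defaultdict

assets = [
    {"plant": "A", "description": "PUMP, CENTRIFUGAL, 40HP",  "taxonomy": "PUMP-CENTRIFUGAL-40HP", "failures_per_year": 1},
    {"plant": "B", "description": "40 hp centrif. pump",      "taxonomy": "PUMP-CENTRIFUGAL-40HP", "failures_per_year": 6},
    {"plant": "C", "description": "Pump (centrifugal) 40 HP", "taxonomy": "PUMP-CENTRIFUGAL-40HP", "failures_per_year": 2},
]

by_class = defaultdict(list)
for a in assets:
    by_class[a["taxonomy"]].append((a["plant"], a["failures_per_year"]))

# Identical asset class, very different reliability: plant B is the outlier to investigate.
for taxonomy, plants in by_class.items():
    print(taxonomy, plants)   # PUMP-CENTRIFUGAL-40HP [('A', 1), ('B', 6), ('C', 2)]
```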


This article was reprinted with permission from Industrial Press, Inc. from the book, Asset Data Integrity is Serious Business, by Robert DiStefano and Stephen Thomas.


Robert S. DiStefano, CMRP is Chairman and CEO of Management Resources Group, Inc. He is an accomplished executive manager with more than 30 years of professional engineering, maintenance, reliability, management, and consulting experience.
www.mrgsolutions.com

Steve Thomas has 40 years of experience working in the petrochemical industry. He has published six books through Industrial Press, Inc., and Reliabilityweb.com, the most recent being Asset Data Integrity is Serious Business and Measuring Maintenance Workforce Productivity Made Simple.
