Unfortunately, infant mortality of components and random losses of machine parts are the leading failure modes affecting equipment reliability. Often these losses are caused by human intervention during time-based "preventive maintenance" tasks and by the simple act of running production cycles.
Many times these two failure modes interrupt established schedules, and that interruption is the key source of reactive maintenance management. When this happens, the facilities organization is thrown into a "failure, fix, failure, fix" cycle. Organizations defined by this mode of operation are likely treating symptoms with band-aids, and they are seldom able to put permanent solutions in place for recurring problems. There has to be a better mode of operation!
To combat unscheduled interruptions, "Best Practice" companies use proactive maintenance techniques to monitor the condition of their machinery and equipment. This knowledge enables us to leverage our resources for required maintenance through better scheduling. Better-controlled schedules lead to improvements in uptime and quality while minimizing safety and environmental incidents. This increased plant reliability allows organizations to reduce work-in-process inventory while also reducing indirect maintenance costs.
We are fortunate that our plant machinery, equipment, and systems are constantly indicating their operating condition. Through the use of predictive tools, maintenance organizations can learn how to baseline the key operating characteristics of their equipment. Predictive technologies such as vibration analysis, ultrasonic leak detection, infrared thermal imaging, electric motor testing, oil analysis, ball bar analysis, and precision alignment are all leading-edge indicators of overall equipment reliability.
Top business leaders are keenly aware of the costs that accompany indirect headcount and budget lines. They also know that adding cost and passing it on to the customer will not be tolerated in a globally competitive marketplace. At the same time, each facility's performance expectations, both internal and external, are on the rise.
We no longer have the latitude to add cost by throwing manpower and other expensive resources at problems. We have to find ways to become effective in what we do and to drive efficiency in how we execute our responsibilities. As maintenance professionals, our task is to define the key components of a piece of equipment or system, identify the best technology for monitoring performance under load, collect data using that technology, benchmark the data over time looking for change from a predetermined baseline, and finally investigate the cause of those changes. For the past seven years, Hamilton Sundstrand has worked hard at applying this methodology in a manner that could dramatically affect the way we conduct business.
Hamilton Sundstrand uses ACE (Achieving Competitive Excellence) methodology as its operating system for the entire division. ACE is UTC's unique quality initiative whose infrastructure supports and sustains quality throughout the corporation.
As Hamilton's operating system, ACE tools such as PdM relentlessly drive its people, processes, and procedures to eliminate the gap between actual results and the outcomes desired by both Hamilton's internal and external customers. The system is powered by the disciplined application of ACE tools for continued process improvement, problem solving, and decision-making. It defines our culture of continuous improvement: grounded in facts, backed by data, and focused on results that lead to "world-class performance."
The process/methodology that has become standard work for Hamilton Sundstrand, and that has been instrumental in breaking this reactive cycle of repair, mirrors the Six Sigma DMAIC model (Define, Measure, Analyze, Improve, Control). By identifying and implementing five key process steps, Hamilton Sundstrand was able to systematically begin the conversion from a traditional reactive maintenance management organization to a proactive group. We began to use the ACE operating system and its tools to manage our facilities organization as a business, like any other operations group.
These five fairly simple steps have become our cookbook approach to applying all the tools of TPM to the assets that we are charged with protecting. This "standard" approach includes:
1. "Definition and Prioritization"
Key components are rated into three categories (A: most critical; B: critical but covered by redundancy; C: least critical). During this process, we looked at a number of key performance indicators: asset history, internal customer requirements, and standard maintenance procedures. We compared them to OEM recommendations, OEM manuals, and the constraints within the process itself. We came to realize that the performance of systems common to multiple processes and operations was often more critical than that of individual or isolated assets. In our transformation, it became critical to deal with systems and system performance instead of focusing on the life expectancy of individual parts.
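The A/B/C rating described above can be sketched in code. This is a minimal illustration only; the asset names, the downtime-cost score, and the threshold are invented for the example and are not Hamilton Sundstrand's actual criteria.

```python
# Hypothetical sketch of an A/B/C criticality rating.
# Scores, thresholds, and asset names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    downtime_cost: int      # impact of failure on production (1-10, assumed scale)
    has_redundancy: bool    # a backup or redundant system exists

def rate_criticality(asset: Asset) -> str:
    """Return 'A' (most critical), 'B' (critical but redundant), or 'C' (least critical)."""
    if asset.downtime_cost >= 7:
        return "B" if asset.has_redundancy else "A"
    return "C"

plant = [
    Asset("main air compressor", downtime_cost=9, has_redundancy=False),
    Asset("chilled-water pump", downtime_cost=8, has_redundancy=True),
    Asset("office exhaust fan", downtime_cost=2, has_redundancy=False),
]
# Sort so the most critical assets surface first on the worklist.
for a in sorted(plant, key=rate_criticality):
    print(rate_criticality(a), a.name)
```

The useful property of encoding the rating is that the prioritized worklist becomes repeatable and reviewable rather than a matter of individual judgment.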
2. "Measuring Performance"
We accomplished the second phase by selecting the best PdM technologies for data collection. We determined specific routes with established data-collection frequencies. Consistent and accurate collection is the foundation of a successful vibration program, and minimizing variance in collection is critical for accurate analysis. Vibration data is collected from each complete machine, covering both driving and driven components. A complete set of data consists of three measurement directions at each bearing location (two radial and one axial). Based on bearing-specific measurements and machine design, multiple measurement types are incorporated, including but not limited to acceleration and velocity.
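The "complete set" described above can be enumerated programmatically. The sketch below assumes a simple point-naming scheme and a generic motor-pump set; both are illustrative, not the article's actual route database.

```python
# Illustrative sketch of a vibration route: three directions per bearing
# (two radial, one axial), each collected in acceleration and velocity.
# Point naming and the example machine are assumptions.
DIRECTIONS = ["radial-horizontal", "radial-vertical", "axial"]
UNITS = ["acceleration", "velocity"]

def build_route(machine: str, bearings: list) -> list:
    """Enumerate every measurement point for one machine on the route."""
    return [
        f"{machine}/{brg}/{direction}/{unit}"
        for brg in bearings
        for direction in DIRECTIONS
        for unit in UNITS
    ]

# A motor-pump set: drive-end (DE) and non-drive-end (NDE) bearings
# on both the driving (motor) and driven (pump) components.
points = build_route("cooling-tower-pump-1",
                     ["motor-DE", "motor-NDE", "pump-DE", "pump-NDE"])
print(len(points))   # 4 bearings x 3 directions x 2 units = 24 points
```

Generating the point list rather than typing it by hand is one way to keep collection consistent from route to route, which is exactly the variance the text says must be minimized.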
Data is downloaded at the completion of a route and briefly reviewed to verify data integrity and to flag any imminent failures. Accurate vibration data analysis provides a measure of asset health, which in turn leads to increased reliability through early detection of machine faults. Once a known baseline is set, we monitor the rate of change through ongoing data collection.
3. "Analyzing Change"
During the data collection process, we continually compare data over time and watch collection points move up the prioritization list. Trending the vibration amplitudes establishes a rate of progression, which facilitates the convenient scheduling of repairs prior to failure. The final assessment is based on multiple measurements per direction at each collected location. The severity of the final recommendation is based not only on the vibration signature but also on the rate of change observed with respect to historical trending. This disciplined collection and benchmarking of change soon became our standard mode of operation, as we started to see problems develop long before they became catastrophic losses. All anomalies are prioritized on a scale from 1 (immediate action required) to 4 (no action required; continue to monitor). When a collection point reaches level 2 or 3 and a change in performance has been noted, we investigate that particular asset and analyze the causes of the documented change.
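The baseline-and-trend logic above can be sketched as a small function. The thresholds (expressed as multiples of baseline) and the sample readings are invented for illustration; as the text notes, a real severity call also weighs the vibration signature itself, not just amplitude.

```python
# Hypothetical sketch of mapping a trended reading onto the 1-4 priority
# scale (1 = act immediately, 4 = no action, keep monitoring).
# Thresholds are assumptions for the example, not real alarm limits.
def priority(baseline: float, readings: list) -> int:
    """Rate the latest reading relative to the known baseline."""
    ratio = readings[-1] / baseline
    if ratio >= 4.0:
        return 1    # immediate action
    if ratio >= 2.5:
        return 2    # significant change: investigate the asset
    if ratio >= 1.5:
        return 3    # change noted: investigate and watch closely
    return 4        # within normal variation: continue to monitor

# A collection point trending up from a 0.05 in/s velocity baseline.
history = [0.05, 0.06, 0.09, 0.14]
print(priority(0.05, history))   # 2: change noted, investigate the asset
```

The point of the sketch is that the decision to investigate is driven by measured change against a baseline, letting the data, not emotion, prioritize the work.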
4. "Improvement"
Parts eventually do fail, and the fourth step in our five-step process addresses improvements. We investigate all anomalies with detailed failure analysis and robust, relentless root-cause analysis. In this step of our predictive maintenance process, we concentrate on good troubleshooting techniques backed by measurable data, which may come from increased PdM data collection or from other PdM technologies applied to incorporate synergies. When any improvements are made, new baselines must be established to successfully monitor the asset in the future.
5. "Controlling the Process"
The final step in this cookbook approach deals with controlling the process. A great deal of work is done up front in the first three steps of this cycle. The definition of the system, identification of critical collection points, subsequent measuring, and finally analysis are really at the heart of the process. With defined collection points, established frequencies, set schedules for the collection of data, and an easy-to-read prioritized report, we let the data (and not emotions) lead us to opportunities to prevent a catastrophic loss. Regular monitoring assures us that we will see change before it can detrimentally affect the performance of an asset. With a good prediction, we have the opportunity to plan and schedule our corrections, driving the efficient use of resources.
Part of our UTC culture is to share best practices, and this cookbook approach has received much attention throughout the United Technologies Corporation family. The question other facilities groups ask most frequently is, "How do you (Hamilton Sundstrand) justify the resources and cost of this much data collection?" While it is true that not every collection point reveals an anomaly/opportunity, an analysis of our data over the last seven years has shown that 10% of all data points collected indicate some change over time. It is the recognition of this change, and the subsequent investigation, that leads to discoveries that under normal circumstances would never have been given any consideration.
Many times, the repairs are cost-avoidance issues that managers are hesitant to claim as justified cost savings. From our experience we know, without a doubt, that once a collection point starts to show change, if left unattended, that component will fail.
So as the old adage goes...you can pay now when the problem is manageable or you can pay more later.