Here are some of the most salient developments that began to take hold at the end of the 20th century:
- Online/inline sensors for ferrous and non-ferrous wear debris;
- Improved, compact onsite test kits and sophisticated handheld and portable instrumentation;
- Large particle and filter debris analysis;
- Intelligent agents: sophisticated collaborative software for assessing data severity and rendering in-depth, nuanced advisories for very specific applications and components.
Wear Debris Sensors
Having predicted this development decades earlier, I am genuinely surprised at how long it took for online sensors (beyond oil temperature and pressure, which have existed for a century) to become a viable solution and improvement in monitoring oil-wetted machinery. I suppose I should not be. It was as much a question of robustness as it was detection and measurement. Previous offerings were simply not rugged enough to stand up to immersion in hot, sometimes highly contaminated oil, nor did these devices demonstrate sufficient precision and repeatability under such conditions.
The technology, employing magnetometry, is now mature. Today, all the requirements - detection and measurement, sufficient sensitivity and repeatability, and stability and ruggedness - have been met.
The metallic particle count sensor depicted in Figure 1 not only detects ferrous metal with size classification, but can also derive counts via signal analysis for non-ferrous particles at sizes as low as 135μ.
The oil analysis industry has long shown an interest in small particulates, especially those that could wreak havoc in hydraulic systems, where clearances are so critical to safe and effective performance. Thus, "particle counting" instrumentation has long been routinely employed to monitor particles from 4μ to 70μ (the current range indicated by ASTM International standard D7647).
The advent of online sensor detection of wear debris, however, begins at ~40μ. It is well understood that larger particles are indicative of fatigue or severe wear. Detecting such particles at the earliest (real-time) opportunity is clearly a major advantage toward minimizing damage as it develops, or possibly avoiding failure altogether.
Because particles of 40μ and larger are readily filtered out, systems with filters remove a significant amount of particulate evidence at such sizes. Improved filtration technology further impedes the gathering of large-particle evidence, all in service of the worthy goal of lubricant cleanliness. This has led to greater interest in and emphasis on inspecting filter debris.
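The overlapping size ranges above can be summarized in a short sketch. This is illustrative only, assuming bin boundaries drawn from the ranges cited here (4μ to 70μ for particle counters per ASTM D7647, ~40μ onward for online wear-debris sensors and filter capture); the function name and exact cutoffs are hypothetical:

```python
# Sketch: which monitoring technique "sees" a particle of a given size (in microns).
# Boundaries are taken from the ranges cited in the article and are illustrative,
# not a standard.

def detectable_by(size_um: float) -> list[str]:
    """Return the techniques able to detect a particle of the given size."""
    techniques = []
    if 4 <= size_um <= 70:      # optical particle counting range (ASTM D7647)
        techniques.append("particle counter")
    if size_um >= 40:           # online wear-debris sensors begin at ~40 microns
        techniques.append("wear-debris sensor")
    if size_um >= 40:           # large particles are readily captured by filters
        techniques.append("filter debris analysis")
    return techniques

print(detectable_by(10))   # ['particle counter']
print(detectable_by(50))   # ['particle counter', 'wear-debris sensor', 'filter debris analysis']
print(detectable_by(135))  # ['wear-debris sensor', 'filter debris analysis']
```

The last case mirrors the point made above: a 135μ particle is invisible to a conventional particle counter but is exactly what wear-debris sensors and filter inspection are positioned to catch.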
For decades, filters have been cut open and their particles inspected via microscope and other means. Often, important information was gleaned to assist in vetting routine oil analysis results.
Today, the process is being approached at a much more sophisticated level, with semi-automated analysis and a combination of techniques, including X-ray fluorescence. Filter debris analysis (FDA) is rightly emerging as a distinct inspection discipline in CM routines.
The "oil analysis business" is crowded with competent laboratories providing adequate services to their customers. Most of these laboratories, whether commercial or private, provide commentary, but much of the time that commentary is limited in scope, or simply not informative enough for recipients to understand whether action should be taken and, if so, exactly what that action should be.
This situation exists for a number of reasons:
- Commentary has always been subordinate to the creation and gathering of data. No standards or minimum expectations exist; the comments are often an afterthought.
- In many cases, the evaluators at the testing site have limited knowledge of the equipment under surveillance, resulting in uninformed or minimal commentary.
- Evaluating an oil sample's test data requires solid knowledge of both the component and its lubricant. Many evaluators are not equally comfortable with these very different domains, yet there is considerable interplay between them, with implications that can be overlooked if the evaluator is not aware of it.
- Subsequent samples from a given component may be commented on by different evaluators, each with a different feel for and understanding of the component, its application and its lube. The result can be a disjointed, discontinuous evaluation from sample to sample.
- If the testing laboratory is remote from the sample source, the evaluator has no opportunity to "see" the component. There may be an obvious indication of trauma that is key to the comment being rendered, but if the sampler doesn't see it or report it when submitting the sample, this information will never become the part of the evaluation it could and should be.
- Many recipients of oil analysis data and reports are drawn only to obvious problems, such as very high wear metals or the presence of water or abrasives. Additional nuances are neither considered nor requested, because the recipient is simply not aware of the possibility. Why should he be? He's not an evaluator. If the comment doesn't reflect a need to consider such nuances, they may never come to light.
Today's oil-wetted systems are more complex than ever, and oil chemistry and performance characteristics are at their highest level, owing to significant scientific advancement in lubricant chemistry. No single expert can recall everything needed, know where to find every piece of specialized information, or simply find the time to make such an effort.
Automated expert system evaluation and pattern recognition of oil analysis data (or other CM data) can overcome limitations, weaknesses and inconsistencies in the oil analysis evaluation process, relieving the pressure that is placed on human effort and maximizing the program's value while minimizing errors. It can be programmed and taught to respond to complex data patterns, no matter how subtle, in order to render commentary rich in content and depth with speed, accuracy, consistency and nuance. It is the next level in oil analysis competence - it is the "Intelligent Agent." Such software allows collaborative knowledge infusion so that multiple aspects of evaluation can be addressed by highly competent domain experts.
Let's take a look at what intelligent agents can do. Figure 2 shows a typical 2-phase table for iron (Fe) and silicon (Si) that allows 16 different relationships based on data severity at four levels of interest: notable, abnormal, high and severe.
This particular rule set is generic, in that it could apply to virtually any component type. If, however, we apply it to a specific type of component, say a diesel engine, we will add terminology like rings or cylinders to describe likely sources of Fe.
We'll also consider the possibility of a compromised air cleaner element or housing and recurring issues with reciprocating engines. Additionally, we may want to inquire about oil handling and storage practices if we see multiple examples of components with issues involving these two elements since it is unlikely that several air intake systems are faulty at precisely the same time.
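A two-element rule set of this kind can be pictured in a few lines of code. This is a minimal sketch, not the actual Figure 2 rule set: the ppm thresholds, function names and advisory wording below are hypothetical placeholders, with readings below the first threshold treated as normal:

```python
# Sketch of a two-element severity matrix like the Fe/Si table described in the
# article: four severity levels per element yield 16 flagged combinations, each
# of which can be mapped to advisory text. All thresholds and advisory wording
# here are hypothetical.

LEVELS = ["normal", "notable", "abnormal", "high", "severe"]

# Hypothetical ppm thresholds per element: (notable, abnormal, high, severe)
THRESHOLDS = {"Fe": (25, 50, 100, 200), "Si": (10, 20, 40, 80)}

def severity(element: str, ppm: float) -> str:
    """Classify a reading against the element's thresholds."""
    level = "normal"
    for name, limit in zip(LEVELS[1:], THRESHOLDS[element]):
        if ppm >= limit:
            level = name
    return level

def advise(fe_ppm: float, si_ppm: float) -> str:
    """Render diesel-engine-flavored advisory text for an Fe/Si pair."""
    fe, si = severity("Fe", fe_ppm), severity("Si", si_ppm)
    if fe == si == "normal":
        return "No action; resample at the normal interval."
    if fe != "normal" and si != "normal":
        # Fe plus Si together suggests dust ingestion driving abrasive wear
        return (f"Fe {fe}, Si {si}: abrasive wear pattern - inspect air cleaner "
                "element/housing; possible ring/cylinder wear.")
    if fe != "normal":
        return f"Fe {fe}: ferrous wear trend - monitor rings/cylinders; resample early."
    return f"Si {si}: silicon ingress - check air intake and oil handling/storage."

print(advise(120, 45))
```

Note how the component-specific vocabulary (rings, cylinders, air cleaner) lives in the advisory layer, while the generic severity logic stays reusable, which is exactly the distinction drawn above between a generic rule set and its diesel-engine specialization.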
Perhaps the greatest key performance indicator (KPI) of a condition monitoring program is the return on investment (ROI). The only way to measure this vital number is to garner feedback, i.e., record findings and maintenance action based on report information, commentary and advisories (see Figure 3).
Figure 3: Feedback logged directly online to feed the CMMS and vet intelligent agent performance
Here, too, some intelligent agents provide a convenient feedback-gathering mechanism that links the actual sample to the machine's found condition and subsequent repair, as applicable.
Once the maintenance has been logged accurately, the ROI can be calculated based on known costs, including machine parts and production losses in conjunction with a computerized maintenance management system (CMMS).
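That calculation can be sketched simply. All figures below are invented for illustration; in practice the inputs would come from the CMMS cost records the article describes:

```python
# Minimal ROI sketch for a CM program, using the cost categories the article
# names (machine parts, production losses) plus total program cost.
# All numbers are hypothetical.

def cm_roi(avoided_parts_cost: float,
           avoided_production_loss: float,
           program_cost: float) -> float:
    """Return ROI as (total savings - program cost) / program cost."""
    savings = avoided_parts_cost + avoided_production_loss
    return (savings - program_cost) / program_cost

# Example: $40k in parts and $160k in production losses avoided,
# against a $25k annual program cost.
roi = cm_roi(40_000, 160_000, 25_000)
print(f"ROI = {roi:.1f}x")  # (200,000 - 25,000) / 25,000 = 7.0
```

The arithmetic is trivial; the hard part, as the article stresses, is disciplined feedback logging, since without recorded findings and maintenance actions the "avoided" figures cannot be credibly established.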
When all the pieces of modern CM are brought together - spearheaded by now-available and effective real-time condition monitoring for both oil and vibration, and anchored by a purpose-built intelligent agent with a report delivery system tailored to its users - one can envision a holistic, synergistic amalgamation of essential tools: a CMMS that extracts the maximum from the efforts and resources expended (see Figure 4). Ultimately, this is the goal of a CM program: maximizing the bottom line.
Figure 4: Holistic closed-loop CM schematic example
Jack Poley is technical director of Kittiwake Americas, and is managing general partner of Condition Monitoring International, LLC (CMI). Jack has a B.S., Chemistry and B.S., Management from University of California [Berkeley] and New York University School of Commerce, respectively, and has completed 50 years in Condition Monitoring and Oil Analysis. www.conditionmonitoringintl.com