by Mike Johnson & Matt Spurlock
For decades, oil analysis has been considered a cornerstone of predictive maintenance programs in a variety of industries. If something rotates and requires oil to reduce friction between surfaces, then oil analysis is a great tool for diagnosing machine condition. Over the past two years, however, focus on oil analysis as a reliability tool has waned. With several reliability trade shows showing a noticeable absence of oil analysis related presentations, the question is: Why? Both as a topic of conference presentations and as a working condition monitoring tool, oil analysis should, at a minimum, be represented proportionally alongside other condition monitoring technologies and should be considered the primary tool for low-speed and hydromechanical machines.
The condition-based maintenance strategies (e.g., preventive, predictive, proactive) rely upon effective use of technologies to achieve their promise of reduced cost and improved productivity. Vibration, thermography, oil and ultrasonic analysis, and motor current analysis seem to be the predominant analysis methods for electrical and rotating machines, but there is a lack of proportional balance in how these are implemented.
There are a multitude of technical presentations demonstrating the strong reliance on thermography and vibration analysis within the predictive and proactive maintenance strategies for mechanical systems. Similarly, there are plenty of presentations demonstrating thermography as a universally adopted technology for condition monitoring of electrical systems and components. One can easily find presentations on ultrasonic-assisted regreasing methods, a clearly proactive approach to bearing lubrication, and the extensive benefits of this approach, particularly for high-speed bearing applications.
Machine condition monitoring via sump sampling and oil analysis is represented less and less in conference presentations, yet it is the one technology that operates in multiple maintenance strategies. When utilizing a comprehensive test slate, oil analysis is able to identify the presence of primary root causes of failure before reaching point P on the P-F curve, predominantly from accurate contamination and fluid property monitoring. When the test slate adequately covers the monitoring of machine condition, the predictive strategy is then covered. It is a multifaceted and repeatable technology, but it isn’t relied upon as “real” condition monitoring much beyond engine analysis.
We think the explanation is rooted in a lack of knowledge at the plant level. In this and the three upcoming articles in this series, we will present the specific correlation of each condition-based maintenance strategy to oil analysis and discuss the merits of multiple routine test methods as first-line approaches for determining machine and system health.
OIL ANALYSIS AS A PREDICTIVE MAINTENANCE TOOL
With the right test slate and frequency for a given machine/component, oil analysis should be able to readily identify an impending machine health problem. We’ll start with the merits and deficiencies of multiple techniques to provide wear debris analysis.
The cornerstone of nearly all routine oil testing is atomic emission spectroscopy (AES), commonly referred to as metals testing. AES is the test that provides data to us, in parts per million (ppm), related to wear metals, lubricant additive metals and contaminant metals.
There are two common types of AES performed in oil analysis laboratories: inductively coupled plasma (ICP), as shown in Figure 1, and rotating disc electrode (RDE), also known as arc spark or rotrode, as shown in Figure 2.
Figure 1: Inductively coupled plasma energy source
Figure 2: Arc emission energy source
The basic premise of these instruments is the same: apply a high-energy source to excite (vaporize) the atomic particles and measure the amount of light given off by these particles at different wavelengths, as shown in Figure 3. While there are some minor differences in the optics of these instruments, the primary difference is the source of the high energy. With ICP, arguably the most widely used method, the energy source is a plasma flame. With RDE, the energy source is a spark (think lightning bolt) delivered by a carbon electrode over a carbon disc.
Figure 3: Wear debris analysis instrument function
Both instruments do a fantastic job of measuring very small particles. In fact, ICP has a very good accuracy level with particles up to about five microns in size. Accuracy is lost between five and eight microns, with the instrument being blind to anything above eight microns in size. RDE is accurate to about eight microns, loses accuracy at eight to 10 microns and is blind to anything over 10 microns unless advanced rotrode filter spectroscopy (RFS) is employed. This is important to recognize because as the wear state progresses over time, the particles become larger and larger, eventually becoming large enough to be beyond recognition by the instrument.
Accordingly, AES can provide only a small part of the data needed to fully understand the wear-related condition of equipment, particularly when realizing that normal wear debris is considered to be up to about five microns in size. This suggests that abnormal wear debris is likely to occur well beyond the detection limits of the cornerstone laboratory test instrument. As Figure 4 suggests, AES is good at characterizing changes at the benign wear state, in front of the P on the P-F curve. If changes are made to lubrication and operating conditions based on the low-level wear debris generation, then machine and component lifecycles can be extended, with all of the resulting benefits.
Figure 4: Correlation between time, wear particle size and wear particle concentration. (From “Wear Debris Measurement” by M. Johnson, Tribology and Lubrication Technology, May 2011)
To address the particles that go undetected by AES metals analysis, other testing must be employed. The most relevant for ferrous (Fe) wear metals is the ferrous index. Several ferrous index instruments are available, but only a couple will be explained in this article, as the technology is similar across the instruments not mentioned.
The first is direct reading (DR) ferrography. It takes the ferrous particles in an oil sample and separates them into two classes: particles <5 microns in size (DS) and particles >5 microns in size (DL). The two values derived from this test are referred to simply as index values. Both are unitless and serve simply as trendable data. It is easy to understand that if either value increases, then the equipment being tested is experiencing a higher degree of wear. The values of DS and DL can be used to calculate an overall wear particle concentration (WPC), as follows:
WPC = (DL + DS) / Sample Volume
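The index arithmetic above can be sketched in a few lines of Python. The function name and the example values here are ours, for illustration only; DL and DS are the unitless index values a DR ferrography instrument reports.

```python
def wear_particle_concentration(dl, ds, sample_volume_ml):
    """Combine DR ferrography index values into a wear particle
    concentration (WPC). DL is the index for ferrous particles
    >5 microns, DS for those <5 microns; both are unitless and
    meaningful mainly as trended values."""
    return (dl + ds) / sample_volume_ml

# Illustrative values only: DL of 40 and DS of 60 from a 1 mL sample
wpc = wear_particle_concentration(40, 60, 1.0)
```

Because the indices are unitless, the absolute WPC number matters less than its trend from sample to sample on the same machine.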
The downside to DR ferrography is the testing process. Sample prep for this test requires a double dilution with a toxic chemical. Along with a manual data entry procedure, the cost of this test alone is on par with the cost of a particle count test. This has historically resulted in a perception that particle count and ferrous density on the same sample is cost prohibitive.
In order to offer a cost effective approach to ferrous density, many labs will employ the particle quantifier (PQ) index, as shown in Figure 5. The PQ index, like DS and DL, is simply an index value. The higher the value, the more ferrous metal is present in the sample. Unlike DR ferrography, however, PQ only has a single value. PQ is not sensitive to particle size since it is simply measuring a disruption to a referenced magnetic field. The amount of disruption is then calculated as the PQ index. The downside to PQ is that there truly is no separation of particle size. A single, large ferrous particle can offer the same PQ value as many smaller particles that could simply be normal wear debris.
Figure 5: Schematic of the PQ Index Instrument (Courtesy of Jack Poley at Condition Monitoring International)
When utilizing the ferrous index values in oil sample data interpretation, it is imperative that trending practices are followed. Trending ferrous index alone provides useful, but limited, insight to machine condition. An example of how ferrous density and metals analysis can be reviewed is seen in Table 1:
Table 1: Interpreting ICP iron and PQ index trends together

Scenario 1: ICP-based Fe rises while PQ rises slowly, suggesting that the bulk of the ferrous wear is, indeed, very small in size.

Scenario 2: ICP-based Fe rises slowly while PQ rises significantly, suggesting the component is developing large ferrous particles that are not being detected by ICP.
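The two trending scenarios above can be sketched as a simple heuristic. The slope threshold and the return labels below are illustrative assumptions of ours, not established lab practice; real alarm limits are set per machine and per trend history.

```python
def classify_wear_trend(icp_fe_ppm, pq_index, rise=5.0):
    """Compare per-sample rates of change in ICP iron (ppm) and PQ index
    across a series of samples. 'rise' is an arbitrary illustrative
    threshold separating a fast-rising trend from a slow one."""
    fe_slope = (icp_fe_ppm[-1] - icp_fe_ppm[0]) / (len(icp_fe_ppm) - 1)
    pq_slope = (pq_index[-1] - pq_index[0]) / (len(pq_index) - 1)
    if fe_slope >= rise and pq_slope < rise:
        # ICP sees the iron, so the bulk of the debris is very small
        return "mostly fine ferrous debris"
    if pq_slope >= rise and fe_slope < rise:
        # PQ sees ferrous mass that ICP cannot: large particles forming
        return "large ferrous particles beyond ICP detection"
    if fe_slope >= rise and pq_slope >= rise:
        return "ferrous wear rising across all sizes"
    return "stable ferrous trend"
```

For example, iron climbing 10 ppm per sample with a nearly flat PQ maps to the first scenario, while a flat iron reading with a fast-rising PQ maps to the second.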
Even with the increased level of confidence that PQ and AES metals analysis provide, another key test should be used to truly confirm that an impending issue is present.
Particle counting has historically been promoted as a means to measure contamination. While this is certainly its primary use, the data can also be used to confirm the presence of abnormal wear particles.
The current International Organization for Standardization (ISO) calibration standard for automatic particle counters is ISO 11171. When a lab is calibrated to this standard, the reported values should be the particle counts at:
≥ 4 microns
≥ 6 microns
≥ 10 microns
≥ 14 microns
≥ 21 microns
≥ 38 microns
≥ 70 microns
While the ISO code (particles ≥4, ≥6 and ≥14 microns, respectively) is generally what gets monitored (and even all that is reported by some labs), the particle values at and above the 14 micron size can provide valuable insight into the potential size of ferrous particles being picked up by the PQ. Conversely, reviewing PQ data can help confirm whether a high particle count value at the larger size ranges is a cause of abnormal wear or a potential effect of it.
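For reference, the three-number ISO cleanliness code is built from range numbers defined in ISO 4406, where each step up in range number doubles the permitted count. A minimal sketch of that mapping, assuming the standard doubling scale:

```python
import math

def iso4406_range(count_per_ml):
    """Map a particle count (particles/mL) to an ISO 4406 range number.
    Range N covers counts greater than 2**(N-1)/100, up to and
    including 2**N/100."""
    if count_per_ml <= 0.01:
        return 0  # below the bottom of the published scale
    return math.ceil(math.log2(count_per_ml * 100))

def iso_code(c4, c6, c14):
    """Build the familiar three-number code (counts/mL at >=4, >=6
    and >=14 microns), e.g. '19/17/13'."""
    return "/".join(str(iso4406_range(c)) for c in (c4, c6, c14))
```

So 2,500 particles/mL falls in range 18, and counts of 5,000 / 1,200 / 80 per mL at the three sizes report as 19/17/13.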
By having all these tests performed on routine samples, early detection of failures high on the P-F curve is probable. Once a high level of confidence in an impending failure is established, more advanced exception testing can be performed to determine the type of wear and its potential location.
One such exception test is visual determination. Visual determination within oil analysis gives meaning to the old adage that “a picture is worth a thousand words” in diagnosing equipment reliability problems. Visual determination is performed either through patch microscopy or analytical ferrography. Both methods involve reviewing the actual particle morphology through a microscope. With analytical ferrography, the focus is primarily on ferrous wear due to the slide preparation process, although some non-ferrous debris can still be observed as particles fall randomly onto the slide.
With patch microscopy, a small volume of oil is run through a 0.8 micron filter patch. All particles larger than the 0.8 micron pore size are captured for review. This method allows for a visual check of all ferrous and non-ferrous particles in the oil sample, with the only bias being size.
Figure 6: Wear debris analysis
With both methods of visual determination, the size, shape, color and a rough estimate of the volume of particles can be determined. Visual determination is primarily qualitative in nature. Using visual determination can help to home in on where and how a failure is occurring and what the root causes may be. While visual determination is a labor-intensive process, it provides the most detailed and accurate story about the machine from which the sample was drawn.
When the test methods are placed within the context of a wear mode and rate of wear, it is possible to see how AES (ICP, RDE and RFS), particle count and ferrography (PQ and DR) methods can present a holistic picture of changing machine health. Figure 6 reveals the strength of each relative to wear particle sizes.
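The idea that each test covers a different slice of the particle-size continuum can be sketched as a simple lookup. The size windows below are rough approximations pulled from the discussion in this article, not instrument specifications:

```python
# Approximate sensitivity windows in microns, per the article's discussion;
# None means no practical upper size limit for the method.
DETECTION_RANGES = {
    "ICP-AES": (0, 8),           # accurate to ~5 microns, blind above ~8
    "RDE-AES": (0, 10),          # accurate to ~8 microns, blind above ~10
    "DR ferrography": (1, None), # DS/DL split at 5 microns
    "PQ index": (1, None),       # size-insensitive ferrous density
    "Particle count": (4, 70),   # ISO 11171 reporting sizes
    "Patch microscopy": (0.8, None),  # 0.8 micron patch captures the rest
}

def methods_detecting(size_microns):
    """Return the tests expected to respond to debris of a given size."""
    hits = []
    for method, (low, high) in DETECTION_RANGES.items():
        if size_microns >= low and (high is None or size_microns <= high):
            hits.append(method)
    return hits
```

A 25 micron severe-wear particle, for instance, is invisible to both AES methods but still registers in the particle count, the PQ index and visual determination, which is exactly why the test slate needs all of them.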
This is just a review of the primary tools used in oil analysis that allow machine owners to monitor machine condition. With a properly designed test slate that meets the reliability objectives for a machine, coupled with knowledge of alarm development and data evaluation, oil analysis can and should be the go-to tool for machine condition monitoring. Machine metals analysis is a uniformly popular starting point for sample-based testing, and there are major strengths and weaknesses to the commonly used techniques. Some instruments are best for reporting the very earliest stages of component damage, while others are best for reporting symptoms of immediately pending, catastrophic destruction. Once this is understood, a properly designed test slate allows the end user to view the machine’s risk across a wide-ranging timeline, revealing a continuum between immediate and very long-term risk quite effectively. When integrated with a solid vibration program, the reliability improvement can be undeniable!
Matt Spurlock, CMRP, CLS, has 22+ years of oil analysis experience including eight years in the United States Marine Corps where he utilized oil analysis for effective troubleshooting on Amphibious Assault Vehicles. Matt has written working procedures for oil sample testing, designed and managed multiple lubrication and oil analysis programs and has consulted in the fields of lubrication and oil analysis to various industries all over the world.