What determines the criticality of a system can vary widely and may be based on a number of factors. At our site, it is often the business unit the system supports and the degree to which that system's function may affect outcomes, including scientific data. When seeking a vendor's expertise to meet a reliability need, it is necessary to first evaluate that need based on system criticality. Once a thorough review and ranking is complete, the required reliability services can be determined. When weighing available vendor options, a selection matrix is a valuable tool for evaluating candidates and improving the fit between the vendor and the service desired.
The Foundation of a Successful Partnership
All partnerships take effort to establish and maintain, and this is especially true of a reliability partnership. It may take a substantial number of hours to determine the needs of a single system. At the very least, it requires acknowledging the level of expertise required and whether that expertise is available. In the examples given here, I had the advantage of a staff reliability engineer, a mechanical engineer and several exceptionally skilled craftspeople with many years of dedication to their chosen fields, all of whom I am proud to call colleagues and to support through my work as a CMMS administrator. I joined this group as part of an effort by the reliability engineer to reestablish a reliability program.
When our site transferred to new ownership, we lost some capabilities. Our reliability program consists of several teams, each with a leader, craftsperson(s) and/or a subject matter expert (SME). The team leaders of each group also sit on a larger advisory reliability group, acting there as representatives for their teams. The reliability group's first order of business was to perform a criticality assessment of all assets currently tracked in our computerized maintenance management system (CMMS). Each asset was scored against a predefined set of criticality criteria, any corresponding weighting (multiplier) was applied, and the result was a numerical representation of the asset's criticality (see Figure 1).
Figure 1: Criticality assessment
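The scoring step described above is simple arithmetic: each criterion's score is multiplied by its weighting and the products are summed. The sketch below illustrates the idea; the criterion names and weights are hypothetical, not those from the site's actual assessment.

```python
# Hypothetical criticality criteria and weightings (multipliers).
# The real criteria and weights would come from the site's assessment.
WEIGHTS = {
    "safety_impact": 3,
    "data_integrity": 3,
    "production_impact": 2,
    "repair_cost": 1,
}

def criticality_score(scores):
    """Multiply each criterion's 1-5 score by its weight and sum the products."""
    return sum(WEIGHTS[name] * score for name, score in scores.items())

# Example asset scored 1-5 against each criterion.
pump_101 = {"safety_impact": 4, "data_integrity": 5,
            "production_impact": 3, "repair_cost": 2}
print(criticality_score(pump_101))  # 3*4 + 3*5 + 2*3 + 1*2 = 35
```

The single weighted total makes assets directly comparable, which is what allows the ranking step that follows.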
Once we identified the needs of our critical assets, we began evaluating our capabilities. Many of the reliability program functions were implemented quickly when our site was transferred to new ownership, and it was during that assessment that we identified a need for program improvements in certain areas. Since we had lost the capability to perform vibration analysis ourselves, we were collecting the data and sending it out to a vendor for analysis. This disconnect between the analyst and the equipment was identified as the probable cause of our vibration program's poor performance. We essentially needed to hire a vendor who could bridge the gap between our identified need for reliability data and reliable systems. It was this process of ranking critical assets and evaluating needs that gave me the idea of ranking potential vendors by a similar method.
Figure 2 offers an example of a scorecard that can be used for this purpose. Once each vendor had been evaluated, team members reported their scores and the average of each criterion from all evaluators was recorded in a comparative matrix (see Figure 3).
Figure 2: Vendor selection scorecard
Figure 3: Vendor selection matrix
This tool became what is now our standard selection matrix. Once all averaged criteria scores were recorded for each vendor, we could view the score totals side by side for final ranking. When the leading vendor was identified, contract negotiations proceeded and were finalized shortly thereafter.
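The averaging-and-ranking step described above can be sketched as follows; the vendor names, criteria and scores here are invented for illustration only.

```python
from statistics import mean

# Hypothetical evaluator scores (1-5) per criterion, per vendor.
evaluations = {
    "Vendor A": {"work_quality": [4, 5, 4], "scheduling": [4, 4, 4]},
    "Vendor B": {"work_quality": [3, 3, 4], "scheduling": [5, 4, 5]},
}

def comparative_matrix(evals):
    """Average each criterion across evaluators, then total the averages per vendor."""
    matrix = {}
    for vendor, criteria in evals.items():
        averaged = {name: mean(scores) for name, scores in criteria.items()}
        matrix[vendor] = {"criteria": averaged, "total": sum(averaged.values())}
    return matrix

# Rank vendors by total averaged score, highest first.
ranked = sorted(comparative_matrix(evaluations).items(),
                key=lambda item: item[1]["total"], reverse=True)
for vendor, row in ranked:
    print(vendor, round(row["total"], 2))
```

Averaging each criterion before totaling keeps any single evaluator's bias from dominating the comparison.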
Building and Maintaining a Successful Reliability Partnership
We liked the comparative analysis approach so much that we agreed to evaluate the vendor's performance again during the initial setup phase and after each service interval. In this way, we are able to trend changes in the vendor's performance over time. Using a performance matrix (see Figure 4), we identified a few service issues and worked with our vendor to address them. An added advantage of this method is that it facilitates structured feedback to the vendor. Essentially, the tool tracks and trends service performance.
Figure 4: Vendor performance matrix
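Trending is just the change in the weighted total from one service interval to the next. A minimal sketch, with made-up totals:

```python
# Hypothetical weighted performance totals recorded after each service interval.
history = [172, 168, 158, 161]

def interval_changes(totals):
    """Signed change in total score between consecutive service intervals."""
    return [later - earlier for earlier, later in zip(totals, totals[1:])]

print(interval_changes(history))  # [-4, -10, 3]
```

A run of negative changes is the kind of signal that would prompt the feedback conversation with the vendor described above.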
The criteria we used for evaluation are based on our specific needs, as identified by polling our team members. Even though all service types are not the same, similar qualities may be objectively evaluated and trended over time. Accordingly, we based our specific criteria on these 10 basic categories (see Figure 5).
Figure 5: Evaluation criteria
Using work quality and scheduling as examples, here is how these categories can be used as a base for criteria creation:
Work Quality – Report quality, Verified quality, Function of a finished product, Fitness of design.
Scheduling – Impacts other schedules, Rework, Met due date, Met project timeline.
If we are willing to subject ourselves and our employees to rigorous performance management and evaluation standards, why would we accept less from our service providers? Certainly, the information we gather on vendor performance is not as granular as a performance management document used to track personal growth. But at what point should we reevaluate a vendor's status as a service provider? I consider anything less than average to be unacceptable performance. With this in mind, I set the midpoint of all possible scores as that action point. In a matrix such as the one described here, with 10 weighted criteria scored from one to five, that number is 165. This score is also carried into the contracted services list where we track vendor information and contract expenses (see Figure 6). If the value falls too low, conditional formatting flags the vendor's performance score.
Figure 6: Contracted services list
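The 165 action point falls out of the scoring arithmetic: with 1-to-5 scoring, a vendor's minimum possible total is 1 times the sum of the weights and the maximum is 5 times that sum, so the midpoint is 3 times the weight sum. A midpoint of 165 therefore implies weights summing to 55; the actual weights are not given here, so the example below assumes weights of 1 through 10 purely for illustration.

```python
def score_midpoint(weights, low=1, high=5):
    """Midpoint of the possible weighted-total range for the given criterion weights."""
    total_weight = sum(weights)
    return (low * total_weight + high * total_weight) / 2

# Assumed weights of 1 through 10 for the ten criteria (sum = 55),
# which reproduces the 165 action point for 1-5 scoring.
print(score_midpoint(range(1, 11)))  # 165.0
```

Any weight set summing to 55 yields the same action point, so the check generalizes if the weights change.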
The use of selection/evaluation matrices forces us to objectively examine potential vendors based on real needs rather than subjective personal preferences. I believe the process also fosters competition among potential vendors and makes it less likely that a vendor is chosen solely on the basis of a past relationship. After all, if we require our operations to be reliable, shouldn't we also expect reliable performance from our service providers?
Ward Bond is a CMMS Technician at Covance Inc. in Greenfield, Indiana and an engineering student. In his role as segment administrator for Covance, he supports CMMS users at his company’s Greenfield and Indianapolis facilities. He is a member of his company’s reliability group and leads their site predictive maintenance reliability team. www.covance.com