When it comes to root cause analysis, there can be many contributing causes. An asset management system (AMS) might have problems such as the following:
(physical) No failure data whatsoever was captured inside the AMS (though some may exist in an Excel spreadsheet somewhere).
(human) Text-only entry (i.e., a problem statement) with no actionable data (validated fields), which is critical for failure analysis.
(systemic) No blended training was provided during implementation or post go-live, so users never understood the purpose of properly entered failure data.
(latent) A general "mistrust of management" as to the intention behind the AMS, which creates a lack of buy-in.
Unless you want the AMS to be perceived as a work order (WO) ticket system, you need a vision. Further, this vision needs a roadmap to get there. And within the roadmap you need clear business rules, process flows, data accountability, and reports. To make all of this happen there needs to be "blended training", process reviews, error checks, and user surveys. Most organizations have only a few of these elements in place.
Creative thought is needed. The implementation team should have welcomed input from the consultants on industry best practices. That dialog should have included discussion of advanced processes; this is where true return on investment (ROI) occurs. It is a mistake to assume the AMS product ships with the analytics (reports) you need. Instead, you must create them and link inputs to outputs. If after 10 years you only have a time-reporting system, some might call this a failure.
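Linking inputs to outputs can be as simple as aggregating work order history into a "worst offenders" report. The sketch below is illustrative only: the record fields (asset, problem_code, downtime_hrs) are assumptions standing in for whatever your AMS actually exports, not fields from any specific product.

```python
from collections import defaultdict

# Hypothetical work order records exported from an AMS; the field
# names below are assumptions for illustration, not real AMS fields.
work_orders = [
    {"asset": "PUMP-101", "problem_code": "LEAK",  "downtime_hrs": 4.0},
    {"asset": "PUMP-101", "problem_code": "VIBR",  "downtime_hrs": 6.5},
    {"asset": "FAN-220",  "problem_code": "NOISE", "downtime_hrs": 1.0},
    {"asset": "PUMP-101", "problem_code": "LEAK",  "downtime_hrs": 3.0},
]

def worst_offenders(orders):
    """Rank assets by failure count, then by total downtime."""
    stats = defaultdict(lambda: {"failures": 0, "downtime_hrs": 0.0})
    for wo in orders:
        s = stats[wo["asset"]]
        s["failures"] += 1
        s["downtime_hrs"] += wo["downtime_hrs"]
    return sorted(
        stats.items(),
        key=lambda kv: (kv[1]["failures"], kv[1]["downtime_hrs"]),
        reverse=True,
    )

for asset, s in worst_offenders(work_orders):
    print(f'{asset}: {s["failures"]} failures, {s["downtime_hrs"]} hrs down')
```

The point is not the code itself but the dependency it exposes: this report is only as good as the validated failure data entered on each work order.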
But what if you never got this advice?
The Core Team should take the lead in the search for continuous improvement. There are several benchmarking techniques, one of which is attending user forums. At software-centric user forums you will learn about new features and hear how others are using the software. There are also non-software-specific venues that focus on asset management and are attended by industry experts who speak about condition monitoring (PdM), defect elimination, RCM/FMEA, and failure analysis. If an organization never attends the latter, there is a chance it will miss out on the latest AMS trends in support of reliability and uptime.
What if a tipping point has been reached?
There's a chance you might not know. Without a business analyst you would not "have the pulse" of the user community. The maintenance technicians might see the AMS as a micro-management tool. No one ever told them about failure data and failure analysis. The engineering staff also might not trust the AMS and choose to use a separate database to trend failures. And the operators might have a separate spreadsheet to track equipment status and recurring problems. They all might complain about inadequate training or lack of reports. But due to frustration they stopped reporting these issues. Once a tipping point is reached it is nearly impossible to restore faith.
Failure analysis typically cannot be performed for one or more of the following reasons:
1) Users not entering failure data
   - Failure code hierarchy not created or properly built out
   - Downtime too difficult to capture
   - Users never trained to enter failure data; unclear business rules
   - Mandatory field rules not deployed (i.e., at COMP status)
2) No failure data
   - No problem codes
   - No failed components
   - No cause codes
   - No cost data by asset; no replacement costs
   - No work order feedback; no asset condition
3) No analytical reports exist to identify the problem assets
4) No reliability team exists to review the analytical reports
5) Inability to make value-add decisions using the AMS with regard to "worst offenders" (i.e., the data was not trusted in terms of accuracy)
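Mandatory field rules such as those mentioned above can be enforced at work order completion. The following is a minimal sketch of such a rule, assuming a "COMP" (completed) status and illustrative field names (problem_code, failed_component, cause_code, downtime_hrs) that are not tied to any specific AMS product.

```python
# Fields assumed (hypothetically) to be required before a work order
# can be set to COMP status; real field names vary by AMS.
REQUIRED_AT_COMP = ("problem_code", "failed_component",
                    "cause_code", "downtime_hrs")

def missing_fields(work_order):
    """Return the required failure-data fields the work order lacks."""
    return [f for f in REQUIRED_AT_COMP if not work_order.get(f)]

def close_work_order(work_order):
    """Refuse to set COMP status until failure data is captured."""
    gaps = missing_fields(work_order)
    if gaps:
        raise ValueError(
            f"Cannot set COMP status; missing: {', '.join(gaps)}")
    work_order["status"] = "COMP"
    return work_order
```

A rule like this closes the loop between items 1 and 2 above: if the work order cannot be completed without validated failure data, the downstream analytical reports have something to analyze.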
So, when senior management asks why the system is not providing value, the answer will lie in one or more of the reasons above. As the acting Core Team, it is important to know which cause applies so that the process can be improved and the AMS can become a true knowledge base.
John Reeve is a Manager/Practice Leader for Maintenance & Reliability Solutions at Cohesive Information Solutions, Inc.