MW Keynote Presentation 34:38 Minutes
by Jayant Kalagnanam, IBM Research
The use of multi-modal data, such as noisy sensor data, text from operator logs, repair manuals, and standard operating procedures, and image and video data, is at the core of the AI-driven revolution in predictive asset management. In this talk, we provide an ‘under the hood’ look at these technologies and how they power Maximo Application Suite.
Automatic learning of sparse, robust behavioral and decision models from noisy process-variable sensor data, coupled with operator logs, provides real-time monitoring for anomalies and predicts impending failures with high accuracy. We will highlight these techniques with examples from asset-heavy industries. In addition, new modalities such as images and video can be used to inspect assets and infrastructure directly for surface defects and irregularities using high-resolution imagery, with minimal burden in terms of the number of defect images required. We will provide an example of the use of drones to inspect infrastructure such as bridges. Another key dimension is hands-free interaction with Maximo, enabled by natural language interfaces and voice, so that a technician can keep their eyes and hands on the job at hand.
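To make the idea of real-time anomaly monitoring on a noisy sensor stream concrete, the following is a minimal sketch, not Maximo's actual models: a rolling z-score detector that flags readings deviating sharply from the trailing window. The function name and thresholds are illustrative assumptions, not part of the product.

```python
# Minimal illustrative sketch of streaming anomaly detection on a noisy
# process variable. All names and parameters here are hypothetical.
import random
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag indices whose reading deviates more than `threshold`
    standard deviations from the trailing window's statistics."""
    history = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(x)  # update the window after scoring
    return anomalies

# A steady noisy signal with one injected spike at index 30.
random.seed(0)
stream = [10.0 + random.gauss(0, 0.1) for _ in range(50)]
stream[30] = 15.0
print(detect_anomalies(stream))  # the spike index (30) is flagged
```

In production, such simple statistics would be replaced by learned sparse models conditioned on operator-log context, but the monitoring loop has the same shape: score each reading against a model of recent normal behavior, then alert on large deviations.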
We will also highlight some of the emerging capabilities for AI Lifecycle management, which provide automation tools for data contextualization, model building, and model management in day-to-day operations within Maximo. Data dictionaries capture the tag and asset hierarchy and infer the semantics needed to provision data for automatic exploration of a purpose-built library of algorithms, identifying the best models for monitoring and prediction. We will discuss how the system is designed to provide explanations for interpretability, to monitor deployed models for drift, and to automatically update models with more recent data to improve performance.
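The drift-monitor-then-update loop described above can be sketched as follows. This is a hedged illustration under simple assumptions (drift detected as a shift in the mean relative to the training baseline); the class and method names are hypothetical and do not reflect Maximo's API.

```python
# Hypothetical sketch of the drift-monitoring lifecycle: compare recent
# data against the training-time baseline, and refresh the baseline
# (standing in for retraining) when drift is detected.
from statistics import mean, stdev

class DriftMonitor:
    def __init__(self, training_data, threshold=2.0):
        self.mu = mean(training_data)
        self.sigma = stdev(training_data)
        self.threshold = threshold

    def check(self, recent_data):
        """Return True if the recent mean has shifted by more than
        `threshold` baseline standard deviations."""
        shift = abs(mean(recent_data) - self.mu)
        return shift / self.sigma > self.threshold

    def update(self, recent_data):
        """Refit the baseline on more recent data."""
        self.mu = mean(recent_data)
        self.sigma = stdev(recent_data)

monitor = DriftMonitor([10.0, 10.1, 9.9, 10.2, 9.8])
recent = [12.0, 12.1, 11.9, 12.2, 11.8]
if monitor.check(recent):       # the process mean has shifted upward
    monitor.update(recent)      # refresh the model on recent data
print(monitor.check(recent))    # prints False after the update
```

A real deployment would compare full distributions and retrain the underlying model rather than just refitting summary statistics, but the control flow, detect drift, then update with recent data, is the same.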