Today, you may have some basic condition monitoring on your most critical assets using technology such as vibration monitoring. You may have threshold alerts on certain process variables. This technology has helped you increase your reliability, but you are still experiencing unplanned downtime and your product quality and energy efficiency have room for improvement.
Artificial intelligence (AI), machine learning (ML), and automated process analytics are all the buzz, and you are wondering how they might help you reach the next level of maturity, quality, and performance. AI/ML vendors are promising you the world. But you have heard stories about AI/ML projects consuming money and effort without succeeding, and you don’t want to be that guy. Installing new IoT sensors is expensive. Convincing your colleagues to get on board burns social capital. Changing your processes is risky and can be costly. Perhaps you have even tried an AI/ML project and found the results underwhelming.
Challenges to the success of machine learning initiatives include vendor misrepresentation, data quality issues, lack of appropriate training data, the complexity of feature engineering, false positives, repeatability, and explainability. This presentation will cut through the hype and explain the core capabilities of machine learning in the context of reliability engineering. It will also bring machine learning “back to the physics,” arguing that the direct application of physics and engineering knowledge can increase the success of machine learning projects.
The objectives of this presentation are to enable an advanced reliability engineer to explain to their colleagues:
What machine learning is;
How machine learning can help with reliability;
How to frame a reliability problem as a machine learning problem;
How to use physics to make machine learning more practical; and
How to use free software tools for do-it-yourself (DIY) experimentation.
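As a taste of what "back to the physics" and DIY experimentation can look like, here is a minimal sketch in plain Python (standard library only). It computes a classic physics-derived feature for rolling-element bearings, the ball-pass frequency of the outer race (BPFO), and applies a simple statistical threshold (mean plus three standard deviations of a healthy baseline) to the vibration energy in that band. The bearing geometry, baseline values, and function names are illustrative assumptions, not material from the presentation itself.

```python
import math
import statistics

def bpfo_hz(shaft_hz, n_balls, ball_d, pitch_d, contact_deg=0.0):
    """Ball-pass frequency, outer race (Hz): a physics-derived bearing
    fault frequency computed from shaft speed and bearing geometry."""
    return (n_balls / 2.0) * shaft_hz * (
        1.0 - (ball_d / pitch_d) * math.cos(math.radians(contact_deg))
    )

def is_anomalous(baseline, new_value, k=3.0):
    """Flag a reading that exceeds baseline mean + k standard deviations."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return new_value > mu + k * sigma

# Hypothetical bearing: 9 balls, 7.9 mm ball diameter, 39 mm pitch
# diameter, shaft turning at 29.95 Hz (about 1797 RPM).
fault_freq = bpfo_hz(29.95, 9, 7.9, 39.0)
print(round(fault_freq, 1))  # spectral line to watch, in Hz

# Illustrative healthy-baseline band energy (g RMS) near the BPFO line.
baseline = [0.10, 0.12, 0.11, 0.09, 0.10, 0.13, 0.11, 0.10]
print(is_anomalous(baseline, 0.45))  # energy at BPFO has jumped
print(is_anomalous(baseline, 0.12))  # still within normal variation
```

The physics does the feature engineering here: instead of asking a model to discover which spectral lines matter, the bearing geometry tells you where to look, which reduces the training data needed and makes any alert explainable.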