
Artificial Intelligence: A Primer for the Reliability Community

Artificial intelligence, or AI, is the simulation of human intelligence processes by machines, especially computer systems. These processes include:

Learning – the acquisition of information and rules for using the information;

Reasoning – using the rules to reach approximate or definite conclusions;

Self-correction – automatically making adjustments.

These human intelligence processes are collectively referred to as cognition. AI, therefore, can be defined as the simulation and automation of cognition using computers. Particular applications of AI include expert systems, speech recognition and machine vision.

AI, in itself, is a broad term that includes things like natural language recognition. Rule-based expert systems built in the past for industrial applications, including machinery health, are the simplest form of AI. In the context of the Industrial Internet of Things (IIoT), Industry 4.0, or Smart Industry, the specific subset of AI that is relevant is machine learning.

What Is Machine Learning (ML)?

A machine, in this context, is any computing system, from clusters of computers, often in the cloud, down to small sensors (increasingly so in the future). People have been using computers for decades to solve problems. A program running on a computer, when given some inputs, provides an output. The programming technique used in this case is called explicit programming. It is explicit because a human writes a set of instructions (i.e., a program) that repeatedly solves the problem according to those instructions (i.e., logic). Two points to note about this type of computing:

  • It is given a set of instructions by a human on how to solve the problem.
  • It will not learn or get any better with experience.

ML differs from explicit programming in two ways:

  1. ML creates the program itself. This program is referred to as an algorithm, a model or, sometimes, an agent, and it learns from the data it is given.
  2. Its ability to solve the problem gets better with experience. In other words, ML learns. Can you see the similarity to human learning? A short sketch contrasting the two approaches follows this list.
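
To make the contrast concrete, here is a minimal sketch in Python. The vibration threshold, readings and labels are hypothetical, and the learned model uses scikit-learn's DecisionTreeClassifier purely as an illustration.

```python
# A minimal sketch contrasting explicit programming with machine learning.
# The pump readings, threshold and labels below are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Explicit programming: a human writes the rule, and it never improves.
def pump_alarm_explicit(vibration_mm_s: float) -> bool:
    return vibration_mm_s > 7.1  # fixed threshold chosen by an engineer

# Machine learning: the "rule" (here, a decision tree) is learned from
# examples and can be re-learned as more data arrives.
vibration_readings = [[2.1], [3.0], [4.5], [7.8], [9.2], [10.5]]  # mm/s
had_failure =        [0,     0,     0,     1,     1,     1]       # labels

model = DecisionTreeClassifier(max_depth=1).fit(vibration_readings, had_failure)
print(model.predict([[8.0]]))  # the learned rule flags this reading
```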

How Does This Learning Happen?

Not very differently from humans, actually: through education (i.e., learning from examples), curiosity, intuition, experience, success (i.e., rewards) and failure.

Supervised learning

The learning by example method of ML is referred to as supervised learning. Humans provide the computer with a lot of data on the different attributes or variables related to an object (e.g., a pump) or a situation (e.g., cavitation). These attributes are referred to in ML as features. If you were creating an ML model to determine pump health, pressure, flow, vibration and temperature would be features.

Now, let’s say your algorithm is to detect when cavitation is likely to occur. You have a lot of historical data on the different features and you have examples of when cavitation happened. These examples are referred to in ML as labels. Certain correlations between the features start to appear as cavitation approaches. Shown enough examples of features and labels, the algorithm tries to approximate a function or, more simply, create a mathematical representation that can be used in the future to recognize similar correlations of features (i.e., patterns) and predict the outcome (cavitation).

In “approximate a function,” approximate is a key word. Most ML algorithms have their origins in statistics, so they are subject to such things as probabilities and approximation. How well will this function be able to detect patterns in the future that it has not seen explicit examples or labels for? In ML, the question is: how well will it generalize? That’s where data scientists add the magic, but the explanations might get too technical! However, the important things for asset experts to know are:

  • For supervised learning, you need measurement of features (IIoT anyone?);
  • A lot of data on the features;
  • Labeled data on the situation (i.e., outcome).

The quantity and quality of data matter. If you give the algorithm too little data, it is likely not to generalize well or, as they say in ML, it will have bias, and bias leads to poor decisions. If the quality of the data is bad, you will have trained your model on noise and the model will not be very accurate.
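
As an illustration of the pump cavitation example, here is a minimal supervised learning sketch in Python using scikit-learn. The sensor values, units and the rule used to generate the labels are entirely synthetic assumptions, not real pump data.

```python
# A minimal supervised-learning sketch for the cavitation example.
# All data is synthetic; feature names and units are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 500

# Features: pressure, flow, vibration, temperature (hypothetical units)
X = np.column_stack([
    rng.normal(5.0, 1.0, n),   # suction pressure, bar
    rng.normal(120, 15, n),    # flow, m3/h
    rng.normal(3.0, 0.8, n),   # vibration, mm/s
    rng.normal(60, 5, n),      # temperature, deg C
])

# Labels: assume (for illustration) cavitation occurred at low pressure and high flow
y = ((X[:, 0] < 4.2) & (X[:, 1] > 125)).astype(int)

# Hold out data the model has never seen to check how well it generalizes
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("accuracy on unseen data:", accuracy_score(y_test, model.predict(X_test)))
```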

Unsupervised learning

What if you don’t have a lot of labeled data? That’s where unsupervised learning, akin to learning by curiosity in humans, comes in handy. Given a lot of data, the algorithm explores it and finds unique patterns or groups of feature correlations, and can approximate a function to tell the difference between similar and dissimilar, normal and abnormal, or what “belongs to the family” versus what is an outlier. Compared to supervised learning, unsupervised learning generally requires a lot more data.

Unsupervised learning is commonly used for anomaly detection and outlier detection.
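
Here is a minimal sketch of unsupervised anomaly detection, assuming synthetic, unlabeled sensor data; IsolationForest from scikit-learn is just one common choice of algorithm.

```python
# A minimal unsupervised-learning sketch: anomaly detection without labels.
# The readings are synthetic and the feature choice is hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Unlabeled historical readings of two features, e.g. vibration and temperature
normal_operation = rng.normal(loc=[3.0, 60.0], scale=[0.5, 2.0], size=(1000, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_operation)

# New readings: the second one is far from anything seen before
new_readings = [[3.1, 61.0], [9.5, 95.0]]
print(detector.predict(new_readings))  # 1 = "belongs to the family", -1 = outlier
```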

Deep learning

A specialized form of machine learning, deep learning mimics the workings of neurons in the human brain rather than taking a purely statistical approach. Each neuron calculates a function and communicates the result to neurons in the next layer (like synapses firing in the human brain), which then perform their own calculations, and so on, until an answer can be computed. Each layer has not just one, but multiple neurons, and the output of any given neuron is assigned a significance, or weight. Finding the right weights for the neuron outputs in each layer determines the accuracy of the result. This is similar to the formation of human intuition and other cognition (e.g., object and color recognition). And just as with human intuition or cognition, it is not easy to interpret how a deep learning algorithm arrives at its answer. However, also like human intuition, it takes a lot of data to learn, is generally better at handling “noise” and can be highly accurate.

Deep learning is commonly used for image recognition and speech recognition.
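
To get a feel for the layered, weighted structure described above, here is a minimal sketch using scikit-learn's MLPClassifier, a small neural network. Real deep learning models have many more layers and need far more data; the features and labels here are synthetic assumptions.

```python
# A minimal sketch of a small neural network, the building block of deep learning.
# Synthetic sensor features and an illustrative health label.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

X = rng.normal(size=(400, 4))                  # four sensor features, synthetic
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # hypothetical health label

# Two hidden layers of neurons; training adjusts the weights between them
net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
net.fit(X, y)

print(net.predict(X[:5]))  # predictions computed from the learned weights
```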

There are a few other machine learning techniques, such as reinforcement learning and generative adversarial networks, which are topics for subsequent articles.

Deployment

A model created using any of the previously described machine learning techniques is then connected to real-time process, electrical and condition data to provide real-time predictions. However, to qualify as a true machine learning based system, the model cannot be static; its learning must improve over time as it is exposed to new data and user feedback.
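
One way such a deployment could look is sketched below with scikit-learn's SGDClassifier, which supports incremental updates via partial_fit. The data source, feature layout and feedback mechanism are all assumptions for illustration.

```python
# A minimal deployment sketch: score incoming readings in real time and
# keep learning from new labeled data (user feedback). All data is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
model = SGDClassifier()

# Initial training on historical data
X_hist = rng.normal(size=(200, 4))
y_hist = (X_hist[:, 0] > 0).astype(int)
model.partial_fit(X_hist, y_hist, classes=[0, 1])

def on_new_reading(features, label=None):
    """Score a live reading; if feedback (a label) arrives, learn from it."""
    prediction = model.predict([features])[0]
    if label is not None:
        model.partial_fit([features], [label])  # model keeps improving over time
    return prediction

print(on_new_reading(rng.normal(size=4)))
```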

IIoT

So what does IIoT have to do with ML, or vice versa? You probably have the answer by now. To build a model for prediction (i.e., real predictive maintenance), you need features. These features come from sensors installed on the asset. And, IIoT is just a buzzword for sensors installed on industrial assets.

In the context of ML for asset condition management (ACM), the sensors are not necessarily condition sensors, like vibration. Asset health prediction using ML can be done without any condition sensors for process-induced failures, or with a combination of process and condition sensors. It can also be applied simply to automate condition-based maintenance (CBM), such as first-pass analysis of vibration data.

Rajiv Anand

Rajiv Anand is the cofounder and CEO of Quartic.ai, a company focused on providing machine learning and artificial intelligence solutions for industrial applications, industrial IoT and smart industry. Rajiv held key engineering, management and leadership positions with Emerson’s Lakeside Process Controls prior to starting Quartic.ai. Rajiv spent a year researching industrial IoT and machine learning, and advising technology companies and customers on digital manufacturing strategies. www.quartic.ai