How Industrial Intelligence Takes Quality Management to a New Level
Artificial Intelligence is everywhere in our daily lives: it’s there when we search the internet, drive our smart cars or undergo a medical examination. The explosion of successful, and sometimes frightening, AI applications is directly related to the digitalization of our society. But AI is also playing an increasingly important role in industry, especially in metals production. The availability of digital production data is a real novelty and a trigger of a new industrial revolution! Learn how new technologies like AI and Machine Learning can help you improve your production quality and finally bring silence to the shopfloor.
Knowledge-based AI technology has been used in industrial quality management for many years. For example, PSImetals Order Dressing uses expert system technology to plan and dress a production order, determining all the necessary production steps and process details for producing a heat, slab, coil, plate, tube, etc. The expertise of human quality and production experts is modeled in a configurable knowledge base. Moreover, PSImetals Quality uses an expert rule base and its quality indicators to evaluate the quality after each production step and to decide on necessary quality measures. During the life cycle of a coil, tube or plate through a steel mill, roughly 500,000 data characteristics are generated.
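To make the knowledge-based approach concrete, here is a minimal, hypothetical sketch of an expert rule base in Python. Each rule encodes one piece of expert knowledge as an explicit condition that is checked after a production step; all thresholds and attribute names below are invented for illustration and do not reflect the actual PSImetals rule base.

```python
# Minimal sketch of a knowledge-based quality rule base: each rule encodes
# expert knowledge as an explicit, human-readable condition.
# Thresholds and attribute names are invented for illustration.

RULES = [
    ("thickness within tolerance",
     lambda c: abs(c["thickness_mm"] - c["target_thickness_mm"]) <= 0.05),
    ("coiling temperature in range",
     lambda c: 550 <= c["coiling_temp_c"] <= 650),
    ("surface defect density acceptable",
     lambda c: c["defects_per_m2"] < 0.2),
]

def evaluate(coil):
    """Apply every rule; return the quality decision and the failed rules."""
    failed = [name for name, rule in RULES if not rule(coil)]
    decision = "release" if not failed else "hold for quality review"
    return decision, failed

coil = {"thickness_mm": 2.03, "target_thickness_mm": 2.00,
        "coiling_temp_c": 610, "defects_per_m2": 0.05}
print(evaluate(coil))  # ('release', [])
```

The strength of this style is transparency: every decision can be traced back to a named rule. Its limit is that someone has to write and maintain every rule by hand, which is exactly where the data-driven techniques below come in.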
These data open the door for data-driven AI technologies into the world of quality management!
Deep Qualicision - Learning From Examples
Data analysis technologies like Fuzzy Logic and Neural Networks learn from examples. Thus, instead of defining strict rules for calculating quality indicators, we can train a model with examples. These models can then correlate tens of millions of data characteristics and categorize quality indicators in the same way they learned from expert examples and expert rules. Deep Qualicision is one such data analysis tool: it derives quality data labels directly from raw data. Labeling quality data is a first but essential step toward creating added value from all available digital data.
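To illustrate the shift from hand-written rules to learning from examples, here is a minimal, hypothetical Python sketch: instead of coding thresholds, a tiny nearest-centroid model learns quality categories directly from expert-labeled examples. The feature values and labels are invented, and the method is a deliberately simple stand-in for the fuzzy and neural techniques named above.

```python
# Minimal sketch of learning a quality label from expert-labeled examples
# instead of hand-coding threshold rules. All numbers are invented.

def train_centroids(examples):
    """Compute one centroid (mean feature vector) per quality label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Expert-labeled examples: (surface roughness, defect density) -> label
training_data = [
    ([0.1, 0.2], "prime"), ([0.2, 0.1], "prime"),
    ([0.9, 0.8], "downgrade"), ([0.8, 0.9], "downgrade"),
]
model = train_centroids(training_data)
print(predict(model, [0.15, 0.15]))  # close to the "prime" examples
```

The point is the workflow, not the algorithm: experts supply labeled examples, the model generalizes them, and no explicit rule for "prime" versus "downgrade" ever has to be written down.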
Defect Image Detection With Machine Learning and Neural Networks
Machine Learning and Neural Networks are perfectly suited to interpreting images and to recognizing and categorizing defects in defect images. PSI has developed a defect detection service based on Convolutional Neural Network technology to identify and categorize surface defects.
Models for surface-defect detection find defects much faster and more precisely than any human being!
The same technology can also be applied to other image-based quality areas, such as scrap categorization, shape defects and ultrasonic testing.
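To give a feel for the core building block of a Convolutional Neural Network, the following hypothetical Python sketch slides a single 2-D convolution filter over a toy grayscale image. A real defect-detection service learns thousands of such filters from labeled defect images; this one hand-set filter merely highlights vertical intensity jumps, as a scratch-like defect would cause. The 6x6 "image" is invented for illustration.

```python
import numpy as np

# A single convolution filter, the building block CNNs stack and learn.
# The toy image contains a bright vertical line: our stand-in "scratch".
image = np.zeros((6, 6))
image[:, 3] = 1.0

# A hand-set vertical-edge detector (learned automatically in a real CNN).
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)

def conv2d(img, k):
    """Valid 2-D convolution: slide the kernel over the image."""
    kh, kw = k.shape
    out_h, out_w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for y in range(out_h):
        for x in range(out_w):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * k)
    return out

response = conv2d(image, kernel)
print(response.max())  # strong response where the "scratch" begins
```

A trained network combines many layers of such filters with nonlinearities, so that later layers respond not to raw edges but to whole defect patterns, which is what makes the categorization of surface defects possible.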
Download the brochure to find out more about image recognition based surface defect detection!
- Much faster & more precise than any human being
- Improve quality of final products
- Reduce production cost
- Improve efficiency
How to Predict Invisible Defects
Machine Learning techniques are able to embed overall experience from the past into a so-called predictive model. It is somewhat astonishing, or at least thought-provoking, that broad knowledge, often grown over decades, can be skillfully packed into a model. The beauty and usefulness of this approach lie in the fact that the model can later be used in a variety of circumstances. This is exactly what happens when we think of the quality prediction of coils leaving the hot strip rolling mill (HSM).
These coils have yet to undergo further technological steps (see figure 1: the red steps have already been taken, the blue steps are planned), but serious defects may already be present at this stage, and they are not easily accessible for optical inspection. The difficulty is that the steel is coiled, so most of its surface is simply hidden!
On the other hand, the correct handling of defective coils can bring considerable advantages. Depending on the type and severity of the defect, the coil can be directed to another production line or, in extreme cases, returned to the very beginning of the steel life cycle. So we need information about potential defects, but we cannot simply obtain it. What we do have is the data describing the coils produced in the past and the defects discovered on them.
A natural idea that comes to mind is to correlate the information describing the coil (we call this the set of predictors) with the defects that occur (the targets). This is where Machine Learning comes into play.
By combining the collected data with a suitable statistical model architecture, we can create a tool that reliably predicts how likely a defect is.
For this to happen, we take into account all available information, such as the chemical composition of the slab, the steel grade, the description of the end customer, process information from production, etc. One could imagine all this information as a huge vector of numbers; let’s call it F.
We have this data for a variety of past cases, which together form a representative sample. But having the data alone is not enough: we also need to know how to use this complex information! Depending on the circumstances, we take one of the available Machine Learning techniques, usually a Neural Network or Extreme Gradient Boosting, and train the model so that we can later take the vector F of an unknown coil and calculate the probability of a defect. At a high level, we can imagine the predictive model as a nonlinear function that converts this vector into a probability:

p = ML(F)
The ML function has a certain parameterization, and the entire training effort is focused on finding this parameterization. This is achieved by exposing the model to the whole, carefully pre-processed set of historical data.
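As a minimal, hypothetical illustration of what "finding the parameterization" means, the Python sketch below trains a tiny logistic-regression model, a much simpler stand-in for the neural networks or gradient boosting mentioned above, by gradient descent on invented historical coil data. Each feature vector F is mapped to a defect probability, and the training loop adjusts the weights so that the predictions match the defects observed in the past.

```python
import math

# Toy stand-in for the predictive model p = ML(F): a logistic regression
# trained by gradient descent. Features and labels are invented; real
# models (neural networks, gradient boosting) are far richer, but follow
# the same principle: training searches for the parameterization.

def ml(weights, bias, f):
    """Map a feature vector F to a defect probability in (0, 1)."""
    z = bias + sum(w * x for w, x in zip(weights, f))
    return 1.0 / (1.0 + math.exp(-z))

def train(data, n_features, epochs=2000, lr=0.5):
    """Find the parameterization (weights, bias) from historical data."""
    w, b = [0.0] * n_features, 0.0
    for _ in range(epochs):
        for f, y in data:
            p = ml(w, b, f)
            err = p - y  # gradient of the log-loss w.r.t. the logit z
            w = [wi - lr * err * xi for wi, xi in zip(w, f)]
            b -= lr * err
    return w, b

# Invented history: (coiling temperature deviation, alloy indicator) -> defect?
history = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
w, b = train(history, n_features=2)
print(round(ml(w, b, [0.85, 0.85]), 2))  # high defect probability
print(round(ml(w, b, [0.15, 0.15]), 2))  # low defect probability
```

The learned weights play the role of the parameterization: once training has found them, every future coil is scored by the same cheap function evaluation.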
The experience and knowledge from all past cases are embedded in the resulting model, so that even dozens of terabytes of data can be compressed into a relatively small object.
More importantly, we can use this model for future, as-yet unknown cases. Due to the significant generalization potential of the applied techniques, these new cases may differ considerably from the historical examples. The model will internally find similar cases, place the new case somewhere nearby and calculate the probability.
Adaptive Modeling and Smart Agents
The industrial reality is a living entity. Every new slab, billet or coil produces a bundle of new information that is stored and, ideally, put to use in the age of the Industry 4.0 paradigm. From a human perspective, such new data forms an experience that influences the way we think and the decisions we make. We tend to avoid errors empirically, simply by confronting a current situation with our experience. This generic scheme is the basis for approaches to Machine Learning that focus on extracting experience from the environment through properly managed interaction with it. An agent has a well-defined set of possible decisions that affect the environment (see figure 2). The environment can be seen here as a digital image of the factory, known as the digital twin. In extreme cases, under special circumstances, it could also be the real factory.
At the beginning of its life cycle, an agent is like a newborn, capable of making only random decisions. But each time it acts, it receives feedback in the form of a Reward and a new State. In this way, it can be assessed to what extent the previous decisions have improved the Reward function. This is the general scheme of Reinforcement Learning. The main feature of this technique is the ability to learn a sequence of decisions, with the possibility of sacrificing some short-term goals for the final, long-term goal.
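The loop of State, decision and Reward can be made concrete with a small, hypothetical Python sketch using tabular Q-learning, one of the simplest Reinforcement Learning techniques. The environment below is an invented stand-in for the digital twin: its state is the accumulated schedule disruption (0 to 3), waiting costs the current disruption level, and creating a new schedule costs a flat penalty but resets the disruption. The agent starts with random behavior and learns to trade the short-term cost of rescheduling against the long-term goal of low average disruption.

```python
import random

# Toy Reinforcement Learning sketch (tabular Q-learning). The environment,
# costs and states are invented stand-ins for a scheduling scenario.

ACTIONS = ["wait", "reschedule"]

def step(state, action):
    """Environment transition: return (reward, new_state)."""
    if action == "reschedule":
        return -2.0, 0                      # flat cost, disruption resets
    return -float(state), min(state + 1, 3)  # growing disruption cost

def q_learning(episodes=500, steps=20, alpha=0.2, gamma=0.9, eps=0.2):
    random.seed(0)
    q = {s: {a: 0.0 for a in ACTIONS} for s in range(4)}
    for _ in range(episodes):
        state = 0
        for _ in range(steps):
            # Epsilon-greedy: mostly exploit, sometimes explore
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(q[state], key=q[state].get)
            reward, new_state = step(state, action)
            best_next = max(q[new_state].values())
            q[state][action] += alpha * (reward + gamma * best_next
                                         - q[state][action])
            state = new_state
    return q

q = q_learning()
policy = {s: max(q[s], key=q[s].get) for s in q}
print(policy)  # learned: wait at low disruption, reschedule at high
```

Note that the learned policy accepts the immediate penalty of rescheduling once the disruption is high, exactly the sacrifice of a short-term goal for a long-term one described above.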
From the perspective of a steel mill, one can imagine many use cases where this technique can be a great help, for example when deciding to reschedule a particular production line. After a critical mass of changes in the schedule has accumulated, at some point the best thing to do is to create a new schedule. Such changes often involve, for example, a change in customer priority, the unavailability of materials or equipment, or production defects that have occurred. An appropriately trained agent can handle these situations.
The dynamic nature of Reinforcement Learning also allows it to adapt to an evolving environment. This is achieved by slowly incorporating new experiences. In a sense, this adaptability makes the agent an autonomous unit: its behavior does not have to be adapted manually, because behavioral changes are an inherent feature of the entire framework. A natural next step is the release of several agents, each responsible for a specific area, especially products, orders and processes. They are equipped with appropriate layers for communication, experience storage and decision making. An exemplary application is the management of off-spec products in the hot rolling mill.
The multi-agent system can control the production process in such a way that certain deviations, e.g. from the target thickness, are minimized.
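As a deliberately simplified, hypothetical sketch of this idea, the following Python snippet shows a single process agent correcting a thickness deviation step by step with proportional feedback. The gain, deviation values and the one-agent setup are invented; in a real multi-agent system, product, order and process agents would negotiate such corrections through their communication layers.

```python
# Simplified sketch: one process agent iteratively reduces the thickness
# deviation of a hot-rolled strip via proportional feedback. Gains and
# deviations are invented; a real system coordinates many such agents.

class ProcessAgent:
    def __init__(self, gain=0.5):
        self.gain = gain  # how aggressively the agent counteracts deviation

    def act(self, deviation_mm):
        """Return a correction opposing the observed deviation."""
        return -self.gain * deviation_mm

def run_control(initial_deviation_mm, agent, steps=10):
    """Apply the agent's corrections repeatedly; return the deviation trace."""
    deviation = initial_deviation_mm
    history = [deviation]
    for _ in range(steps):
        deviation += agent.act(deviation)
        history.append(deviation)
    return history

trace = run_control(0.8, ProcessAgent(gain=0.5))
print(f"{trace[0]:.3f} mm -> {trace[-1]:.3f} mm")  # deviation shrinks
```

With each step the deviation is halved, so after a handful of corrections it is practically eliminated, which is the "minimized deviation" behavior the multi-agent system aims for across the whole line.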
Do you want to know more? Follow our Enjoy the Sound of Silence campaign, which is enriched with many exciting blog articles and exclusive webinars, so that you can finally enjoy the silence of a smooth shopfloor!
Enjoy the Sound of Silence Webinar Series:
What is your opinion on this topic?
Director Marketing PSI Metals GmbH
After taking over the marketing department of PSI Metals in 2015, Raffael Binder immediately positioned the company within the frame of Industry 4.0. So it is no wonder that in our blog he covers topics such as digitalization, KPIs and Artificial Intelligence (AI). Raffael’s interests range from science (fiction) and history to sports and all facets of communication.
+43 732 670 670-61