HOW FUTURE TECHNOLOGIES CAN REVOLUTIONISE INDUSTRIAL QUALITY


THE CHALLENGES OF PROCESSING QUALITY DATA

In the age of digitalisation, the data generated within the framework of a company’s quality approach is abundant and diverse. It is collected in the field in an increasingly automated manner via the IoT (Internet of Things, i.e. the networking of machines, software and technologies so that they can be exploited together), and it supports both the detection of problems and the decision-making process for their resolution. The amount of data produced therefore calls for specific technologies and analytical methods in order to generate high added value. The objective of this article is to demonstrate how, within a corporate quality approach, the application of new technologies can add value to the identification and resolution of non-conformities.


DATA MANAGEMENT WITHIN A QUALITY APPROACH

Data acquisition

Within a quality management system, the collection of quality data in the field (deviations between a desired situation and the actual situation) facilitates the detection of problems by identifying weak signals, trends or patterns.

This data comes from more or less structured sources:

  • Structured sources: data is collected directly from machines in the field, following an Internet of Things (IoT) logic, or through statistical process control (SPC).
  • Less structured sources: the most unstructured source is text, which requires understanding and is open to interpretation, making it difficult to harvest automatically. Such text can nevertheless be processed with Natural Language Processing (NLP), allowing machines to identify correlations in volumes of unstructured data that human intelligence could not process. Written sources in general, and even transcripts of oral sources (testimonies), are unstructured but contain information with high added value; a minimal sketch of this kind of processing follows the list.
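
As an illustration, here is a minimal sketch of how free-text non-conformity reports could be grouped by theme with NLP. The choice of scikit-learn (TfidfVectorizer, KMeans) and the sample reports are assumptions made for the example; the article does not prescribe any particular toolkit.

    # Illustrative sketch: grouping free-text non-conformity reports by theme.
    # Library choice (scikit-learn) and the sample reports are assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    reports = [
        "surface scratch found on housing after milling",
        "milling station left scratches on the part surface",
        "wrong label printed for customer batch",
        "label mismatch reported by the customer on delivery",
    ]

    # Turn unstructured text into a numerical TF-IDF matrix.
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(reports)

    # Cluster the reports so that recurring problem themes surface automatically.
    model = KMeans(n_clusters=2, n_init=10, random_state=0)
    labels = model.fit_predict(X)

    for report, label in zip(reports, labels):
        print(f"theme {label}: {report}")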

The challenge behind the collection of quality data lies in big data: combining heterogeneous information from different sources (production, marketing, economy, customers, suppliers, etc.) in order to take advantage of it. Big data refers to a set of data characterised by the 3Vs:

  • Volume: data is present in large quantities
  • Velocity: data arrives at high speed
  • Variety: the nature of the data is heterogeneous

To complement the theory, a further two Vs can be added:

  • Veracity: Data must be accurate and verified
  • Value: The approach behind big data creates value

Automating problem-solving

All the data collected allows the causes of the identified problems to be analysed. The processing methods can vary depending on the context. Here are some quality approaches and good practices that innovative technologies can improve:

  • The cause and effect diagram, or Ishikawa diagram, can be reinforced and partly automated by iterative algorithmic modelling to find the most logical causes.
  • Troubleshooting involves organising decision trees from multiple potentially feasible tasks in order to determine the best solutions to apply. The principle of Bayesian networks can then be used to determine the best tasks to apply, by associating a score with each of them, as the sketch after this list illustrates.
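
As an illustration, here is a minimal sketch of Bayesian scoring applied to troubleshooting. The causes, symptoms and probabilities are invented for the example; a real Bayesian network would be learnt from historical quality data rather than hard-coded.

    # Illustrative sketch: scoring root causes of a defect with Bayes' rule.
    # Priors and likelihoods are invented; in practice they would be
    # estimated from historical non-conformity records.

    priors = {"tool_wear": 0.5, "bad_material": 0.3, "operator_error": 0.2}

    # P(symptom | cause) for each candidate cause.
    likelihoods = {
        "tool_wear":      {"vibration": 0.8, "surface_defect": 0.7},
        "bad_material":   {"vibration": 0.2, "surface_defect": 0.6},
        "operator_error": {"vibration": 0.1, "surface_defect": 0.3},
    }

    observed = ["vibration", "surface_defect"]

    # Posterior score of each cause given the observed symptoms.
    scores = {}
    for cause, prior in priors.items():
        p = prior
        for symptom in observed:
            p *= likelihoods[cause][symptom]
        scores[cause] = p

    total = sum(scores.values())
    for cause, p in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{cause}: {p / total:.2f}")

The task addressing the highest-scoring cause would then be applied first.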

It is also possible to implement a Layered Process Audit after the problem has been solved. The principle of the tool is to sample the desired scope (the subjects at risk), conduct surveys and analyse them statistically to identify trends. The Layered Process Audit approach can be enhanced by machine learning.
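
By way of illustration, a very simple form of this enhancement is to fit a trend line to the sampled audit results and raise an alert when the pass rate drifts downwards; the weekly figures and the alert threshold below are assumptions made for the example.

    # Illustrative sketch: detecting a trend in Layered Process Audit results.
    import numpy as np

    weekly_pass_rate = np.array([0.97, 0.96, 0.95, 0.92, 0.90, 0.88])
    weeks = np.arange(len(weekly_pass_rate))

    # Fit a straight line: a persistent negative slope suggests degradation.
    slope, intercept = np.polyfit(weeks, weekly_pass_rate, deg=1)

    if slope < -0.005:  # illustrative alert threshold
        print(f"Downward trend detected ({slope:.3f}/week): re-audit the scope")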

Machine learning: machine learning, a field of artificial intelligence engineering, is especially useful for monitoring complex processes. Because the software learns automatically from data processed in the past, the algorithm evolves over time and, beyond a certain point, does not regress. Machine learning is based on two types of learning:

Supervised: the dataset is labelled under the supervision of a human/machine pair, which determines what is good and what is not and what goal to aim for, or which assists with image recognition, for example.

Reinforcement: reinforcement learning is more autonomous. Requiring far less human presence, it learns from experiments so as to optimise the result by iteration. This type of learning has been extended to competition-based learning, in which two algorithms oppose each other: one tries to solve an optimisation problem while the other tries to undo that optimisation. The result is a richer mutual learning.
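
To make the supervised case concrete, here is a minimal sketch in which a classifier learns to predict non-conformity from two process measurements. The measurements, labels and the choice of scikit-learn are assumptions made for the example.

    # Illustrative sketch of supervised learning: predict whether a part
    # will be non-conforming from two process measurements.
    from sklearn.linear_model import LogisticRegression

    # Each row: [spindle_temperature, feed_rate]; label 1 = non-conforming.
    X = [[60, 0.10], [62, 0.11], [75, 0.20], [78, 0.22], [61, 0.12], [80, 0.25]]
    y = [0, 0, 1, 1, 0, 1]

    model = LogisticRegression()
    model.fit(X, y)

    # Predict the outcome for a fresh measurement.
    print(model.predict([[76, 0.21]]))        # most likely class
    print(model.predict_proba([[76, 0.21]]))  # associated probabilities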

CONCRETE APPLICATION WITHIN A QUALITY APPROACH

To make quality data speak, data is obviously needed. The performance of advanced algorithms depends on the quantity and quality of the accessible data, so the first step in any intelligent data processing is to gather and prepare the data for analysis. Access to the data is essential, and the more structured it is at the outset, the more easily it can be integrated into the algorithm’s engine. The data must be conditioned so that it can be manipulated by the analytical algorithms, which are subject to very precise mathematical specifications; this is referred to as normalisation or the elimination of outliers.
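
As an illustration of this conditioning step, here is a minimal sketch of z-score normalisation and outlier elimination; the measurements and the 2-sigma cut-off are assumptions made for the example.

    # Illustrative sketch of data conditioning: normalisation and outliers.
    import numpy as np

    raw = np.array([10.1, 10.3, 9.9, 10.2, 25.0, 10.0])  # 25.0: sensor glitch

    # Normalise: zero mean, unit variance (z-scores).
    z = (raw - raw.mean()) / raw.std()

    # Eliminate outliers: keep points within 2 standard deviations
    # (an illustrative cut-off; 3 sigma is also common on larger samples).
    clean = raw[np.abs(z) < 2]
    print(clean)  # the glitch value is removed before analysis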

Once the dataset is ready to be submitted to the algorithm, the learning strategy must be decided. Indeed, the so-called intelligent algorithms will have to adjust a large number of parameters, or even determine by themselves the factors that will enable them to make relevant inferences. To do this, they must be shown in which direction the user wishes to direct the intelligence of the algorithm’s engine: what solution is expected. This learning part will monopolise a large part of the available data (learning data), but some must be kept to validate the result and quantify its value (test data). 
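
A minimal sketch of that split, using scikit-learn’s train_test_split (an assumption for the example; the sizes and data are also illustrative):

    # Illustrative sketch: keep some data aside to validate the result.
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X = [[60, 0.10], [62, 0.11], [75, 0.20], [78, 0.22],
         [61, 0.12], [80, 0.25], [63, 0.13], [77, 0.21]]
    y = [0, 0, 1, 1, 0, 1, 0, 1]

    # 75% learning data, 25% held out as test data.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)
    print(f"validation accuracy: {model.score(X_test, y_test):.2f}")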

Then comes the moment when new data must be submitted to the trained algorithm and the relevance of its results evaluated. A monitoring phase in run mode allows the finer adjustments to be completed, making the results as accurate as possible and gaining the few percent of reliability that allows sufficient confidence in the processing.
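
A minimal sketch of such run-mode monitoring: each new batch of predictions is compared with the outcomes later observed in the field, and an alert is raised when accuracy degrades. The threshold and the data are assumptions made for the example.

    # Illustrative sketch: monitor a deployed model's accuracy batch by batch.

    def accuracy(predictions, observed):
        hits = sum(p == o for p, o in zip(predictions, observed))
        return hits / len(observed)

    THRESHOLD = 0.90  # illustrative confidence floor

    # Each batch: (model predictions, outcomes observed in the field).
    batches = [
        ([0, 0, 1, 1, 0], [0, 0, 1, 1, 0]),  # model still accurate
        ([0, 1, 1, 0, 0], [0, 1, 0, 1, 0]),  # drift: accuracy degrades
    ]

    for i, (predictions, observed) in enumerate(batches):
        acc = accuracy(predictions, observed)
        status = "ok" if acc >= THRESHOLD else "ALERT: adjust or retrain"
        print(f"batch {i}: accuracy {acc:.2f} -> {status}")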

But the strength of advanced algorithms is also their ability to keep improving once deployed. The strengthening of their parameters and their responsiveness to new trends in the data are qualities that can emerge from algorithms properly integrated into data processing.

THE BENEFITS OF USING FUTURE TECHNOLOGIES IN A QUALITY APPROACH

The use of new technologies, in the era of Industry 4.0 and the rise of digitalisation, makes it possible to strengthen the control and exploitation of quality data. This control guarantees quality control over products and processes via an optimised information system, thus making it easier to match customer expectations with the company’s overall organisation.

Many technologies can nowadays be used to support a quality management system. For more information on the subject, we invite you to contact us: our experts in data exploitation and quality will be able to accompany you in your project.