How?
In this project we carried out data analysis and prediction of the production line’s behaviour. In the first phase we worked with historical data to draw up the theoretical model; in the second phase we used real data to make predictions at the pace the manufacturing line requires.
First of all, we analysed a set of 60,000 records corresponding to 70 different variables (weights, pressures, speeds, levels, heights, etc.) collected over the course of one week. With this historical data, we determined which variables had the greatest influence on the quality of the end product in the manufacturing phase under study. This required applying a set of Big Data technologies to order, classify, clean and qualify the data. Next, we calculated the transfer function, which describes the process’s behaviour over time.
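As an illustration only, the sketch below shows one common way to rank variable importance with a tree-based regression model. The column names, the synthetic data standing in for the week of historical records, and the choice of model are assumptions, not the project’s actual tooling.

```python
# Illustrative sketch (not the project's actual pipeline): rank which process
# variables most influence end-product quality using a tree-based model.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in for one week of historical records: 60,000 rows x 70 process variables
# (weights, pressures, speeds, levels, heights, ...). Names are hypothetical.
n_rows, n_vars = 60_000, 70
X = pd.DataFrame(
    rng.normal(size=(n_rows, n_vars)),
    columns=[f"var_{i:02d}" for i in range(n_vars)],
)
# Synthetic quality measure driven mostly by a few variables, plus noise.
y = 0.6 * X["var_03"] - 0.3 * X["var_17"] + 0.1 * X["var_42"] + rng.normal(scale=0.2, size=n_rows)

# After the data has been ordered, cleaned and qualified, fit the model and
# rank the variables by their importance for the quality measure.
model = RandomForestRegressor(n_estimators=50, n_jobs=-1, random_state=0)
model.fit(X, y)

importance = pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importance.head(10))  # the variables most decisive for end-product quality
```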
Once we obtained this valuable new knowledge, we knew how things had been done, what the result was, which factors were involved and how much they had affected that result. We could then start to define the improvement criteria.
As a final step, we created a scorecard that provides all this information in real time. Then, after intensive testing and selection of the most effective combination of algorithms, applying Machine Learning technology, the graphs describing the model’s predictions were added to the scorecard, indicating how much a given variable, identified as critical, should be adjusted to obtain a result very close to 100% reliability relative to the previously defined target.
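Continuing the previous sketch, the fragment below illustrates one simple way such a recommendation could be produced: sweep candidate settings of a critical variable for the latest record and pick the setting whose predicted quality is closest to the target. The critical variable, the target value and the current record are assumptions; `model` and `X` come from the earlier sketch.

```python
# Illustrative sketch only: estimate how much a critical variable should be
# adjusted so the predicted quality approaches the target. Reuses "model" and
# "X" from the previous sketch; variable name and target are hypothetical.
import numpy as np
import pandas as pd

critical_var = "var_03"      # assumed critical variable from the importance ranking
target_quality = 1.0         # assumed quality target defined beforehand

current = X.iloc[[0]].copy()  # stand-in for the latest record arriving from the line

# Sweep candidate settings of the critical variable and predict quality for each.
base = current[critical_var].iloc[0]
candidates = np.linspace(base - 2.0, base + 2.0, 81)
trials = pd.concat([current] * len(candidates), ignore_index=True)
trials[critical_var] = candidates
predicted = model.predict(trials)

# Recommend the setting whose predicted quality is closest to the target.
best = candidates[np.argmin(np.abs(predicted - target_quality))]
adjustment = best - base
print(f"Adjust {critical_var} by {adjustment:+.3f} to approach the target quality.")
```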