In a world where corporate executives have become accustomed to business analytics on demand, manufacturing remains a blind spot. The unique nature of manufacturing data makes closing this gap between data science and manufacturing a formidable challenge.
Senior corporate leaders have a data-rich picture of most areas of their business – finance, sales, supply chain, HR, etc. In all these areas, mature software tools, developed over decades, let executives see the big picture, zoom in on the smallest details, and see every level in between. Sophisticated analytics tools let executives flag problems and spot opportunities for improvement.
But manufacturing data is rarely part of the picture. According to a recent survey by LNS Research sponsored by Sight Machine, only 14% of respondents have a corporate analytics program that uses manufacturing data.
A select group of manufacturing leaders has begun taking on these challenges. From our partners, we see an increasing urgency to glean corporate-level insights from manufacturing data. Some companies using BI systems such as Teradata, IBM Analytics, or Tableau are trying to build data pipelines from individual machines all the way up to the corporate level.
At the corporate or line-of-business level, leaders may be looking at metrics like cross-plant KPIs to identify best practices that could be shared between plants, or comparing the performance of contract manufacturers. Plant managers want an accurate and immediate assessment of information like the day’s yield and scrap rates, and may look at metrics like overall equipment effectiveness (OEE). Manufacturing engineers want to know the causes of machine downtime so they can improve machine performance.
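OEE, the plant-level metric mentioned above, is conventionally defined as the product of three ratios: availability, performance, and quality. A minimal sketch in Python (the shift figures below are made-up illustrative numbers, not data from the survey or any plant):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness: the product of three ratios,
    each expressed as a fraction between 0 and 1."""
    return availability * performance * quality

# Availability = run time / planned production time
availability = 420 / 480                  # e.g. 60 min of downtime in an 8-hour shift
# Performance = (ideal cycle time x total count) / run time
performance = (1.0 * 23_000) / (420 * 60)  # 1-second ideal cycle, 23,000 parts
# Quality = good count / total count
quality = 22_310 / 23_000

print(round(oee(availability, performance, quality), 3))  # prints 0.775
```

Because each factor is a ratio of the same kinds of quantities every plant already tracks, OEE is one of the few metrics that can be compared across lines and plants.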
Getting accurate answers from the factory floor is generally a daunting process. The characteristics of manufacturing data – volume, variety, and velocity – make it a real challenge to consolidate, analyze and take advantage of, even on a machine or line level, not to mention on a company-wide level.
One machine may output a torrent of data covering dozens of variables like temperature, press pressure, and conveyor speed. That data might need to be associated with specific product batches, and with defect rates that might not become clear until the next stage in the production process. If the data is retained at all, it is often in a silo with little or no connection to other corporate data and no easy way to interpret it.
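The association problem above can be sketched in a few lines of Python. The record types and field names here are illustrative assumptions, not a real historian or MES schema: readings are matched to a batch by time window, and the defect rate reported later by the next production stage is joined in by batch ID.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SensorReading:
    timestamp: datetime
    temperature: float
    press_pressure: float
    conveyor_speed: float

@dataclass
class Batch:
    batch_id: str
    start: datetime
    end: datetime

def readings_for_batch(batch: Batch, readings: list) -> list:
    """Associate raw readings with a batch by time window -- the simplest
    way to give context to an otherwise anonymous sensor stream."""
    return [r for r in readings if batch.start <= r.timestamp < batch.end]

def batch_view(batch: Batch, readings: list, defect_rates: dict) -> dict:
    """Join a batch's sensor context with the defect rate reported
    later by the next production stage, keyed by batch_id."""
    rs = readings_for_batch(batch, readings)
    return {
        "batch_id": batch.batch_id,
        "avg_temperature": sum(r.temperature for r in rs) / len(rs),
        "defect_rate": defect_rates.get(batch.batch_id),  # may not exist yet
    }
```

Even this toy version shows why the data is hard to use: the defect rate lives in a different system, arrives later, and can only be connected to process conditions if the batch window is known.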
Turning that raw data into actionable insight requires multiple levels of processing:
- Collecting the data from machines, production lines and factories
- Conditioning the data
- Combining disparate data streams into manufacturing models that enable meaningful analysis
- Conducting the analysis
- Interpreting the results to enable understanding and action
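The five stages above can be sketched as a chain of functions. This is a minimal illustration of the pipeline's shape, not any vendor's implementation; the stage logic, record fields, and the 5% scrap threshold are all placeholder assumptions.

```python
def collect(sources):
    """Stage 1: pull raw records from machines, lines, and factories."""
    return [record for source in sources for record in source]

def condition(records):
    """Stage 2: clean the data -- here, drop records with missing values."""
    return [r for r in records if all(v is not None for v in r.values())]

def combine(records):
    """Stage 3: group conditioned records into a model, keyed by line."""
    model = {}
    for r in records:
        model.setdefault(r["line"], []).append(r)
    return model

def analyze(model):
    """Stage 4: compute a per-line metric, here the mean scrap rate."""
    return {line: sum(r["scrap_rate"] for r in rs) / len(rs)
            for line, rs in model.items()}

def interpret(results, threshold=0.05):
    """Stage 5: turn metrics into something actionable."""
    return {line: ("investigate" if rate > threshold else "ok")
            for line, rate in results.items()}

raw = [[{"line": "A", "scrap_rate": 0.02},
        {"line": "A", "scrap_rate": None},   # dropped during conditioning
        {"line": "B", "scrap_rate": 0.08}]]
print(interpret(analyze(combine(condition(collect(raw))))))
# prints {'A': 'ok', 'B': 'investigate'}
```

The point of the sketch is the next paragraph's: each stage is easy to run once by hand, but keeping the whole chain running as new data arrives is what makes the problem hard.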
It also must be repeatable. A large corporation may be able to afford to assign a data scientist to evaluate a set of historical data and assess the efficiency of its manufacturing. But each time the data changes, the process must be repeated and adapted, and few companies can afford to keep an entire team of data analysts interpreting the firehose of incoming manufacturing data.
The companies that figure out a way to systematically collect, condition, model and analyze their data in a scalable, repeatable manner will secure a strong advantage in their industries. The rest will struggle to compete wearing blinders.