If you choose your path before knowing your destination, don’t be surprised to run out of road before you arrive.
That’s why manufacturers trying to gain insight from their data often find themselves at a dead end after following one of three common technology approaches. Each starts by selecting a technical strategy before the objective is clearly defined.
The first approach that often leads to a dead end is to try to cobble together a solution out of legacy manufacturing software. Traditional software is often vendor- or machine-specific, and thus offers little insight into broader manufacturing challenges, including interactions and tradeoffs among multiple machines and production steps.
Legacy software also typically lacks modern APIs, limiting the ability to coordinate with other software systems or to share and merge data across machines, lines and plants for enterprise-wide analytics.
Attempting to customize a solution using this software often means running headlong into problems the company doesn’t yet realize exist, problems that can take years to solve. Chief among these is context: the challenge of turning disparate streams of machine data into a coherent model of the manufacturing process. This type of problem is well suited to modern, web-scale data technology but very different from the traditional client-side approach; the techniques developed for handling big data are essential to the data-modeling work that derives context.
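To make the contextualization challenge concrete, here is a minimal sketch, using entirely hypothetical machine streams and field names, of what it means to merge vendor-specific data into one record per part. Real deployments involve many more machines, schemas and edge cases; the point is only that the hard work is in the modeling, not the collection.

```python
from datetime import datetime

# Two hypothetical machine streams with inconsistent schemas:
# a milling machine logging by part serial, and a tester logging separately.
mill_events = [
    {"serial": "A100", "end": "2024-05-01T10:02:00+00:00", "spindle_load_pct": 61},
    {"serial": "A101", "end": "2024-05-01T10:07:00+00:00", "spindle_load_pct": 74},
]
test_events = [
    {"sn": "A100", "ts": "2024-05-01T10:15:00+00:00", "pass": True},
    {"sn": "A101", "ts": "2024-05-01T10:20:00+00:00", "pass": False},
]

def contextualize(mill, test):
    """Merge per-machine streams into one record per part (the 'context')."""
    parts = {}
    for e in mill:
        parts[e["serial"]] = {
            "serial": e["serial"],
            "milled_at": datetime.fromisoformat(e["end"]),
            "spindle_load_pct": e["spindle_load_pct"],
        }
    for e in test:
        rec = parts.setdefault(e["sn"], {"serial": e["sn"]})
        rec["tested_at"] = datetime.fromisoformat(e["ts"])
        rec["passed"] = e["pass"]
    return parts

model = contextualize(mill_events, test_events)
# A test failure on A101 is now linked to its upstream milling conditions,
# which no single machine's log could show on its own.
```

With the streams joined, a question like "do failed parts correlate with high spindle load?" becomes a simple query over `model` rather than a manual reconciliation exercise.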
The second approach is the tailored data science project, in which a company throws a team of data scientists at a problem. The team must identify the problem of interest, create a data model, populate it via ETL (extract/transform/load), and attempt to find a solution. This approach can be the right one for a specific problem that promises very high returns and warrants the up-front effort, but it is generally too expensive to be profitable for anything but the highest-value processes.
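The ETL step above can be sketched in miniature. The column names and unit conversions here are hypothetical, chosen only to show why each new project starts from scratch: every vendor schema needs its own transform before the data is usable.

```python
# Hypothetical raw cycle logs from two vendors with incompatible schemas.
raw_rows = [
    {"machine": "press-1", "cycle_time_ms": 5200, "temp_f": 410},
    {"machine": "press-2", "cycle_s": 5.4, "temp_c": 205.0},
]

def transform(row):
    """Normalize one vendor-specific row to a shared schema (seconds, Celsius)."""
    return {
        "machine": row["machine"],
        "cycle_s": row["cycle_s"] if "cycle_s" in row
                   else row["cycle_time_ms"] / 1000.0,
        "temp_c": row["temp_c"] if "temp_c" in row
                  else round((row["temp_f"] - 32) * 5 / 9, 1),
    }

# "Load": in practice this goes to a warehouse; kept in memory here.
warehouse = [transform(r) for r in raw_rows]
```

Each transform is cheap to write once, but because nothing is shared between projects, the next team rewrites it for the next set of machines, which is exactly why the approach fails to scale.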
Data science teams can provide great insights given the right tools. The tailored data science project handicaps these teams by forcing them to build every project from scratch. It solves only narrowly defined problems and requires an end-to-end project for each new one, which isn’t scalable. Also, in distributed production bases, many teams lack access to the raw data, and when they do have it, the data is highly inconsistent from one project to the next. Finally, without a platform on which to deploy their solution, initiating change based on the findings can be challenging. The right technology can provide considerable operational leverage to a data science team, and its absence can severely limit the team’s impact.
The third approach that often leads to a dead end is the IoT platform. The focus of these platforms is to acquire, transfer and store data, and provide a blank slate for development and processing of that data. They also typically offer a set of proprietary APIs for accessing that data using other software.
IoT platforms can serve an important purpose: they provide a framework for collecting and processing data. But collecting data does little good in and of itself, leaving most of the work undone.
These platforms are often promoted as a way to achieve analytical insights and optimization. In reality, they presuppose all the hard work of data contextualization and analytics, which is where the real challenges lie.