2016 will be the year in which companies begin to apply the principles of virtualization to make good on their investments in Big Data, IoT, the cloud and hyperscale computing, and to perform advanced analytics on them. In particular, it will be the year in which numerous industries start to get a handle on managing and retaining access to unstructured data, which resides outside the realm of ERP, asset management and non-conformance systems. Ashish Nadkarni of IDC predicts that this kind of data will “account for 79.2% of capacity shipped and 57.3% of revenue” by 2017.

In 2015, the stage was set for major advances in the ability to perform predictive analysis on industrial equipment like airplanes and trains. Many of the major technical tools that allow collection and processing of large datasets are already in place. But our ability to draw meaningful conclusions from massive amounts of data is directly correlated with how easy it is to access and organize that data. Design engineers, particularly those who deal with industrial products and equipment, know that major pieces of the puzzle are still missing. For example, while petabytes of telemetry data streaming in real time can, in theory, inform predictive maintenance, in practice such data is of limited use unless it is compared with simulation data or design files. Let’s not forget the treasure trove of notes that engineering teams create around a design. If those notes aren’t analyzed in conjunction with telemetry, the design team lacks essential context in the decision-making process.
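The kind of comparison described above, telemetry readings checked against design-time limits, can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the sensor names, fields and limits are invented, and real systems would pull the limits from simulation results or design files rather than a literal dictionary.

```python
# Hypothetical sketch: a telemetry anomaly is only actionable when joined
# with design context. All names and thresholds below are illustrative.

telemetry = [  # streaming sensor readings (sensor_id, temperature_C)
    {"sensor_id": "turbine-7", "temperature_C": 612.0},
    {"sensor_id": "turbine-9", "temperature_C": 498.0},
]

# Design-time limits, e.g. extracted from simulation data or design files
design_limits = {
    "turbine-7": {"max_temperature_C": 600.0},
    "turbine-9": {"max_temperature_C": 600.0},
}

def flag_exceedances(readings, limits):
    """Return sensor IDs whose readings exceed their design-time limit."""
    flagged = []
    for r in readings:
        limit = limits.get(r["sensor_id"])
        if limit and r["temperature_C"] > limit["max_temperature_C"]:
            flagged.append(r["sensor_id"])
    return flagged

print(flag_exceedances(telemetry, design_limits))  # ['turbine-7']
```

Without the `design_limits` side of the join, a 612 °C reading is just a number; the design context is what turns it into a maintenance signal.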

All these files exist in unstructured datasets, meaning they don’t fit into row-and-column databases. They might be tucked away in “dark” places such as laptops, desktops and other siloed computers when, instead, this data should be easily accessible by an asset management module. If these data files are large, at petabyte scale for example, there’s an extra layer of difficulty when you move to aggregate them for analysis.
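A first step toward lighting up those “dark” places is simply cataloging what is there. The sketch below is a minimal, assumed illustration of that idea: it walks a directory tree standing in for one silo and builds an index of files by type. A real catalog would span many machines and record far richer metadata.

```python
# Illustrative sketch: cataloging unstructured files scattered across "dark"
# silos so they can later be found and aggregated for analysis.
# Paths and metadata fields are assumptions for illustration only.
import os
import tempfile

def catalog_files(roots):
    """Walk each root directory and index every file by its extension."""
    index = {}
    for root in roots:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                ext = os.path.splitext(name)[1].lower() or "(none)"
                index.setdefault(ext, []).append(
                    {"path": path, "bytes": os.path.getsize(path)}
                )
    return index

# Demo against a throwaway directory standing in for one siloed machine
silo = tempfile.mkdtemp()
with open(os.path.join(silo, "notes.txt"), "w") as f:
    f.write("design review notes")
with open(os.path.join(silo, "run01.csv"), "w") as f:
    f.write("t,temp\n0,598.2\n")

catalog = catalog_files([silo])
print(sorted(catalog))  # ['.csv', '.txt']
```

Even this toy index hints at the scaling problem the paragraph raises: at petabyte scale the catalog itself becomes a large dataset, which is why moving the bytes to aggregate them is so costly.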

In 2016, industries will apply virtualization and abstraction technologies to easily store and re-access many different types of data in a common, stable structure. If done correctly, all of the data that informs the original design, as well as a wealth of information gleaned from IoT advances, will be preserved, even through numerous disruptive tech refresh cycles.
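One way to picture the abstraction idea is a mapping layer between stable logical names and current physical locations: clients always address data by the logical path, and a tech refresh only updates the mapping underneath. The class below is a deliberately simplified, hypothetical sketch of that concept, not any vendor’s actual implementation; all paths and storage names are invented.

```python
# Hypothetical sketch of data virtualization: a stable logical namespace
# whose entries point at whatever physical storage currently holds the data.
class VirtualNamespace:
    def __init__(self):
        self._map = {}  # logical path -> physical location

    def register(self, logical, physical):
        self._map[logical] = physical

    def migrate(self, logical, new_physical):
        """Simulate a tech refresh: the data moves, the logical path doesn't."""
        if logical not in self._map:
            raise KeyError(logical)
        self._map[logical] = new_physical

    def resolve(self, logical):
        return self._map[logical]

ns = VirtualNamespace()
ns.register("/designs/turbine-7/blade.cad", "nas-2014:/vol3/blade.cad")
# Hardware is replaced; only the mapping changes
ns.migrate("/designs/turbine-7/blade.cad", "object-store-2016:/bucket/blade.cad")
print(ns.resolve("/designs/turbine-7/blade.cad"))
# object-store-2016:/bucket/blade.cad
```

Because consumers only ever see `/designs/turbine-7/blade.cad`, the original design data and its IoT-era context stay reachable no matter how many storage generations come and go underneath.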

This article is written by Manuel Terranova, CEO at Peaxy.