Think Bigger with Graph - the Killer App Is Integration

Posted by Sam Chance on Jan 15, 2019 2:45:48 PM

2018 was the “Year of the Graph”, heralding advancements in graph products aimed primarily at graph analytics use cases like fraud detection and community detection. This momentum continues in 2019, with thought leaders lauding the promise and benefits of graph technology and uniquely graph-native applications. At Cambridge Semantics, we challenge the world to think even bigger with graph. We believe, and have proven with our customers over the last decade, that the destiny of the graph data model lies in mainstream data management in support of line-of-business analytics. The killer app is integration...read on to find out more.

Arguably, adopters are easing into graph technology and beginning to realize they need more capacity to query any substantial amount of data in the graph. From an architecture perspective, many organizations feel they already possess enough ETL processes and analytics tools. But adopters are discovering they lack the ability to develop meaningful enterprise-scope applications using graph data models. In other words, organizations are slowly realizing they need to integrate information at scale, and awareness of the graph is driving interest in it as a means to create context.

Many thought the great promise was graph analytics, but that is not the true gold nugget. The goal of information technology is information on demand. Yet, collectively, we have only managed to link documents, à la the World Wide Web. Information on demand involves question answering, but we still derive only fragmented subsets of answers to our questions, and we rely on a high human touch to synthesize multiple query results into conclusions. This manually intensive process is inherently error prone and results in conjecture, missed opportunities, and other deleterious outcomes. Why is it so hard to contextualize data from multiple sources and create more complete answers?

Large, broad-scale data integration is the killer app for the graph data model. There are several useful graph analytic techniques and algorithms that answer certain well-suited questions, but these are not the killer applications for graph technology. Pivot your focus and imagine end users interacting with enterprise information as a service. On demand, users ask complex questions of the enterprise, and the enterprise responds with all it knows in a business-oriented and more complete manner, with no human “stitching” of information. Imagine automated processes interacting with the enterprise to execute sophisticated analytics and workflows. We might call this artificial intelligence. It is through this completeness, richness, and immediacy of information that graph will change the world.
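To make “question answering” concrete, here is a minimal sketch in Python using the open-source rdflib library. The vocabulary, URIs, and data are all hypothetical; the point is that a single query spans facts that would traditionally live in three separate systems, with no human stitching of results.

```python
from rdflib import Graph

g = Graph()
g.parse(format="turtle", data="""
    @prefix ex: <http://example.org/> .

    # Hypothetical facts that might originate in CRM, order,
    # and support systems:
    ex:acme    a ex:Customer ; ex:name "Acme Corp" .
    ex:order42 ex:placedBy ex:acme ; ex:total 1200 .
    ex:ticket7 ex:openedBy ex:acme ; ex:status "open" .
""")

# One question, one complete answer: which customers have an order
# over 1,000 and an open support ticket?
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?name ?total WHERE {
        ?c a ex:Customer ; ex:name ?name .
        ?o ex:placedBy ?c ; ex:total ?total .
        ?t ex:openedBy ?c ; ex:status "open" .
        FILTER (?total > 1000)
    }
""")
for row in results:
    print(row.name, row.total)  # -> Acme Corp 1200
```

At enterprise scale this would of course run against a scalable graph store rather than an in-memory graph, but the interaction pattern is the same: one question, one connected answer.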

Graph technology provides a superior ability to process disparate and complex data while incorporating semantics to add richness and context. Until recently, graph technology struggled to cope with data volumes, which constrained its broad application to more narrowly scoped analytics. The advent of Massively Parallel Processing (MPP) architectures overcame the volume limitation; MPP implementations, coupled with W3C standards for representing, organizing, and describing data in semantic graphs, ushered graph technology into mainstream data management and the big data arena. So why isn’t graph data the norm for large enterprises?
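As a hedged illustration of the W3C stack referenced above, the sketch below uses rdflib to show RDF representing facts and RDFS describing them, all in one graph. Every name and URI here is invented for the example.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# Every fact, whatever its source, reduces to the same shape:
# a (subject, predicate, object) triple.
g.add((EX.acme, RDF.type, EX.Customer))
g.add((EX.acme, EX.name, Literal("Acme Corp")))

# The schema lives in the same graph as the data, expressed with the
# same kind of triples; that is how context travels with the data.
g.add((EX.Customer, RDF.type, RDFS.Class))
g.add((EX.Customer, RDFS.comment, Literal("A party that buys from us")))

print(g.serialize(format="turtle"))
```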

Enter culture.

Yes, every organization has one. System integrators and technology providers make a TON of money stitching data to satisfy users’ requirements. One could argue that data lakes simply collocated disparate data sources and provided a central location for hordes of developers to access and integrate data based on ad hoc requirements. If I were a system integrator, would I want all the enterprise data to be integrated and harmonized? Surely not!

So, we can argue quite rationally that fear is the obstacle to the graph killer app. ETL continues to reign supreme despite its high-touch processes and abysmal ROI.

System integrators fear that a LOT of people will become redundant. But, in fact, the “ETL force” will shift focus to developing advanced analytics and building the graph for their customers. This work includes many ETL-like tasks, and forward-looking integrators would do well to learn the trade now.

Think of information management broadly in four stages: collection, processing, analysis, and reporting. Succinctly, we collect data from myriad sources and sensors and store it in almost as many syntaxes, structures, and semantics. Then we access the stored data and manipulate (i.e., process) it for consumption by humans and analytics. Humans and automated clients analyze the data and output conclusions. The conclusions are disseminated (i.e., reported) to interested parties.

I have observed that most people believe the “information backlog” sits in the analysis stage, but I argue it sits in the processing stage. The processing stage is precisely the sweet spot, the killer application of graph technology. When we integrate, harmonize, and normalize collected data as an enterprise asset, we lay the foundation to shift from “transactional” interaction patterns, wherein developers stitch together data and users synthesize fragmented query results, to “question answering” interaction patterns. That shift is the inflection point for information on demand.
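To sketch what the processing stage’s harmonization step can look like in practice, here is a hedged example using rdflib and a SPARQL CONSTRUCT query to map one source’s ad hoc field names onto a shared enterprise vocabulary. All URIs and field names are invented for illustration.

```python
from rdflib import Graph

source = Graph()
source.parse(format="turtle", data="""
    @prefix src: <http://example.org/crm/> .
    src:row1 src:cust_nm "Acme Corp" ; src:cust_id "A-17" .
""")

# Map the source's ad hoc fields onto the enterprise ontology once,
# centrally, instead of re-stitching them in every downstream app.
harmonized = source.query("""
    PREFIX src: <http://example.org/crm/>
    PREFIX ent: <http://example.org/ontology/>
    CONSTRUCT {
        ?r a ent:Customer ; ent:name ?n ; ent:id ?i .
    } WHERE {
        ?r src:cust_nm ?n ; src:cust_id ?i .
    }
""").graph

print(harmonized.serialize(format="turtle"))
```

Do this once per source against a shared model, and downstream questions run over one harmonized graph instead of a pile of per-application joins.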

Enter Anzo 4.0, the first end-to-end platform that delivers a true Enterprise Data Fabric: a Semantic Layer at enterprise scope and scale. As the only big data management and exploratory analytics platform based on semantic graphs available today, implementing MPP architecture for processing and analysis, Anzo provides a dramatically superior approach to enterprise data integration, management, exploration, and analytics. This single platform empowers IT departments and end users alike to flexibly integrate, manage, explore, and analyze all of their enterprise data assets with speed-of-thought performance, at unprecedented big data scale, and at a fraction of the implementation time and cost of any other technology or approach.

To learn more about Enterprise Data Fabrics, download our whitepaper "Robust, Agile, and Comprehensive: the Story of the Data Fabric".


Tags: Graph, Semantic Layer, Data Fabric, Knowledge Graph
