At the Gartner Data & Analytics Summit 2017, Cambridge Semantics' very own Barry Zane, Vice President of Engineering, and Ben Szekely, Vice President of Solutions, discussed how the Anzo Smart Data Lake® (ASDL) solution empowers business users with on-demand analytics of their rich data during their session entitled “Accelerating Insight with High Octane, Graph Fueled Data.”
At Gartner's Data & Analytics Summit 2017, Alok Prasad, President of Cambridge Semantics, was joined by Peter Horowitz of PricewaterhouseCoopers for their session entitled “Accelerating Insight: Smart Data Lake Customer Success Stories”. During this presentation, they discussed how Cambridge Semantics’ in-memory, massively parallel, semantic graph-based platform, Anzo Smart Data Lake®, delivers a competitive edge to data-driven organizations while maintaining trust through data security and governance.
In this webinar, Steve Hamby, Managing Director, Government, discusses how semantic graph technology can help Federal Government CIOs and agency staff who are researching enterprise data management and mining tools understand why Smart Data Lakes can be a superior mechanism for addressing their top data priorities. Here are the slides from his presentation.
In September 2016 Cambridge Semantics attended Strata+Hadoop World 2016 in New York, NY. While we were there, Marty Loughlin, our VP of Financial Services, spoke to a gathering of attendees about who we are and what our platform does. Here is his presentation.
On my flight back to Boston from Grapevine, TX, where we spent three exhausting and exhilarating days at the Gartner BI & Analytics Summit, I am reflecting on the great interest shown in Cambridge Semantics' Smart Data Lake at this event.
With no inherent means of adhering to governance and security protocols, data lakes are akin to the Wild West: devoid of order and consistency. Each user manipulates his or her own data, putting the reuse of that data by others at risk.
Data lakes are no longer anomalies. Consolidating all of an organization’s data—unstructured, semi-structured, and structured—into a single repository for integration, access, and analytics purposes is rapidly emerging as the preferred way to manage big data initiatives.
Recent developments in big data technologies have significantly impacted the prowess of contemporary analytics; the most profound of these involves the deployment of semantically enhanced data lakes. These centralized repositories have revolutionized the scope and focus of analytics by enabling organizations to analyze all data assets with a specificity and speed that wasn’t previously available. The value derived from such an approach improves the analytics process at both the granular and macro levels, expediting everything from conventional data preparation to informed action.
Many data lake projects achieve their IT objective of cheap storage of all enterprise data in raw form, but fail in their business objective of delivering value from that data. Why? Because making the data accessible and usable for business users is hard.
Legacy applications that have exceeded their useful life can be expensive to maintain, often requiring specialized skills and outdated versions of software and hardware to support. But they can also contain very valuable data that needs to be retained for business or compliance purposes.