Last month, the Cambridge Semantics team set off to New York City for Strata + Hadoop World 2016, or what Wired likes to call, “the lollapalooza of big data conferences.”
Many data lake projects achieve their IT objective of cheap storage for all enterprise data in raw form, but fail in their business objective: delivering value from that data. Why? Because making the data accessible and usable for business users is hard.
Many Hadoop users, seeking higher performance and a better analytics engine, are turning to Apache Spark for data transformation (ELT) on HDFS. While Spark offers many advantages, you still need Scala or Java programmers to write your jobs.