Last updated: Nov 07, 2016
NOT ON THE CURRENT EDITION
This blip is not on the current edition of the Radar. If it was on one of the last few editions, it is likely that it is still relevant. If the blip is older, it might no longer be relevant and our assessment might be different today. Unfortunately, we simply don't have the bandwidth to continuously review blips from previous editions of the Radar.
Nov 2016
Trial

A Data Lake is an immutable data store of largely unprocessed "raw" data, acting as a source for data analytics. While the technique can clearly be misused, we have used it successfully at clients, hence its move to Trial. We continue to recommend other approaches for collaboration between operational systems, limiting the use of the data lake to reporting, analytics and feeding data into data marts.

Apr 2016
Trial
Nov 2015
Assess

A Data Lake is an immutable data store of largely unprocessed "raw" data, acting as a source for data analytics. Whereas the more familiar Data Warehouse filters and processes the data before storing it, the lake just captures the raw data, leaving it to the users of that data to carry out the particular analysis that they need. Examples include HDFS or HBase within a Hadoop, Spark or Storm processing framework. Usually only a small group of data scientists work on the raw data, developing streams of processed data into lakeshore data marts for most users to query. A Data Lake should only be used for analytics and reporting. For collaboration between operational systems we prefer using services designed for that purpose.
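
As a rough illustration of the pattern described above, the following PySpark sketch reads raw events straight from the lake and publishes a small, aggregated "lakeshore" mart for the wider audience to query. The paths, field names and job name are hypothetical, chosen only to make the example self-contained.

```python
# A minimal sketch of a "lakeshore mart" job, assuming raw clickstream events
# have been landed in the lake as JSON under a hypothetical /lake/raw/... path.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lakeshore-daily-sessions").getOrCreate()

# Read the raw, unprocessed events exactly as they were captured.
raw_events = spark.read.json("hdfs:///lake/raw/clickstream/2016/11/")

# Derive a small, query-friendly mart: daily session counts per country.
daily_sessions = (
    raw_events
    .withColumn("day", F.to_date("timestamp"))
    .groupBy("day", "country")
    .agg(F.countDistinct("session_id").alias("sessions"))
)

# Publish the result to a lakeshore data mart for most users to query;
# the raw data under /lake/raw remains immutable and untouched.
daily_sessions.write.mode("overwrite").parquet("hdfs:///lake/marts/daily_sessions/")
```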

May 2015
Assess

An Enterprise Data Lake is an immutable data store of largely unprocessed "raw" data, acting as a source for other processing streams but also made directly available to a significant number of internal, technical consumers via an efficient processing engine. Examples include HDFS or HBase within a Hadoop, Spark or Storm processing framework. Contrast this with a typical system that collects raw data into a highly restricted space and makes it available to consumers only as the end result of a tightly controlled ETL process.

Embracing the concept of the data lake is about eliminating bottlenecks caused by a lack of ETL developer staffing or excessive up-front data model design. It is about empowering developers to create their own data processing pipelines in an agile fashion, when and how they need them, within reasonable limits. In that respect it has much in common with another model we think highly of: DevOps.
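
To make that concrete, here is a hypothetical self-service pipeline of the kind the paragraph describes: a product team querying the raw logs directly, without waiting for a central ETL team or an up-front data model. All paths and field names are illustrative assumptions, not part of the Radar text.

```python
# A sketch of an ad-hoc, team-owned pipeline against the raw zone of the lake.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("checkout-error-investigation").getOrCreate()

# Read the team's slice of the raw, unmodelled application logs.
logs = spark.read.json("hdfs:///lake/raw/app-logs/2015/05/")

# Shape the data for one specific question: which checkout errors occur most often?
checkout_errors = (
    logs
    .filter((F.col("service") == "checkout") & (F.col("level") == "ERROR"))
    .groupBy("error_code")
    .count()
    .orderBy(F.desc("count"))
)

checkout_errors.show(20, truncate=False)
```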

Jan 2015
Assess
Published: Jan 28, 2015
