Excerpt from ebook, Big Data: Data Science and Advanced Analytics, by DZone. Authored by Parth Patel – Field Engineer, Greg Wood – Field Engineer, and Adam Diaz – Director of Field Engineering
Data lakes are emerging as an increasingly viable solution for extracting value from big data at the enterprise level, and represent the logical next step for early adopters and newcomers alike. The flexibility, agility, and security of having structured, unstructured, and historical data readily available in segregated logical zones bring a bevy of transformational capabilities to businesses.
What many potential users fail to understand, however, is what defines a usable data lake. Often, those new to big data, and even well-versed Hadoop veterans, will attempt to stand up a few clusters and piece them together with different scripts, tools, and third-party vendors. This method is neither cost-effective nor sustainable.
In this article, we’ll describe how a data lake is much more than a few servers cobbled together: it takes planning, discipline, and governance to make an effective data lake.
Within a data lake, zones allow the logical and/or physical separation of data that keeps the environment secure, organized, and agile. Typically, the use of three or four zones is encouraged, but fewer or more may be leveraged. A generic 4-zone system might include the following:

1. Transient Zone – holds ephemeral data, such as temporary copies, streaming spools, or other short-lived data, before it is ingested.
2. Raw Zone – maintains raw data; this is also the zone where sensitive data must be encrypted, tokenized, or otherwise secured.
3. Trusted Zone – after data quality, validation, or other processing is applied to data from the Raw Zone, it becomes the "source of truth" in this zone for downstream systems.
4. Refined Zone – stores manipulated and enriched data, such as the output of tools that write back into the data lake.
This arrangement can be adapted to the size, maturity, and unique use cases of the business as necessary, but will leverage physical separation via exclusive servers or clusters, logical separation through the deliberate structuring of directories and access privileges, or some combination of both.
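To make logical separation concrete, here is a minimal sketch, assuming an HDFS-backed lake; the zone paths, group names, and permission modes are purely illustrative and not prescribed by the eBook. It simply emits the shell commands that would create one directory per zone and restrict each to its owning group.

```python
# Hypothetical sketch: declare data lake zones as HDFS directories with
# group-based access privileges. Paths, groups, and modes are illustrative.

ZONES = {
    "transient": {"path": "/datalake/transient", "group": "ingest",   "mode": "770"},
    "raw":       {"path": "/datalake/raw",       "group": "stewards", "mode": "750"},
    "trusted":   {"path": "/datalake/trusted",   "group": "analysts", "mode": "750"},
    "refined":   {"path": "/datalake/refined",   "group": "analysts", "mode": "775"},
}

def zone_setup_commands(zones: dict) -> list[str]:
    """Emit the hdfs dfs commands that would create each zone directory
    and restrict it to its owning group."""
    commands = []
    for name, cfg in zones.items():
        commands.append(f"hdfs dfs -mkdir -p {cfg['path']}")
        commands.append(f"hdfs dfs -chown hdfs:{cfg['group']} {cfg['path']}")
        commands.append(f"hdfs dfs -chmod {cfg['mode']} {cfg['path']}")
    return commands

if __name__ == "__main__":
    for cmd in zone_setup_commands(ZONES):
        print(cmd)  # review the commands, then run them against the cluster
```

The same convention extends to physical separation: each zone's path would simply point at a different cluster or storage tier instead of a sibling directory.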
As new data sources are added, and existing sources are updated or modified, maintaining a record of the relationships within and between datasets becomes more important. These relationships might be as simple as the renaming of a column, or as complex as a join across multiple tables from different sources, each of which might have several upstream transformations of its own.
In this context, lineage helps to provide both traceability to understand where a field or dataset originates and an audit trail to understand where, when, and why a change was made. This may sound simple, but capturing details about data as it moves through the lake is exceedingly hard, even with some of the purpose-built software being deployed today.
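As a rough illustration of what such a record needs to capture, here is a minimal sketch, assuming a simple in-memory model rather than any particular lineage product; the class and field names are hypothetical. Each record ties an output field or dataset back to its inputs (traceability) and notes who changed it, when, and why (the audit trail).

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """One edge in a lineage graph: an output produced from one or more inputs."""
    output: str            # e.g. "trusted.orders.customer_id"
    inputs: list[str]      # upstream datasets or fields
    transformation: str    # e.g. "rename", "join", "aggregate"
    performed_by: str      # who made the change (audit trail)
    performed_at: datetime # when the change happened
    reason: str = ""       # why the change was made

def rename_column(dataset: str, old: str, new: str, user: str, reason: str) -> LineageRecord:
    """Record the simplest kind of relationship: a renamed column."""
    return LineageRecord(
        output=f"{dataset}.{new}",
        inputs=[f"{dataset}.{old}"],
        transformation="rename",
        performed_by=user,
        performed_at=datetime.now(timezone.utc),
        reason=reason,
    )

# Trace where a field originated and why it changed.
record = rename_column("orders", "cust_id", "customer_id", "etl_svc", "standardize key names")
print(record.output, "<-", record.inputs, "|", record.reason)
```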
The entire process of tracking lineage involves aggregating logs at both a transactional level (who accessed the data, and what did they do?) and at a structural or filesystem level (what are the relationships between datasets and fields?). In the context of the data lake, this will include any batch and streaming tools that touch the data (such as MapReduce and Spark), but also any external systems that may manipulate the data, such as relational databases. This is a daunting task, but even a partial lineage graph can fill the gaps of traditional systems, especially as new regulations such as GDPR emerge; flexibility and extensibility are key to managing future change.
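Building on the record structure above, the sketch below shows one possible way to aggregate the two kinds of logs just described, transactional events and structural events, into a simple adjacency map that can answer upstream-dependency questions even when the graph is only partial. The log entries and dataset names are invented for illustration.

```python
from collections import defaultdict

# Hypothetical log entries; in practice these would be collected from batch and
# streaming tools (MapReduce, Spark) and from external systems such as relational databases.
transactional_log = [
    {"user": "etl_svc",  "action": "read",  "dataset": "raw.orders"},
    {"user": "etl_svc",  "action": "write", "dataset": "trusted.orders"},
    {"user": "analyst1", "action": "read",  "dataset": "refined.revenue"},
]

structural_log = [
    {"output": "trusted.orders",  "inputs": ["raw.orders"]},
    {"output": "refined.revenue", "inputs": ["trusted.orders", "trusted.customers"]},
]

def build_lineage_graph(structural_events):
    """Map each dataset to its direct upstream sources."""
    graph = defaultdict(set)
    for event in structural_events:
        graph[event["output"]].update(event["inputs"])
    return graph

def upstream(graph, dataset):
    """Walk the graph to find every dataset a given dataset ultimately depends on."""
    seen, stack = set(), [dataset]
    while stack:
        for parent in graph.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

graph = build_lineage_graph(structural_log)
print(upstream(graph, "refined.revenue"))  # traceability: where did this data come from?
print([e for e in transactional_log if e["dataset"] == "trusted.orders"])  # audit trail: who touched it?
```

Walking the adjacency map answers the traceability question, while filtering the transactional log answers the audit question; the two views together form the lineage picture described above.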
To learn more about data quality, privacy and security, and data lifecycle management, download the entire eBook here.