Data architecture: understanding the essentials of Data Warehouses, Data Lakes, and Data Lakehouses

The choice of data storage and modelling architecture can significantly affect a company's process efficiency, scalability and flexibility. Your data structure will have a huge impact on effective analytics, information access and adaptability to changing market conditions. In this article, we will look at three popular data architectures: the Data Warehouse, the Data Lake and the Data Lakehouse.

Our analysis will focus on their fundamental premises and features, as well as the evolution of data organisation. The article will cover not only the theory but also the practical challenges and benefits of each of these architectures.

Data Warehouse. A traditional approach to data architecture

A Data Warehouse serves as a central repository for storing and managing business data collected from various sources within an organisation. It is a specially organised database designed to facilitate data analytics, reporting and strategic decision-making.

A key feature of a Data Warehouse is that it allows data derived from multiple systems to be consolidated, integrated and transformed, providing a consistent and uniform information source for business users. 

The classical Data Warehouse structure is based on a relational data model, which means that data is stored in tables with predefined relationships between records. The basic objects of the architecture are Fact and Dimension tables. The structure is organised to enable straightforward querying and data analytics with SQL.
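
To make this concrete, here is a minimal star-schema sketch using Python's built-in sqlite3 module. The table and column names (dim_product, fact_sales and so on) are illustrative assumptions, not taken from any particular product:

```python
# Minimal star-schema sketch: one fact table, one dimension table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (
        product_key  INTEGER PRIMARY KEY,
        product_name TEXT,
        category     TEXT
    );
    CREATE TABLE fact_sales (
        sale_id     INTEGER PRIMARY KEY,
        product_key INTEGER REFERENCES dim_product(product_key),
        sale_date   TEXT,
        amount      REAL
    );
""")
conn.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                 [(1, "Laptop", "Electronics"), (2, "Desk", "Furniture")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                 [(1, 1, "2024-01-05", 1200.0),
                  (2, 1, "2024-01-06", 1150.0),
                  (3, 2, "2024-01-06", 300.0)])

# A typical analytical query: join the fact table to a dimension
# and aggregate a measure per dimension attribute.
for row in conn.execute("""
        SELECT d.category, SUM(f.amount) AS revenue
        FROM fact_sales f
        JOIN dim_product d ON d.product_key = f.product_key
        GROUP BY d.category
    """):
    print(row)
```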

Another important element is the ETL (Extract, Transform, Load) process, which extracts data from various sources, transforms it and loads it into the Data Warehouse. All data within a Data Warehouse is derived from structured data sources.
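
The ETL flow can be sketched as three small functions. This is a deliberately simplified illustration in plain Python; the source format, field names and cleaning rules are assumptions made for the example:

```python
# Simplified ETL pipeline: extract from a raw export, standardise,
# then load into a warehouse table.
import csv
import io
import sqlite3

RAW_EXPORT = """customer_id,country,revenue
101,pl,1500.00
102,DE,
103,de,250.50
"""

def extract(raw: str) -> list[dict]:
    """Extract: read records from a source system export."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: drop incomplete records, unify codes, cast types."""
    clean = []
    for r in rows:
        if not r["revenue"]:               # reject rows failing quality checks
            continue
        clean.append((int(r["customer_id"]),
                      r["country"].upper(),  # standardise country codes
                      float(r["revenue"])))
    return clean

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Load: write the conformed records into the warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS dw_customers "
                 "(customer_id INTEGER, country TEXT, revenue REAL)")
    conn.executemany("INSERT INTO dw_customers VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_EXPORT)), conn)
print(conn.execute("SELECT * FROM dw_customers").fetchall())
```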

The figure below shows how this classical data architecture works: 

Infographic showing classical Data Warehouses and how they work

Data Warehouse: advantages

The main benefit of a classical Data Warehouse architecture is that it gives you the ability to improve data standards, quality and consistency within your organisation. Organisations generate data sets from a variety of often inconsistent sources.

These are consolidated and standardised in the Data Warehouse to create a highly reliable end product, which is technically known as a single source of truth. As a result, you can have a high level of confidence that your business reports are correct.  

All data is optimised for fast and complex analysis to make it easy for users to generate business reports.

Since it is correctly processed, the data can be easily accessed by business intelligence solutions (e.g. Power BI) and quickly fed into reports and spreadsheets for comparison. Data Warehouses are designed to deliver all the information you need for strategic decision-making.

Historical data

Data Warehouses can also store historical data that allows you to track changes and trends over time, which is important for long-term trend analysis. With access to historical data, you can better understand how situations evolve.

The concept of slowly changing dimensions (SCD) additionally allows you to track changes in Dimension tables, which helps maintain data consistency as the data evolves over time.
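
As an illustration, here is a minimal sketch of a Type 2 SCD update in plain Python, where a changed attribute closes the current row and appends a new version instead of overwriting it. The column names (valid_from, valid_to, is_current) are common conventions, not mandated by any specific tool:

```python
# Minimal Type 2 slowly changing dimension (SCD2) sketch.
from datetime import date

dim_customer = [
    {"customer_id": 7, "city": "Warsaw",
     "valid_from": date(2022, 1, 1), "valid_to": None, "is_current": True},
]

def apply_scd2(dim: list, customer_id: int, new_city: str, change_date: date):
    """Close the current row and append a new version instead of overwriting."""
    for row in dim:
        if row["customer_id"] == customer_id and row["is_current"]:
            if row["city"] == new_city:
                return                      # no change, nothing to version
            row["valid_to"] = change_date   # close out the old version
            row["is_current"] = False
    dim.append({"customer_id": customer_id, "city": new_city,
                "valid_from": change_date, "valid_to": None,
                "is_current": True})

apply_scd2(dim_customer, 7, "Krakow", date(2024, 6, 1))

# Both versions are preserved, so historical facts can still join
# to the city that was valid at the time of the transaction.
for row in dim_customer:
    print(row)
```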

Data Warehouse: limitations

Conventional relational Data Warehouses are a staple tool that many companies use to manage large data volumes effectively and turn them into valuable business information. However, this traditional architecture has several drawbacks.

Deployment and maintenance

Some of its downsides include deployment and maintenance costs. A classical Data Warehouse may be expensive to implement in terms of hardware, software and human resources needed for ongoing system maintenance.

The complexity of the ETL process and the demands of adaptation to specific business needs may also have a considerable impact on the budget.

Complexity and scalability issues

Complex design is another problem. Creating a classical Data Warehouse architecture, especially in diverse, high-complexity environments, usually takes a lot of time and requires advanced expertise.

The scalability of Data Warehouses is also often limited: as data volumes increase, you may face significant additional investments in infrastructure and technology.

Delayed data access

This design also has the downside of often delaying access to data. In a business environment where a quick response to change is often of the essence, access to data in near real time is crucial. This is difficult to achieve with classical ETL integration. In addition, the process of adding new features or any further development is rather time-intensive, which may also significantly hamper a dynamic business.

Structured data orientation

Another weakness of a classical Data Warehouse is its orientation toward structured data and its relational data model. Unstructured data, such as multimedia, text or social media data, is difficult for Data Warehouses to process, and this is precisely the type of data the world needs more and more.

Unstructured data sources are of key importance to data science initiatives and innovative AI projects, and effective, flexible access to such data is a matter of growing importance.

Examples of traditional Data Warehouses

  • Microsoft SQL Server – MS SQL allows organisations to store their business data; integrated solutions such as SQL Server Integration Services (SSIS) and SQL Server Analysis Services (SSAS) support ETL and analysis processes.
  • Oracle Exadata – a dedicated data storage and processing solution, with guaranteed high performance and scalability. Oracle Exadata is often used as a data warehousing platform in the Oracle environment.
  • IBM Db2 Warehouse – a Data Warehouse management product delivered by IBM, it provides advanced analytics, integration with data reporting and visualisation solutions, and enables flexible data processing.
  • SAP BW (Business Warehouse) – this Data Warehouse system by SAP integrates data from various sources in a single repository. It is optimised for business analytics and works with other SAP products.
  • Snowflake – flexible, scalable and easy-to-use, Snowflake enables cloud data storage and integration with different analytics tools.
  • Vertica (Hewlett Packard Enterprise) – an analytics-oriented Data Warehouse platform that ensures high performance and allows you to store and work on large data sets.

Data Lake, or a flexible storage structure

As a term, “Data Lake” was first used in 2010 by James Dixon, the founder and former CTO of Pentaho. In his description of the concept, Dixon focused on the need to overcome the limitations of the traditional warehousing model, with its low degree of flexibility and scalability. He said: 

 “If you think of a Data Mart as a store of bottled water, cleansed and packaged and structured for easy consumption, the Data Lake is a large body of water in a more natural state. The contents of the Data Lake stream in from a source to fill the lake, and various users of the lake can come to examine, dive in or take samples.”

Data Lake repository architecture

When to choose a Data Lake?

If your company deals with large and diverse data sets, a Data Lake will be a very good choice. It will allow you to store various types of data, including structured, unstructured and semi-structured data. It can contain textual data, multimedia files, IoT device data and more.

  • The structure of a Data Lake is scalable and can be easily expanded to handle increasing data volumes. It relies on a distributed environment, which makes it easier to add new resources to the system.
  • In a Data Lake, you don’t need a predefined schema. Its flexibility means that you can store data in its original form and only transform it for analytics purposes (see the sketch after this list).
  • Data Lakes support real-time data, which enables fast analytics and a quick business response. This is particularly important in dynamic environments and rapidly evolving market conditions.
  • Data Lake architecture is often used for Big Data analytics. Because of their flexibility and scalability, Data Lakes can store huge data volumes, which can then be analysed with various analytics tools and technologies.
  • Data in a Data Lake is available to many applications and analytics tools, and can be used by employees from different departments, which facilitates cooperation and data-driven decision-making.
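
The schema-on-read idea behind Data Lakes can be sketched in a few lines of Python: raw, heterogeneous events are landed in a "lake" directory untouched, and a structure is imposed only when the data is read for analysis. Paths and field names below are illustrative assumptions:

```python
# Schema-on-read sketch: store raw events as-is, shape them at read time.
import json
from pathlib import Path

lake = Path("lake/events")          # hypothetical landing zone
lake.mkdir(parents=True, exist_ok=True)

# Ingest: store events exactly as they arrive, no upfront template.
events = [
    {"type": "click", "user": "u1", "page": "/home"},
    {"type": "sensor", "device": "d42", "temp_c": 21.5},
]
for i, event in enumerate(events):
    (lake / f"event_{i}.json").write_text(json.dumps(event))

# Read: apply a schema only for the question being asked,
# e.g. "all click events, projected to user and page".
clicks = []
for path in lake.glob("*.json"):
    record = json.loads(path.read_text())
    if record.get("type") == "click":
        clicks.append({"user": record["user"], "page": record["page"]})
print(clicks)
```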

Data Lake: limitations

Despite their many advantages, Data Lakes are not free of limitations, which may pose a challenge for your organisation:

Complex data exploration

The flexibility of Data Lakes means that there is no single data storage schema. Without adequate control and management, there is a risk that the data may become chaotic, making it more difficult to analyse and understand available information. Moreover, because the data structure is very diverse and disordered, its exploration may be more complex than in a traditional Data Warehouse and may require advanced tools and skills. Data Lakes are therefore better suited to data analysts than to business professionals or managers.

Low data quality

Data Lakes can store data in its original form, which may pose data quality issues. Disordered, erroneous or inconsistent data may have a negative impact on the quality of your analytics and decision-making. Another weakness is the lack of a uniform data model. In contrast to classical Data Warehouses, Data Lakes do not enforce a uniform schema, which makes it more difficult for users to interpret and understand available data.

Security issues

Managing data security within a Data Lake can also be complicated, especially if your data is very diverse. You need to set up appropriate access and control mechanisms to prevent unauthorised access to sensitive information. For regulated data, such as personal information, Data Lakes require complicated safeguard mechanisms to ensure compliance with privacy laws and industry regulations.

Despite these challenges, when managed effectively with appropriate tools and strategies, Data Lakes have great potential and may bring important benefits by enabling diverse data analytics.

Data Lake examples

Data Lakes are platforms for collecting, storing and processing diverse data in their original form. Below is a list of some of the most popular tools and platforms for building and managing Data Lakes:

  • Azure Data Lake Storage (ADLS) – a cloud data storage service provided by Microsoft Azure, designed specifically to support Data Lakes. It enables you to store large volumes of data in the form of files and can be integrated with other cloud-based data analytics tools.
  • Amazon S3 (Simple Storage Service) – a public cloud data storage service available as part of the AWS platform. Although it is not specifically designed for managing Data Lakes, it is often used to store large volumes of different types of data (the basic principle of a Data Lake).
  • Google Cloud Storage – similar to the above, offered by Google Cloud.
  • Hadoop Distributed File System (HDFS) – HDFS is a distributed file system often used in the Apache Hadoop ecosystem. HDFS enables you to store large, distributed data volumes, which is characteristic of the Data Lake approach.
  • IBM Cloud Object Storage – a cloud object storage service, often used as a platform to build Data Lakes. It ensures flexibility and scalability for the storage of large data volumes.

It is worth noting that the deployment of a Data Lake involves not only a data storage platform, but also tools and services for access control management, data processing and data analytics. The choice of tools will depend on the needs of your organisation, data type and cloud environment preferences.

Data Lakehouse. Combining the traditional and the modern in data storage

A Data Lakehouse is a hybrid of the two architectures described above: the Data Lake and the Data Warehouse. It is a modern data storage and management architecture that combines the flexibility and scalability of Data Lakes with the data processing and analysis power of traditional Data Warehouses. It also has certain unique features of its own, designed in response to the challenges posed by the other two architectures.

Data platform. Data Lakehouse architecture

Why choose a Data Lakehouse?

Data Lakehouses provide the flexibility of diverse data storage without requiring a predefined data structure, which allows you to collect data in its original form. However, in a Data Lakehouse, the management of metadata, such as schema definitions, is much more developed than in a pure Data Lake.

This helps you track and understand data stored in this structure much better. Moreover, in contrast to Data Lakes, Data Lakehouses offer processing and analytics tools and mechanisms similar to those available in Data Warehouses. This means that data can be processed in a way that is optimised for fast querying and reporting.
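
The following toy sketch, in plain Python, illustrates the underlying idea: data files accompanied by a metadata log that records schema and version history. Real platforms such as Delta Lake use far more sophisticated transaction logs; this example only shows the concept, and all names in it are hypothetical:

```python
# Toy Lakehouse table: data files plus a metadata log recording
# each commit's schema, version and file.
import csv
import json
from pathlib import Path

table = Path("lakehouse/sales")
table.mkdir(parents=True, exist_ok=True)
log = table / "_metadata_log.json"

def commit(rows, schema, version):
    """Write a data file and record the commit in the metadata log."""
    data_file = table / f"part_{version}.csv"
    with data_file.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=schema)
        writer.writeheader()
        writer.writerows(rows)
    history = json.loads(log.read_text()) if log.exists() else []
    history.append({"version": version, "schema": schema,
                    "file": data_file.name})
    log.write_text(json.dumps(history, indent=2))

commit([{"id": 1, "amount": 100}], schema=["id", "amount"], version=0)
commit([{"id": 2, "amount": 250}], schema=["id", "amount"], version=1)

# The log lets any engine discover the table's schema and full
# version history before touching the data files.
print(json.loads(log.read_text()))
```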

Security and historical data

Data Lakehouses offer more advanced security and access control solutions, which ensures better data protection and legal compliance. Some platforms also provide data versioning features so you can track changes over time. This can be important for historical analytics and auditing.

Integration with external tools

Data Lakehouse platforms provide different APIs to enable easy integration with a variety of analytics tools and offer easy-to-use visual interfaces. The possibility of integration with different analytics solutions makes it easier to work with data and generate reports. In addition, platforms can often be integrated with popular business intelligence tools such as Microsoft Power BI, Tableau or Qlik to create interactive data visualisations, reports and management dashboards. Moreover, Data Lakehouses usually support SQL, which makes them easier to navigate for users familiar with traditional Data Warehouses. 

Data Lakehouses are an answer to the need to combine the flexibility of Data Lakes with the processing and analytics power of traditional Data Warehouses. This approach aims to neutralise the downsides of both structures, offering a more comprehensive and versatile solution for data-driven organisations.

Data Lakehouse: limitations

While Data Lakehouses represent a significant step forward and an attempt to make the most of both worlds, their implementation may be quite challenging.

Loss of control over data

Because of their internal complexity, it is a mistake to treat Data Lakehouses as an ideal “all-in-one” structure or “all-purpose platform”. Even though the solution aims to ensure consistency, the risk of losing control over the data structure is still real, especially if your data set is very large and diverse. Moreover, metadata management can be complicated; even though it can, indeed, provide a data control advantage, it may require much more setup and maintenance work.

High maintenance costs

Like other advanced data architectures, a Data Lakehouse may be cost-intensive to maintain. Data Lakehouses combine the flexibility of Data Lakes with the efficiency of Data Warehouses, which means their structure is usually more complex and requires advanced setup, management and maintenance.

Large amounts of data will also affect the cost: if your organisation works at scale and stores huge data volumes, costs will naturally be higher. To deploy a Data Lakehouse, or other advanced technologies, you might also first need to train your employees, which in turn entails additional training costs.

However, these downsides are relative and depend on your business context and how well the data structure matches the needs of your organisation. Despite all these potential challenges, Data Lakehouses are still a powerful tool that can deliver value by improving data analytics and enabling more flexible management.

Data Lakehouse examples

Even though “Data Lakehouse” is a relatively new and not always clearly defined term, some platforms and tools are already designed to combine the features of Data Lakes and traditional Data Warehouses. Below is a list of some such platforms and tools that provide Data Lakehouse-type storage:

  • Azure Synapse Analytics – a cloud data analytics platform from Microsoft Azure, which combines the features of a Data Lake and a Data Warehouse. It enables the integration of data from different sources and advanced large-scale analytics.
  • Delta Lake (Databricks) – Delta Lake is an open-source project originally created by Databricks and offered by it as a managed service. Delta Lake combines the features of a Data Lake and a Data Warehouse, adding transactional features to Apache Spark, which allows you to update, insert and delete data. A notable deployment at the pharmacy chain Walgreens is discussed in a separate case study.
  • Snowflake Data Lakehouse – a cloud-based storage and analytics platform offering Data Lakehouse features. In Snowflake, users can integrate data from various sources, store them in Data Lakes and at the same time harness the advanced analytical features of a Data Warehouse.
  • Amazon Redshift – an AWS cloud data warehousing service. Redshift Spectrum allows you to query data stored in the Amazon Simple Storage Service (S3) as a Data Lake, while the Redshift engine ensures the high efficiency of a traditional Data Warehouse.
  • Google BigQuery Omni – a multi-cloud analytics service that combines the features of a Data Lake and a Data Warehouse. It allows you to analyse data stored on different clouds and derived from different data sources, improving flexibility and providing a uniform interface for data analytics.

It is worth noting that the Data Lakehouse concept is still dynamic, and new platforms and tools may soon appear in the market in response to growing data management and analytics needs.

Which architecture to choose?

Choosing between a Data Warehouse, Data Lake and Data Lakehouse will depend on the characteristics of your data, the goals of your organisation and your business strategy. Each of these architectures represents a unique approach to data collection, storage and processing.

Human resources and IT team skills will also play an important role. You need to adapt your choice to your resources and your employees, as well as to the needs of your enterprise as a whole.

Please note that each of the three architectures has its benefits and limitations. A Data Warehouse may be expensive, but it is also more effective in terms of business analysis. A Data Lake is more scalable, but requires complicated data quality management.

Lastly, a Data Lakehouse, which combines these two approaches, tries to balance out these aspects, offering both flexible architecture and efficiency. What might be ideal is a hybrid architecture approach that would tap the benefits of different architectures depending on specific business needs.