Why the industry is moving towards an open data ecosystem
November 3, 2022
Is vendor lock-in suddenly out of fashion? Looking at recent headlines, it very much seems so.
Google: “Building the most open data cloud ecosystem: Unifying data across multiple sources and platforms”
Google announced several steps to provide the most open and extensible Data Cloud and to promote open standards and interoperability between popular data applications. Some of the most interesting steps are the following:
- Support for major data formats in the industry, including Apache Iceberg, and soon Delta Lake and Apache Hudi.
- A new integrated experience in BigQuery for Apache Spark, an open-source query engine.
- Expanding integrations with many of the most popular enterprise data platforms to help remove barriers between data silos, give customers more choice, and prevent data lock-in.
Snowflake: “Iceberg Tables: Powering Open Standards with Snowflake Innovations”
Snowflake recently announced Iceberg Tables to combine Snowflake capabilities with the open-source projects Apache Iceberg and Apache Parquet to solve challenges such as control, cost, and interoperability. With Iceberg tables, companies can benefit from the features and performance of Snowflake but can also use open formats, tools outside of Snowflake, or their own cloud storage.
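As a sketch of what this interoperability looks like in practice, an Iceberg table kept in your own cloud storage can be queried by any Iceberg-aware engine, not just Snowflake. The example below uses Apache Spark; the bucket, catalog, and table names are hypothetical, and the exact wiring depends on your setup.

```python
from pyspark.sql import SparkSession

# Requires Iceberg's Spark runtime on the classpath, e.g. via
#   --packages org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:1.1.0
spark = (
    SparkSession.builder
    .appName("iceberg-interop")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    # Hypothetical bucket: the same storage a Snowflake Iceberg Table lives in.
    .config("spark.sql.catalog.lake.warehouse", "s3://my-company-lake/warehouse")
    .getOrCreate()
)

# Any Iceberg-aware engine pointed at the shared storage can read the table.
spark.table("lake.analytics.orders").show()  # hypothetical namespace and table
```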
To put that into perspective: two leading providers of proprietary cloud data warehouses just announced that they are opening up their systems. This is remarkable because keeping customers and their data locked into their solutions is an excellent business for these providers.
Why is this happening, and why are players such as Google and Snowflake joining the movement toward an open data ecosystem?
Why we need an open data ecosystem
Digital transformation is held back by challenges that can only be tackled and solved with an open approach. For many companies, a significant share of data use cases is poorly served by proprietary warehouse solutions, including complex and machine learning use cases such as demand forecasting or personalized recommendations. Companies also require the flexibility to adjust quickly to a fast-changing environment and to take full advantage of all their data. Being dependent on the roadmap of a single provider limits the ability to innovate. If a new provider offers a solution that is ideal for your needs or complements your existing setup, you want to be able to take that opportunity. This interoperability and flexibility are only possible with open standards.
On top of that, the current macro-environment forces companies to optimize their spending on data analytics and machine learning, and costs can escalate quickly with proprietary cloud data warehouses.
The convergence of Data Lakes and Data Warehouses
We saw that cloud data warehouse providers are moving towards an open ecosystem, joining companies such as Databricks and Dremio that have long been at the forefront of the movement. They are pushing for the Data Lakehouse approach.
In a nutshell, the Data Lakehouse combines the advantages of data warehouses and data lakes. It is open, simple, flexible, and low-cost, and it is designed to allow companies to serve all their Business Intelligence and Machine Learning use cases from one system.
A crucial part of this approach is open data formats such as Delta Lake, Iceberg, or Hudi. These formats provide a Metadata and Governance Layer, or, let's say, the "magic" that solves the problems of traditional data lakes: traditional data lakes do not enforce data quality and lack governance, users cannot work on the same data simultaneously, and only limited metadata about the data layout is available, which makes loading and analyzing data very slow.
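To make that concrete, here is a minimal sketch of what such a metadata layer buys you, using Delta Lake's Python bindings; the paths and schemas are hypothetical.

```python
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip  # pip install delta-spark

builder = (
    SparkSession.builder
    .appName("lakehouse-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaSparkSessionCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

path = "/tmp/events_delta"  # hypothetical path

# Every write is an ACID transaction recorded in the Delta transaction log.
df = spark.createDataFrame([(1, "click"), (2, "view")], ["user_id", "event"])
df.write.format("delta").mode("overwrite").save(path)

# Schema enforcement: appending a mismatched schema fails loudly instead of
# silently corrupting the table, as it could in a plain-Parquet data lake.
bad = spark.createDataFrame([("oops",)], ["wrong_column"])
try:
    bad.write.format("delta").mode("append").save(path)
except Exception as err:
    print("Rejected by schema enforcement:", type(err).__name__)

# Versioned metadata enables time travel and safe concurrent readers/writers.
spark.read.format("delta").option("versionAsOf", 0).load(path).show()
```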
How Data Lakehouses benefit companies
Companies such as H&M and HSBC have already adopted the open Data Lakehouse approach, and many others will follow.
H&M, for example, faced the problem that their legacy architecture couldn’t support company growth. Complex infrastructure took a toll on the Data Engineering team, and scaling was very costly. All of this led to slow time-to-market for data products and ML models. Implementing a Data Lakehouse approach, in this case with Databricks on Delta Lake, led to simplified data operations and faster ML innovations. The result was a 70% reduction in operational costs and improved strategic decisions and business forecasting.¹
HSBC, on the other hand, replaced 14 databases with Delta Lake and improved engagement in their mobile banking app by 4.5 times through more efficient data analytics and data science processes.²
So, does the Data Lakehouse solve it all? Not quite; the reality is that some challenges still need to be addressed.
Pending problems
Firstly, the performance of solutions based on open formats is not yet good enough. There is a heated debate ongoing on Warehouse vs. Lakehouse performance, but I think it's fair to say that, at least in some use cases, the Lakehouse still needs to catch up. Data Warehouses are optimized for processing and storing structured data and perform very well in those cases, for example, when you want to identify the most profitable customer segments for the marketing team based on information collected from different sources.
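For illustration, that segmentation query boils down to a structured scan and aggregation of the following kind; the DataFrame below is a hypothetical stand-in for data joined from several sources.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("segments").getOrCreate()

# Hypothetical records standing in for data collected from different sources.
customers = spark.createDataFrame(
    [("retail", 120.0), ("wholesale", 340.0), ("retail", 80.0), ("online", 210.0)],
    ["segment", "profit"],
)

# Total profit per segment, most profitable first: a structured scan plus
# aggregation, the kind of workload warehouses are heavily optimized for.
(customers
    .groupBy("segment")
    .agg(F.sum("profit").alias("total_profit"))
    .orderBy(F.desc("total_profit"))
    .show())
```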
Secondly, working with open formats is complex, and you need a skilled engineering team to build and maintain your data infrastructure and ensure data quality.
How Qbeast supports the open data ecosystem
At Qbeast, we embrace the open data ecosystem and want to do our part to push it forward. We developed the open-source Qbeast Format, which improves existing open data formats such as Delta Lake.
We enhance the metadata layer and use multi-dimensional indexing and efficient sampling techniques to improve performance significantly. Simply put, we organize the data smarter so it can be analyzed much faster and cheaper.
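In practice, this is exposed through the open-source qbeast-spark connector. The sketch below follows its public API, with hypothetical paths and columns; option names, package coordinates, and required session configuration may differ across versions.

```python
from pyspark.sql import SparkSession

# Requires the qbeast-spark connector on the classpath, e.g. via
#   --packages io.qbeast:qbeast-spark_2.12:<version>
# (depending on the version, a Qbeast session extension config may also be needed).
spark = SparkSession.builder.appName("qbeast-demo").getOrCreate()

df = spark.read.parquet("/tmp/events_parquet")  # hypothetical source data

# Writing in Qbeast Format builds a multi-dimensional index over the chosen
# columns on top of the underlying open format.
(df.write
   .format("qbeast")
   .option("columnsToIndex", "user_id,timestamp")  # hypothetical columns
   .save("/tmp/events_qbeast"))

# Reads can push sampling down to the index: a 10% sample touches only the
# files needed for that fraction instead of scanning the whole table.
qdf = spark.read.format("qbeast").load("/tmp/events_qbeast")
print(qdf.sample(0.1).count())
```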
We also know that data engineering is a bottleneck for many companies. Serving the data requirements of Business Intelligence or Machine Learning use cases can be tricky: data needs to be extracted, transformed, and served correctly, and developing and maintaining these ETL processes is a considerable challenge, especially when your engineering power is limited. At Qbeast, we built a managed solution to ensure those processes run smoothly. We handle data ingestion and transformations, ensure data quality, and make sure the data layout is optimal for consumption so that the tools you use for BI or ML run as efficiently as possible. This means we not only help break engineering bottlenecks but also help companies realize significant cost savings.
Because we build on open-source formats and tools, companies working with us always benefit from the latest and best the open data ecosystem has to offer.
An open data ecosystem is the future
We are extremely excited to see the industry moving towards an open data ecosystem, and we are convinced that it is the future. As Sapphire Ventures points out in their blog, the benefits for customers are clear: cost-effectiveness, scalability, choice, democratization, and flexibility.
At Qbeast, we are dedicated to accelerating this transition and supporting an ecosystem that enables companies to pick the right tools from the best providers without worrying about compatibility and switching costs. To power true innovation.