

Data mesh is revolutionizing the way organizations manage data. Unlike traditional centralized models, data mesh uses a decentralized, domain-oriented structure. But how does governance work in such a distributed system?
At ACA Group, we believe data mesh is an answer to the challenge of managing data by focusing on building a decentralized, self-serve data ecosystem. The goal is to embed data-driven innovation within each department or team, making everyone in the organization responsible for creating reusable data that fuels new products and services across departments.
In a data mesh, it is not only the management of ownership and infrastructure that changes. The key to success is transforming data governance itself. Instead of making a centralized IT team responsible for data governance, data mesh distributes that responsibility across different teams.
This approach, known as "federated computational governance", ensures active participation from both data-producing and data-consuming teams in crafting and adopting governance policies.
Four pillars of data mesh and their governance challenges
To understand the importance of governance in a data mesh, we need to break down the core principles of a data mesh and how they relate to data governance challenges:
1. Decentralization
In a data mesh, data ownership and responsibility are distributed across different business domains or teams. Each domain becomes a self-contained unit, managing its own data products. This also means that each data product and domain is self-governing, but needs to be interoperable with other data products and domains.
2. Domain-oriented approach
Instead of a monolithic data warehouse, a data mesh is made up of interconnected data products. This implies that each data product might come with its own “local dialect”. The challenge here is how to speak the same language, without speaking the same language.
3. Data as a product
This approach treats data as a product, with each domain creating and maintaining data products that are discoverable, accessible, and reusable. Metadata management becomes an important topic, since metadata is used to discover, access, integrate with and use the data encapsulated within a data product.
4. Self-serve platform
The self-serve platform acts as both engine and control panel, empowering data producers and consumers alike. Developer portals, data catalogs, lineage tools, and collaboration spaces facilitate seamless navigation, while automated policy enforcement and regular audits ensure compliance and promote data product quality without manual intervention. Automating governance is a core challenge associated with the self-serve platform.

Now that you have a better understanding of the central building blocks and challenges of data governance in a data mesh, let’s take a closer look at each of these challenges individually.
Federated Governance
A standout feature of data mesh is federated governance. But what does it actually mean?
“Federated” refers to the fact that while each domain (and data product within those domains) has its own autonomy, they come together to hash out a few things that are relevant and valuable for everyone. You might think of it as a parliamentary democracy, where representatives come together to make joint decisions, which then need to be broadly implemented.
This cross-domain collaboration means that quite a few teams are going to be involved.
Federated Governance Team
This is a group of domain representatives and experts who collaborate across business units and areas of expertise. They ensure data quality, compliance, and alignment with organizational goals, and oversee tasks such as:
- Automated data quality assessments
- Data access and privacy management
- Ensuring data products and datasets can be shared and reused
In short, this team defines standardized data governance policies that keep data products and datasets shareable and reusable while safeguarding overall quality. To continue our earlier comparison, the governance team is like a “parliament” that debates and passes “laws”.
Platform Team
This team is essential to automate and enforce the governance policies defined by the governance team on the self-serve platform. They ensure that data products can adopt these policies with minimal effort, promoting interoperability and collaboration without introducing unnecessary overhead.
Domain Teams
Aligned with business units, domain teams handle operational data governance within their own domains. Responsibilities include:
- Data mapping and documentation
- Ensuring data quality
- Implementing standards defined by the federated governance team
Importantly, each domain team has the autonomy and resources to execute the standards defined by the federated governance team.
In summary
While local domain teams make decisions specific to their domain, federated data governance ensures global rules are applied to all data products and their interfaces. These rules must ensure a healthy and interoperable ecosystem.
How does federated data governance work?
Let’s start with an important note: Federated Governance requires a different way of thinking compared to more traditional governance approaches.
Federated governance is focused on promoting autonomy and interoperability as much as possible, keeping interference by a centralized team to an absolute minimum. Do you want to successfully implement federated data governance in your organization? Then, make sure you establish the following key foundations:
- Culture of ownership
Teams must feel accountable for their data. This requires a high level of maturity in data literacy, and a willingness to invest in training and continuous education on data management and governance best practices.
- Robust data infrastructure
You need to be ready to invest in scalable and flexible data infrastructure that supports decentralized data management.
- Governance framework
You will need a clear governance framework that defines roles, responsibilities, and processes. This framework should be flexible enough to adapt to the needs of different domains while maintaining overall coherence.
- Cross-functional collaboration
Collaboration between IT, data professionals, and business units is essential.
Enterprise ontology: bridging domain-specific language gaps
Each domain can have its own specific lingo, creating challenges when terms differ in definition across teams. To bridge the gaps between domains, we need a solid basis for “translation” and a common understanding of terms. This is where the enterprise ontology comes in.
What is an enterprise ontology?
You can see it as a large, hierarchically structured “dictionary” that links concepts used in different domains to each other based on a common denominator.
For example: a sales team and a finance team both use the term “customer”, but the definitions for this term used by each team are somewhat different.
- The sales team considers anyone who has received a quote a customer.
- The finance team defines a "customer" as someone with a signed contract and invoicing details; everyone else is a "prospect".
Without a shared ontology, combining the data products from these teams would yield inconsistent results, highlighting the need for clarity.
How an enterprise ontology works
By tagging domain-specific terms to a unified concept (e.g., "customer") in the ontology, teams can reconcile differences and enable cross-domain understanding.
To bridge the gaps between domain-specific terms:
- Tag terms to a common ontology: Terms from each domain are linked to a unified concept in the enterprise ontology using tags. For instance, "sales customer" and "finance customer" might both map to a universal "customer" term.
- Leverage unique identifiers: Finding a unique identifier shared by all terms linked to the same concept is valuable, because it lets you correlate data about the same entity across domains. When consulting the ontology, you might discover, for instance, that the unique identifier across all “customers” is their email address.
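The two steps above can be sketched in code. The following Python snippet is a minimal, hypothetical illustration: the tag names, record structures, and the `correlate` helper are assumptions made for this example, not the API of any particular ontology tool.

```python
# Hypothetical sketch: reconciling domain-specific "customer" records via a
# shared ontology concept and a common unique identifier (email address).

# Each domain-specific term is tagged with a concept in the enterprise ontology.
ONTOLOGY_TAGS = {
    "sales.customer": "enterprise:customer",    # anyone who received a quote
    "finance.customer": "enterprise:customer",  # signed contract + invoicing details
}

sales_customers = [
    {"email": "ann@example.com", "quote_id": "Q-101"},
    {"email": "bob@example.com", "quote_id": "Q-102"},
]
finance_customers = [
    {"email": "ann@example.com", "contract_id": "C-55"},
]

def correlate(concept: str, datasets: dict, key: str = "email") -> dict:
    """Merge records from all domain terms tagged with `concept`, joined on `key`."""
    merged = {}
    for term, records in datasets.items():
        if ONTOLOGY_TAGS.get(term) != concept:
            continue  # this term maps to a different ontology concept
        for record in records:
            merged.setdefault(record[key], {key: record[key]}).update(record)
    return merged

customers = correlate("enterprise:customer", {
    "sales.customer": sales_customers,
    "finance.customer": finance_customers,
})
# ann@example.com now carries both her quote and her contract;
# bob@example.com is a sales customer only (a "prospect" to finance).
```

The email address plays the role of the shared unique identifier here; in practice, the ontology itself would document which identifier is valid across which terms.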

Metadata: Enabling prevention, validation, and auditing
Metadata, often described as "data about data," plays a crucial role in Federated Data Governance within a data mesh. It provides the necessary context to make data understandable, accessible, and usable across different domains.
Key roles of metadata in federated data governance
- Enhancing data discoverability
Metadata enables users to easily find and understand data across the organization. It includes practical information such as the data source(s), creation date, format, and usage instructions, but also information specifically linked to discoverability, like which enterprise ontology tags are applicable, who the owner is, or associated data products. This makes it easier for teams to locate (and integrate with) relevant data products.
- Improving data quality and trust
Metadata includes (or should include) data quality metrics and lineage information, helping teams ensure data accuracy and reliability. It allows users to trace data back to its origin, understand transformations it has undergone, and assess its quality.
- Facilitating compliance and security
Metadata helps in maintaining compliance with data privacy and security regulations. The data product team can specify who or which roles can access the data and for what purpose, ensuring accountability and transparency. Furthermore, tagging sensitive data elements helps to automatically apply data privacy and masking policies, ensuring regulatory compliance.
- Enabling interoperability
Metadata ensures that data from different domains can be integrated and used together. Standardized metadata formats and definitions enable seamless data exchange and interoperability.
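To make these roles concrete, a standardized metadata record might look like the sketch below. The field names are assumptions chosen for this illustration, not an established metadata standard; real implementations would typically build on a data contract or catalog specification.

```python
# Illustrative sketch of a standardized metadata record for a data product.
# Field names are hypothetical; a real platform would follow an agreed spec.

from dataclasses import dataclass, field

@dataclass
class DataProductMetadata:
    name: str
    owner: str                     # accountability: who answers for this product
    source_systems: list           # where the data originates
    created: str                   # creation date (ISO 8601)
    format: str                    # serialization format of the output port
    ontology_tags: list            # links to enterprise ontology concepts
    sensitive_fields: list = field(default_factory=list)  # drives masking policies
    quality_metrics: dict = field(default_factory=dict)   # e.g. completeness scores

orders = DataProductMetadata(
    name="sales.orders",
    owner="sales-domain-team",
    source_systems=["crm"],
    created="2024-01-15",
    format="parquet",
    ontology_tags=["enterprise:customer", "enterprise:order"],
    sensitive_fields=["customer_email"],
    quality_metrics={"completeness": 0.98},
)
```

Note how a single record serves all four roles at once: discoverability (name, ontology tags, owner), trust (quality metrics), compliance (sensitive fields), and interoperability (a shared, standardized shape).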
Best practices for metadata management in data mesh
In a data mesh, metadata should be managed as close to the source as possible. Each data product team is responsible for carefully authoring and curating the metadata associated with its data product. Exceptions can apply, such as data quality metrics added automatically by the self-serve platform, but the data product itself remains the source of truth and should be managed as such. In short, metadata should be decentrally managed, but centrally consumable.
Metadata management should be automated as much as reasonably possible and integrated with data governance tools to ensure accuracy and consistency. Key practices include:
- Careful metadata authoring and curation: Use tools that automatically capture and update metadata. Introduce processes and practices that motivate data product owners to take special care when they create and modify the metadata associated with their data product. The data product owner should ensure that the metadata presented to consumers gives a truthful representation of the content of the data product, so these consumers can make an informed decision about the value of the product for their use case.
- Standardization: Implement standardized metadata formats and definitions across all domains (where appropriate) to ensure maximal interoperability and ease of use.
- Automated validation: Define procedures and policies to automatically validate metadata, in order to spot mistakes and inconsistencies early on and prevent error propagation throughout the system. As always, prevention and validation come first, audits second.
- Regular audits: Conduct regular automated audits to ensure metadata accuracy and compliance with governance policies.
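Automated validation, in particular, can be very simple to start with. The sketch below checks metadata records for required fields before a data product is published; the required fields and rules are assumptions for illustration, and a real platform would likely validate against a formal schema instead.

```python
# Minimal sketch of automated metadata validation: reject or flag metadata
# records that are missing required fields or are internally inconsistent.
# The rule set here is hypothetical, chosen only to illustrate the idea.

REQUIRED_FIELDS = {"name", "owner", "created", "ontology_tags"}

def validate_metadata(metadata: dict) -> list:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = []
    missing = REQUIRED_FIELDS - metadata.keys()
    if missing:
        errors.append(f"missing required fields: {sorted(missing)}")
    if not metadata.get("ontology_tags"):
        errors.append("at least one enterprise ontology tag is required")
    for name in metadata.get("sensitive_fields", []):
        if name not in metadata.get("schema", {}):
            errors.append(f"sensitive field '{name}' not present in schema")
    return errors

good = {
    "name": "sales.orders",
    "owner": "sales-domain-team",
    "created": "2024-01-15",
    "ontology_tags": ["enterprise:order"],
    "schema": {"order_id": "string", "customer_email": "string"},
    "sensitive_fields": ["customer_email"],
}
bad = {"name": "finance.invoices", "ontology_tags": []}

errors = validate_metadata(bad)  # caught before the product is published
```

Running such checks in the publishing pipeline is what makes prevention and validation come before audits: inconsistent metadata never reaches consumers in the first place.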
The self-serve platform: automating governance
The self-serve platform embodies "Federated Computational Governance." It provides tools and infrastructure that allow both users and creators to independently access and manage data products without relying on a central IT team.
Key features of a self-serve platform
- Empowering domain teams: Self-serve platforms enable domain teams to take ownership of their data. They can create, manage, and use data products independently, fostering a sense of accountability.
- Ensuring compliance: Self-serve platforms integrate governance controls, ensuring that data usage complies with organizational policies and regulations, balancing autonomy with oversight.
- Metadata management: Through the use of the right tooling, the self-serve platform can facilitate the careful curation and automated validation of metadata. This eases both integration with the self-serve platform and management of metadata within the individual data products.
- Policy management: Governance policies can be translated to automated processes, which can be enforced through the platform. Automated policy enforcement ensures that data usage complies with internal guidelines and external regulations.
- Monitoring and auditing: Monitoring and auditing capabilities can be used to track data usage and ensure compliance. Regular audits help identify and address any governance issues. Alerting data product or domain teams of these issues and their consequences allows them to address them in their own way and at their own time.
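To illustrate what "computational" governance can look like in practice, the sketch below applies a masking policy automatically to any field tagged as sensitive in the data product's metadata. The function names, roles, and masking scheme are all assumptions for this example, not a description of a specific platform.

```python
# Hedged sketch of automated policy enforcement: the platform masks fields
# tagged as sensitive unless the consumer holds a privileged role.

import hashlib

def mask(value: str) -> str:
    """Pseudonymize a value with a short, stable hash."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def enforce_masking(rows: list, sensitive_fields: list,
                    consumer_roles: set, privileged_role: str = "pii-reader") -> list:
    """Return rows with sensitive fields masked for non-privileged consumers."""
    if privileged_role in consumer_roles:
        return rows  # policy allows raw access for this role
    return [
        {k: mask(v) if k in sensitive_fields and isinstance(v, str) else v
         for k, v in row.items()}
        for row in rows
    ]

rows = [{"order_id": "O-1", "customer_email": "ann@example.com"}]
masked = enforce_masking(rows, ["customer_email"], consumer_roles={"analyst"})
unmasked = enforce_masking(rows, ["customer_email"], consumer_roles={"pii-reader"})
```

Because the policy keys off metadata tags rather than hard-coded column names, the governance team can change what counts as sensitive without touching individual data products.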
Conclusion: striking the balance between autonomy and oversight
Embracing a data mesh architecture requires a different approach to governance. The traditional centralized model of managing data no longer suffices in a world where agility, autonomy, and cross-functional collaboration are paramount.
Federated data governance empowers domain teams to take ownership of their data products while ensuring alignment with global organizational standards. By distributing responsibilities across domain teams, supported by a self-serve platform and strong metadata management practices, organizations can enhance data quality, interoperability, and compliance without adding unnecessary complexity.
However, the success of data mesh governance depends on fostering a strong culture of data ownership, building a robust self-service platform, and establishing clear frameworks that promote seamless cross-domain collaboration.
That’s a lot of buzzwords for one sentence, but it rings true nonetheless:
- Data ownership holds people accountable for the data they create and maintain, while allowing them to take full control of their data products.
- Strong infrastructure and a self-service platform are needed to facilitate this practice of ownership, giving data product teams the autonomy they need to put their product out there, while also allowing for collaboration and sharing.
- Clear governance frameworks are needed to establish what quality looks like and to guide data product teams in implementing best practices related to integration, collaboration, and more.
The key to thriving in data mesh is a governance model that strikes the right balance between autonomy and oversight—allowing teams to produce while safeguarding the integrity and value of the organization's data ecosystem.
Ready to embrace data mesh?
Contact us for expert guidance and tailored solutions!

What others have also read


In the ever-evolving landscape of data management, investing in platforms and navigating migrations between them is a recurring theme in many data strategies. How can we ensure that these investments remain relevant and can evolve over time, avoiding endless migration projects? The answer lies in embracing ‘Composability’ - a key principle for designing robust, future-proof data (mesh) platforms. Is there a silver bullet we can buy off-the-shelf? The data-solution market is flooded with data vendor tools positioning themselves as the platform for everything, as the all-in-one silver bullet. It's important to know that there is no silver bullet. While opting for a single off-the-shelf platform might seem like a quick and easy solution at first, it can lead to problems down the line. These monolithic off-the-shelf platforms often end up inflexible to support all use cases, not customizable enough, and eventually become outdated.This results in big complicated migration projects to the next silver bullet platform, and organizations ending up with multiple all-in-one platforms, causing disruptions in day-to-day operations and hindering overall progress. Flexibility is key to your data mesh platform architecture A complete data platform must address numerous aspects: data storage, query engines, security, data access, discovery, observability, governance, developer experience, automation, a marketplace, data quality, etc. Some vendors claim their all-in-one data solution can tackle all of these. However, typically such a platform excels in certain aspects, but falls short in others. For example, a platform might offer a high-end query engine, but lack depth in features of the data marketplace included in their solution. To future-proof your platform, it must incorporate the best tools for each aspect and evolve as new technologies emerge. 
Today's cutting-edge solutions can be outdated tomorrow, so flexibility and evolvability are essential for your data mesh platform architecture. Embrace composability: Engineer your future Rather than locking into one single tool, aim to build a platform with composability at its core. Picture a platform where different technologies and tools can be seamlessly integrated, replaced, or evolved, with an integrated and automated self-service experience on top. A platform that is both generic at its core and flexible enough to accommodate the ever-changing landscape of data solutions and requirements. A platform with a long-term return on investment by allowing you to expand capabilities incrementally, avoiding costly, large-scale migrations. Composability enables you to continually adapt your platform capabilities by adding new technologies under the umbrella of one stable core platform layer. Two key ingredients of composability Building blocks: These are the individual components that make up your platform. Interoperability: All building blocks must work together seamlessly to create a cohesive system. An ecosystem of building blocks When building composable data platforms, the key lies in sourcing the right building blocks. But where do we get these? Traditional monolithic data platforms aim to solve all problems in one package, but this stifles the flexibility that composability demands. Instead, vendors should focus on decomposing these platforms into specialized, cost-effective components that excel at addressing specific challenges. By offering targeted solutions as building blocks, they empower organizations to assemble a data platform tailored to their unique needs. In addition to vendor solutions, open-source data technologies also offer a wealth of building blocks. It should be possible to combine both vendor-specific and open-source tools into a data platform tailored to your needs. 
This approach enhances agility, fosters innovation, and allows for continuous evolution by integrating the latest and most relevant technologies. Standardization as glue between building blocks To create a truly composable ecosystem, the building blocks must be able to work together, i.e. interoperability. This is where standards come into play, enabling seamless integration between data platform building blocks. Standardization ensures that different tools can operate in harmony, offering a flexible, interoperable platform. Imagine a standard for data access management that allows seamless integration across various components. It would enable an access management building block to list data products and grant access uniformly. Simultaneously, it would allow data storage and serving building blocks to integrate their data and permission models, ensuring that any access management solution can be effortlessly composed with them. This creates a flexible ecosystem where data access is consistently managed across different systems. The discovery of data products in a catalog or marketplace can be greatly enhanced by adopting a standard specification for data products. With this standard, each data product can be made discoverable in a generic way. When data catalogs or marketplaces adopt this standard, it provides the flexibility to choose and integrate any catalog or marketplace building block into your platform, fostering a more adaptable and interoperable data ecosystem. A data contract standard allows data products to specify their quality checks, SLOs, and SLAs in a generic format, enabling smooth integration of data quality tools with any data product. It enables you to combine the best solutions for ensuring data reliability across different platforms. Widely accepted standards are key to ensuring interoperability through agreed-upon APIs, SPIs, contracts, and plugin mechanisms. In essence, standards act as the glue that binds a composable data ecosystem. 
A strong belief in evolutionary architectures At ACA Group, we firmly believe in evolutionary architectures and platform engineering, principles that seamlessly extend to data mesh platforms. It's not about locking yourself into a rigid structure but creating an ecosystem that can evolve, staying at the forefront of innovation. That’s where composability comes in. Do you want a data platform that not only meets your current needs but also paves the way for the challenges and opportunities of tomorrow? Let’s engineer it together Ready to learn more about composability in data mesh solutions? {% module_block module "widget_f1f5c870-47cf-4a61-9810-b273e8d58226" %}{% module_attribute "buttons" is_json="true" %}{% raw %}[{"appearance":{"link_color":"light","primary_color":"primary","secondary_color":"primary","tertiary_color":"light","tertiary_icon_accent_color":"dark","tertiary_text_color":"dark","variant":"primary"},"content":{"arrow":"right","icon":{"alt":null,"height":null,"loading":"disabled","size_type":null,"src":"","width":null},"tertiary_icon":{"alt":null,"height":null,"loading":"disabled","size_type":null,"src":"","width":null},"text":"Contact us now!"},"target":{"link":{"no_follow":false,"open_in_new_tab":false,"rel":"","sponsored":false,"url":{"content_id":230950468795,"href":"https://25145356.hs-sites-eu1.com/en/contact","href_with_scheme":null,"type":"CONTENT"},"user_generated_content":false}},"type":"normal"}]{% endraw %}{% end_module_attribute %}{% module_attribute "child_css" is_json="true" %}{% raw %}{}{% endraw %}{% end_module_attribute %}{% module_attribute "css" is_json="true" %}{% raw %}{}{% endraw %}{% end_module_attribute %}{% module_attribute "definition_id" is_json="true" %}{% raw %}null{% endraw %}{% end_module_attribute %}{% module_attribute "field_types" is_json="true" %}{% raw %}{"buttons":"group","styles":"group"}{% endraw %}{% end_module_attribute %}{% module_attribute "isJsModule" is_json="true" %}{% raw %}true{% endraw %}{% 
end_module_attribute %}{% module_attribute "label" is_json="true" %}{% raw %}null{% endraw %}{% end_module_attribute %}{% module_attribute "module_id" is_json="true" %}{% raw %}201493994716{% endraw %}{% end_module_attribute %}{% module_attribute "path" is_json="true" %}{% raw %}"@projects/aca-group-project/aca-group-app/components/modules/ButtonGroup"{% endraw %}{% end_module_attribute %}{% module_attribute "schema_version" is_json="true" %}{% raw %}2{% endraw %}{% end_module_attribute %}{% module_attribute "smart_objects" is_json="true" %}{% raw %}null{% endraw %}{% end_module_attribute %}{% module_attribute "smart_type" is_json="true" %}{% raw %}"NOT_SMART"{% endraw %}{% end_module_attribute %}{% module_attribute "tag" is_json="true" %}{% raw %}"module"{% endraw %}{% end_module_attribute %}{% module_attribute "type" is_json="true" %}{% raw %}"module"{% endraw %}{% end_module_attribute %}{% module_attribute "wrap_field_tag" is_json="true" %}{% raw %}"div"{% endraw %}{% end_module_attribute %}{% end_module_block %}
Read more

You may well be familiar with the term ‘data mesh’. It is one of those buzzwords to do with data that have been doing the rounds for some time now. Even though data mesh has the potential to bring a lot of value for an organization in quite a few situations, we should not stare ourselves blind on all the fancy terminology. If you are looking to develop a proper data strategy, you do well to start off by asking yourselves the following questions: what is the challenge we are seeking to tackle with data? And how can a solution contribute to achieving our business goals? There is certainly nothing new about organizations using data, but we have come a long way. Initially, companies gathered data from various systems in a data warehouse. The drawback being that the data management was handled by a central team and the turnaround time of reports was likely to seriously run up. Moreover, these data engineers needed to have a solid understanding of the entire business. Over the years that followed, the rise of social media meant the sheer amount of data positively mushroomed, which in turn led to the term Big Data. As a result, tools were developed to analyse huge data volumes, with the focus increasingly shifting towards self-service. The latter trend now means that the business itself is increasingly better able to handle data under their own steam. Which in turn brings yet another new challenge: as is often the case, we are unable to dissociate technology from the processes at the company or from the people that use these data. Are these people ready to start using data? Do they have the right skills and have you thought about the kind of skills you will be needing tomorrow? What are the company’s goals and how can employees contribute towards achieving them? The human aspect is a crucial component of any potent data strategy. How to make the difference with data? 
In practice, the truth is that, when it comes to their data strategies, a lot of companies have not progressed from where they were a few years ago. Needless to say, this is hardly a robust foundation to move on to the next step. So let’s hone in on some of the key elements in any data strategy: Data need to incite action: it is not enough to just compare a few numbers; a high-quality report leads to a decision or should at the very least make it clear which kind of action is required. Sharing is caring: if you do have data anyway, why not share them? Not just with your own in-house departments, but also with the outside world. If you manage to make data available again to the customer there is a genuine competitive advantage to be had. Visualise: data are often collected in poorly organised tables without proper layout. Studies show the human brain struggles to read these kinds of tables. Visualising data (using GeoMapping for instance) may see you arrive at insights you had not previously thought of. Connect data sets: in the case of data sets, at all times 1+1 needs to equal 3. If you are measuring the efficacy of a marketing campaign, for example, do not just look at the number of clicks. The real added value resides in correlating the data you have with data about the business, such as (increased) sales figures. Make data transparent: be clear about your business goals and KPIs, so everybody in the organization is able to use the data and, in doing so, contribute to meeting a benchmark. Train people: make sure your people understand how to use technology, but also how data are able to simplify their duties and how data contribute to achieving the company goals. Which problem are you seeking to resolve with data? Once you have got the foundations right, we can work up a roadmap. No solution should ever set out from the data themselves, but at all times needs to be linked to a challenge or a goal. 
This is why ACA Group always organises a workshop first in order to establish what the customer’s goals are. Based on the outcome of this workshop, we come up with concrete problem definition, which sets us on the right track to find a solution for each situation. The integration of data sets will gain even greater importance in the near future, in amongst other things as part of sustainability reporting. In order to prepare and guide companies as best as possible, over the course of this year, we will be digging deeper into some important terminologies, methods and challenges around data with a series of blogs. If in the meantime, are you keen to find out exactly what ‘Data Mesh’ entails, and why this could be rewarding for your organization? {% module_block module "widget_1aee89e6-fefb-47ef-92d6-45fc3014a2b0" %}{% module_attribute "child_css" is_json="true" %}{% raw %}{}{% endraw %}{% end_module_attribute %}{% module_attribute "css" is_json="true" %}{% raw %}{}{% endraw %}{% end_module_attribute %}{% module_attribute "definition_id" is_json="true" %}{% raw %}null{% endraw %}{% end_module_attribute %}{% module_attribute "field_types" is_json="true" %}{% raw %}{"buttons":"group","styles":"group"}{% endraw %}{% end_module_attribute %}{% module_attribute "isJsModule" is_json="true" %}{% raw %}true{% endraw %}{% end_module_attribute %}{% module_attribute "label" is_json="true" %}{% raw %}null{% endraw %}{% end_module_attribute %}{% module_attribute "module_id" is_json="true" %}{% raw %}201493994716{% endraw %}{% end_module_attribute %}{% module_attribute "path" is_json="true" %}{% raw %}"@projects/aca-group-project/aca-group-app/components/modules/ButtonGroup"{% endraw %}{% end_module_attribute %}{% module_attribute "schema_version" is_json="true" %}{% raw %}2{% endraw %}{% end_module_attribute %}{% module_attribute "smart_objects" is_json="true" %}{% raw %}null{% endraw %}{% end_module_attribute %}{% module_attribute "smart_type" is_json="true" %}{% raw 
%}"NOT_SMART"{% endraw %}{% end_module_attribute %}{% module_attribute "tag" is_json="true" %}{% raw %}"module"{% endraw %}{% end_module_attribute %}{% module_attribute "type" is_json="true" %}{% raw %}"module"{% endraw %}{% end_module_attribute %}{% module_attribute "wrap_field_tag" is_json="true" %}{% raw %}"div"{% endraw %}{% end_module_attribute %}{% end_module_block %}
Read more

In recent years, the exponential growth of data has led to an increasing demand for more effective ways to manage it. Building a data-driven business remains one of the top strategic goals of many business stakeholders. And while it may seem logical for companies to embrace the idea of being data-driven, it’s far more difficult to execute on that idea. Data Mesh and Data Lakes are two important concepts in the world of data architectures that can work together to provide a flexible and scalable approach to data management. Data Lakes have already proven to be a popular solution, but a newer approach, Data Mesh, is gaining attention. This blog will dive into the two concepts and explore how they can complement each other . Data Lakes A data lake is a large and central storage repository that holds massive amounts of data, from various sources, and in various data formats. It can store structured, semi-structured, and unstructured data (e.g. images). Think of it as a huge pool of water, where you can store all sorts of data, such as customer data, transaction data, social media feeds, images, videos and more. It is a cost-effective and accessible solution for companies dealing with large data volumes and various data formats . Additionally, data lakes allow teams to work with raw data , without the need for extensive preprocessing or normalization. Data Mesh Data Mesh is a relatively new concept that takes a decentralized approach to data management. It treats data as a product and is managed by autonomous teams that are responsible for a particular domain. Data Mesh advocates that data should be owned and managed by the people who understand it best - the domain experts - and should be treated as a product. It means that each team is responsible for the data quality, reliability and accessibility of data within its domain. 
This creates a more scalable and flexible approach to data management, where teams can make decisions about their data independently, without requiring intervention from a centralized data team. How can data lake technology be used in a data mesh approach? In short, Data Mesh is an architecture where data is owned and managed by individual product teams, creating a decentralized approach to data management. A data lake is a technology that provides a centralized storage solution, allowing teams to store and manage large amounts of data without worrying about data structure or format. Decentralization in Data Mesh is about taking ownership of sharing data as products in a decentralized way. It’s not about abandoning centralized storage solutions, such as Data Lakes, but about using them in a way that adheres to the principles of Data Mesh. Data Mesh is all about defining and managing Data Products as a building block to make data easily accessible and reusable for various use cases. Each ‘Data Product’ should be able to provide its data in multiple ways through different output ports . An output port is aimed at making data natively accessible for a specific use case. Example use cases are analytics and reporting, machine learning, real-time processing, etc. As such, multiple types of output ports need corresponding data technologies that enable a specific access mode. One technology that can support a Data Mesh architecture is a data lake. The data in an output port for a data product can be stored in a data lake . This type of output port then receives all the benefits offered by data lake technology. In a Data Mesh architecture, each data product gets its own segment in the data lake (e.g. an S3 Bucket). This segment acts as the output port for the data product, where the team responsible for the data product can write their data to the lake. 
By segmenting the data lake in this way, teams can manage and secure their own data without conflicting with other teams. Decentralized ownership thus becomes possible, even when using a more centralized storage technology.

While a data lake is an important technology for supporting a Data Mesh architecture, it may not be the ideal solution for every use case. Using a data lake as the only data storage technology may limit the flexibility of the Data Mesh platform, as it provides just one type of storage. For business intelligence and reporting, for example, a data warehouse with tabular storage may be more suitable. In other cases, time series databases or graph databases are a better option because of the type of data we want to make natively reusable. To make the Data Mesh platform more flexible, it should provide the capability to plug in different types of data storage technology, each serving as a different type of output port. This way, each data product can expose multiple output ports, backed by different storage technologies, geared towards specific data usage patterns.

We have noticed that cloud vendors frequently recommend implementing a Data Mesh solution using one of their existing data lake services. Typically, their approach involves defining security boundaries to separate segments within these services, which can be owned by different domain teams to create various data products. However, the reference architectures they provide incorporate only one storage technology, namely their own data lake technology. Consequently, the resulting Data Mesh platform is less adaptable and tied to a single technology. What is lacking is an explicit 'Data Product' abstraction that goes beyond merely enforcing security boundaries and allows for the integration of various data storage technologies and solutions.

Conclusion
Data management is a critical component of any organization.
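What such an explicit 'Data Product' abstraction with pluggable output ports could look like is sketched below. The class and port names are our own illustrative assumptions, and the in-memory "storage technologies" merely stand in for a real lake segment and warehouse table:

```python
from abc import ABC, abstractmethod

class OutputPort(ABC):
    """One access mode of a data product, backed by a specific storage technology."""
    @abstractmethod
    def publish(self, records: list) -> None: ...
    @abstractmethod
    def read(self) -> list: ...

class LakePort(OutputPort):
    """Object-style storage for raw/bulk access (a data lake segment)."""
    def __init__(self):
        self.objects = []
    def publish(self, records):
        self.objects.append(records)          # append a batch, schema-free
    def read(self):
        return [r for batch in self.objects for r in batch]

class TabularPort(OutputPort):
    """Tabular storage for BI and reporting (a warehouse-style table)."""
    def __init__(self, columns):
        self.columns = columns
        self.rows = []
    def publish(self, records):
        # Project each record onto the declared columns, as rows.
        self.rows.extend(tuple(r[c] for c in self.columns) for r in records)
    def read(self):
        return self.rows

class DataProduct:
    """Explicit abstraction: one owned dataset, many pluggable output ports."""
    def __init__(self, name):
        self.name = name
        self.ports = {}
    def add_port(self, port_name, port: OutputPort):
        self.ports[port_name] = port
    def publish(self, records):
        for port in self.ports.values():      # same data, every access mode
            port.publish(records)

orders = DataProduct("orders")
orders.add_port("lake", LakePort())
orders.add_port("reporting", TabularPort(columns=["order_id", "total"]))
orders.publish([{"order_id": 1, "total": 99.5}])
```

The design choice this illustrates: the data product, not the storage service, is the unit of ownership, and adding a new usage pattern (say, a time series port) means plugging in another `OutputPort` rather than re-architecting the platform.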
Various technologies and approaches are available: data lakes, data warehouses, data vaults, time series databases, graph databases, and more. They all have their unique strengths and limitations. Ultimately, a successful Data Mesh architecture provides the flexibility to share and reuse data with the right technology for the right use case. While a data lake is a powerful tool for managing raw data, it may not be the best solution for all types of data usage. By considering different types of data storage technologies, teams can choose the solution that best meets their specific needs and optimize their data management workflows. By using data products in a Data Mesh, teams can create a flexible and scalable architecture that adapts to changing data management needs.

Want to find out more about Data Mesh or Data Lakes? Discover data mesh.
Want to dive deeper into this topic?
Get in touch with our experts today. They are happy to help!

