Written by
Marnick Vanloffelt
8 MAY 2025

The world of data analysis is changing fast. AI tools like Copilot are automating tasks that used to take us hours, which is exciting! But it also means we need to evolve our skills to stay ahead of the curve. Instead of spending time on repetitive tasks, data analysts can now focus on the bigger picture: strategy, problem-solving, and truly understanding the business. This blog explores the key skills data analysts need to thrive in this new AI-powered environment.

The data analyst’s new focus: from repetitive tasks to strategy

Imagine having more time to focus on what really matters: understanding the business, solving complex problems, and making strategic decisions. That's the opportunity AI provides. To maximize Copilot’s potential, data analysts need to shift their focus from manual tasks to work that requires deep business knowledge and critical thinking.

A crucial part of this shift is collaborating closely with stakeholders. Data analysts need to understand their challenges, define the right questions, and ensure their insights truly drive decision-making.

Key skills data analysts need when working with AI

1. Advanced data modeling and metadata management

Why it matters: With AI tools like Copilot handling much of the front-end report creation, the quality of insights will increasingly depend on the robustness of the underlying data model. Data analysts should invest time in refining their data modeling and metadata management skills.

Actionable steps:
- Ensure that data models are clean, scalable, and well-documented. Be honest: how often have you filled out the “Description” field in your Power BI data model? How often have you used the “Synonyms” field? Our guess is: not all that often. Ironically, these fields will now be crucial in your pursuit of qualitative responses from Copilot…
- Organize metadata to improve discoverability, ensuring Copilot (or other AI tools) can leverage the right data to generate insights.
- Build a deep understanding of how to structure data to enable AI to create actionable, accurate insights. Take a good hard look at your data model and how it is built. Define what can be improved based on best practices, and then apply them systematically.

2. Data governance and quality assurance

Why it matters: Copilot can only produce reliable outputs with high-quality data. Data analysts will need to focus on ensuring data consistency, reliability, and governance.

Actionable steps:
- Implement and maintain best practices for data governance.
- Use clear naming conventions, predefined measures, and logical data structures that make it easier for Copilot to generate actionable insights.

3. Business acumen and strategic insight generation

Why it matters: AI tools lack contextual understanding, so data analysts must bridge this gap. Developing a strong grasp of business operations, industry trends, and strategic objectives allows analysts to create insights that are both relevant and impactful.

Actionable steps:
- Invest in learning about your organization’s goals and strategic challenges. The more clearly you can understand and document these goals and challenges, the better you will be able to translate them into relevant insights.
- Regularly engage with business leaders to understand the context behind the data, which in turn helps translate findings into actionable strategies.

4. Communication and storytelling skills

Why it matters: Translating technical insights into stories that resonate with business stakeholders is crucial. Storytelling bridges the gap between data and decision-makers.

Actionable steps:
- Become an expert at framing insights. Work on presenting data in narrative formats that highlight the “why” and “how” behind them.
- Focus on how the data aligns with the company’s goals, offering clear recommendations and visualizations that stakeholders can easily grasp.
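As a quick self-check for the first skill above, you could script a small coverage report over your model's metadata. Here is a minimal sketch in Python; the dictionary layout and field names are illustrative stand-ins for a real semantic model export (for example from a model definition file), not an actual Power BI API:

```python
# Sketch: flag columns in a semantic model that still lack a
# "Description" or "Synonyms" entry. The `model` dict is a stand-in
# for a real metadata export; its structure is an assumption.

def metadata_gaps(model):
    """Return (table, column, missing_field) tuples for undocumented columns."""
    gaps = []
    for table in model["tables"]:
        for column in table["columns"]:
            for field in ("description", "synonyms"):
                if not column.get(field):
                    gaps.append((table["name"], column["name"], field))
    return gaps

model = {
    "tables": [
        {"name": "Sales", "columns": [
            {"name": "OrderDate",
             "description": "Date the order was placed",
             "synonyms": ["purchase date", "order day"]},
            {"name": "Amt", "description": "", "synonyms": []},
        ]},
    ],
}

for table, column, field in metadata_gaps(model):
    print(f"{table}[{column}] is missing: {field}")  # flags Sales[Amt] twice
```

Running a report like this regularly makes "fill in the Description and Synonyms fields" a measurable habit rather than a good intention.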
How to implement these skills: practical actions for data analysts

Developing data modeling and metadata management skills

With AI tools like Copilot in the mix, the quality of insights depends significantly on data models. Data analysts should dedicate time to refining their data modeling skills, focusing on:
- Organizing and documenting data: Pay attention to metadata fields like descriptions and synonyms, which will help AI generate more accurate outputs.
- Data structure optimization: Ensure your data structure is scalable, clean, and flexible. This will streamline Copilot’s ability to work with the data seamlessly.

Engaging with business stakeholders

AI-generated insights are only as valuable as their alignment with business goals. Data analysts must regularly engage with stakeholders to:
- Define clear objectives: Discuss goals and pain points with stakeholders to set a clear direction for AI analysis.
- Gather feedback: Regular feedback helps adjust AI-generated insights to better meet business needs, ensuring outputs are practical and actionable.

Conclusion: the future of data analysis is here

AI tools like Copilot are transforming data analysis, and it's an exciting time to be in this field! By focusing on strategic thinking, communication, and strong data foundations, data analysts can not only adapt but thrive. The ability to connect data insights to business context, combined with excellent communication and storytelling, will define the most successful data analysts in the years to come. By investing in these skills, data analysts can stay at the forefront of data-driven innovation.

For more insights on how Copilot is shaping data analysis, read the article “How Copilot in Power BI is Transforming Data Analysis”. 🚀

Ready to empower your data team with advanced AI skills? Contact our experts to support your transformation.

Copilot and Power BI
8 MAY 2025

In May 2024, Microsoft’s announcement of Copilot for Power BI signaled a major shift in data analysis. This AI-powered tool lets users perform complex data tasks using conversational prompts, transforming how data is modeled, analyzed, and presented. But what does this mean for businesses, IT managers, and data analysts? Find out how Copilot integrates into Power BI, the range of tasks it can handle, and its broader implications for data analysts. While Copilot simplifies routine tasks, it also demands new skills and perspectives to fully realize its potential. Let’s dive deeper into this.

What is Copilot for Power BI?

Incorporating AI into Power BI

Copilot introduces powerful AI-driven tools that automate and streamline tasks previously requiring advanced technical knowledge. Here’s a breakdown of Copilot’s main functions in Power BI:
- Summarize data models: Provides overviews of underlying semantic models.
- Suggest content for reports: Uses prompts to recommend relevant visuals and layouts.
- Generate visuals and report pages: Automates the creation of report elements.
- Answer data model questions: Responds to data queries within the model context.
- Write DAX queries: Generates DAX expressions, reducing the need for deep DAX expertise.
- Enhance Q&A with synonyms and descriptions: Improves model usability by enabling natural language processing.

These features enable quicker, easier data exploration and report creation. However, there’s more beneath the surface for data analysts to consider.

Copilot: a junior-level assistant with limitations

If we look at Copilot as a colleague, think of it as a junior-level assistant, capable of helping with tasks such as generating reports, dashboards, and queries in Power BI. However, Copilot operates without domain-specific knowledge and tends to take a literal approach to tasks.
It can efficiently follow instructions and generate outputs based on well-constructed prompts, but lacks the deep business context that human analysts bring to the table. Copilot doesn’t possess the ability to understand the nuances of a business problem or the industry-specific intricacies that often influence data insights. Yet 😉.

While Copilot can automate some of the more routine and mechanical aspects of data analysis, such as building visuals or applying basic transformations, it still requires guidance and oversight. It’s up to the analyst to ensure that Copilot’s outputs are relevant, meaningful, and aligned with the organization's goals. Rather than replacing data analysts, Copilot elevates their role, pushing them to focus on higher-level tasks that involve a high degree of critical thinking.

Adopting Copilot for Power BI: costs and challenges

Despite its promise, Copilot comes with a significant entry barrier for many organizations. As of now, using Copilot in Power BI requires at minimum either an F64 Fabric capacity or a P1 Premium capacity, which is rather costly. Smaller organizations or those with limited budgets may not have immediate access, limiting widespread adoption at the moment.

For organizations that do invest in the necessary infrastructure, Copilot has the potential to speed up certain processes. However, the high cost of entry means that data analysts in these environments will need to demonstrate a clear return on investment. This makes it even more critical for analysts to focus on delivering high-value insights that directly impact business decisions, rather than simply generating reports.

The impact of Copilot on the role of the data analyst

Front-end development of Power BI reports and dashboards used to be a key responsibility of Power BI developers or the data analysts themselves. However, with Copilot, well-constructed prompts can lead to fully functional reports and dashboards, automating much of the manual work.
This means that data analysts can significantly reduce the time spent on technical report building. The process of designing visuals, formatting reports, and creating dashboards will largely be handled by Copilot. While automation will save time, it will reshape the job of the data analyst:

- Shifting focus from dashboards to business value: Data analysts will prioritize delivering actionable insights over building dashboards, ensuring dashboards deliver insights that are actionable and easy for business stakeholders to understand.
- Translating problems into data solutions: Analysts must frame business problems as data questions to leverage Copilot effectively. Strong business acumen and communication skills are essential for collaborating with leaders and ensuring insights address key challenges.
- Building robust and flexible semantic models: Copilot depends on well-structured models, making data modeling and metadata management essential. Analysts must create robust, flexible, and well-documented semantic models that support evolving business needs, focusing on long-term strategies and key metrics.
- Mastering data governance: To maximize Copilot’s value, data analysts have to make sure that the data is clean, reliable, and well-managed. High-quality data and strong metadata management are critical, as Copilot relies on these to generate effective outputs. Consider the list of considerations Microsoft published for datasets being used with Copilot to guide you in the right direction.

Conclusion: is Copilot for Power BI a must-have for your organization?

Microsoft’s Copilot for Power BI is a game-changer, yet it emphasizes the need for analysts to evolve their skills beyond technical tasks. Analysts are being pushed to elevate their work, focusing on insight generation and strategic thinking.
To learn more about what skills will be essential for data analysts in a Copilot-powered environment, read the article on “Essential skills for data analysts in the age of AI”. Curious about the impact of Copilot on your data team or need help with implementing it effectively? Contact us!

data mesh
6 MAY 2025

Data mesh is revolutionizing the way organizations manage data. Unlike traditional centralized models, data mesh uses a decentralized, domain-oriented structure. But how does governance work in such a distributed system?

At ACA Group, we believe data mesh is an answer to the challenge of managing data by focusing on building a decentralized, self-serve data ecosystem. The goal is to embed data-driven innovation within each department or team, making everyone in the organization responsible for creating reusable data that fuels new products and services across departments.

In a data mesh, it is not only the management of ownership and infrastructure that is different. The key to success is transforming data governance itself. Instead of making a centralized IT team responsible for data governance, data mesh distributes the responsibility across different teams. This approach, known as "federated computational governance", ensures active participation from both data-producing and data-consuming teams in crafting and adopting governance policies.

Four pillars of data mesh and their governance challenges

To understand the importance of governance in a data mesh, we need to break down the core principles of a data mesh and how they relate to data governance challenges:

1. Decentralization

In a data mesh, data ownership and responsibility are distributed across different business domains or teams. Each domain becomes a self-contained unit, managing its own data products. This also means that each data product and domain is self-governing, but needs to be interoperable with other data products and domains.

2. Domain-oriented approach

Instead of a monolithic data warehouse, a data mesh is made up of interconnected data products. This implies that each data product might come with its own “local dialect”. The challenge here is how to speak the same language, without speaking the same language.

3. Data as a product

This approach treats data as a product, with each domain creating and maintaining data products that are discoverable, accessible, and reusable. Metadata management becomes an important topic, since metadata is used to discover, access, integrate with, and use the data encapsulated within a data product.

4. Self-serve platform

This engine and control panel empowers data producers and consumers alike. Developer portals, data catalogs, lineage tools, and collaboration spaces facilitate seamless navigation, while automated policy enforcement and regular audits ensure compliance and promote data product quality without manual intervention. Automating governance is a core challenge associated with the self-serve platform.

Now that you have a better understanding of the central building blocks and challenges of data governance in a data mesh, let’s take a closer look at each of these challenges individually.

Federated Governance

A standout feature of data mesh is federated governance. But what does it actually mean? “Federated” refers to the fact that while each domain (and each data product within those domains) has its own autonomy, they come together to hash out a few things that are relevant and valuable for everyone. You might think of it as a parliamentary democracy, where representatives come together to make joint decisions, which then need to be broadly implemented. This cross-domain collaboration means that quite a few teams are going to be involved.

Federated Governance Team

This is a group of domain representatives and experts who collaborate across business units and areas of expertise.
They ensure data quality, compliance, and alignment with organizational goals. They oversee tasks such as:
- Automated data quality assessments
- Data access and privacy management
- Ensuring data products and datasets can be shared and reused

This team defines standardized data governance policies and ensures that data products and datasets can be shared and reused, while safeguarding overall quality. To continue our earlier comparison, the Governance Team is like a “parliament” that discusses and passes “laws”.

Platform Team

This team is essential to automate and enforce the governance policies defined by the Governance Team on the self-serve platform. They ensure that policies can be adopted by data products on a low-effort basis, promoting interoperability and collaboration without introducing unnecessary overhead.

Domain Teams

Aligned with business units, domain teams handle operational data governance within their own domains. Responsibilities include:
- Data mapping and documentation
- Ensuring data quality
- Implementing standards defined by the federated governance team

Importantly, each domain team has the autonomy and resources to execute the standards defined by the federated governance team.

In summary

While local domain teams make decisions specific to their domain, federated data governance ensures global rules are applied to all data products and their interfaces. These rules must ensure a healthy and interoperable ecosystem.

How does federated data governance work?

Let’s start with an important note: federated governance requires a different way of thinking compared to more traditional governance approaches. It is focused on promoting autonomy and interoperability as much as possible, keeping interference by a centralized team to an absolute minimum. Do you want to successfully implement federated data governance in your organization?
Then make sure you establish the following key foundations:

- Culture of ownership: Teams must feel accountable for their data. This requires a high level of maturity in data literacy, and a willingness to invest in training and continuous education on data management and governance best practices.
- Robust data infrastructure: You need to be ready to invest in scalable and flexible data infrastructure that supports decentralized data management.
- Governance framework: You will need a clear governance framework that defines roles, responsibilities, and processes. This framework should be flexible enough to adapt to the needs of different domains while maintaining overall coherence.
- Cross-functional collaboration: Collaboration between IT, data professionals, and business units is essential.

Enterprise ontology: bridging domain-specific language gaps

Each domain can have its own specific lingo, creating challenges when terms differ in definition across teams. To bridge the gaps between domains, we need a solid basis for “translation” and a common understanding of terms. This is where the enterprise ontology comes in.

What is an enterprise ontology?

You can see it as a large, hierarchically structured “dictionary” that links concepts used in different domains to each other based on a common denominator. For example: a sales team and a finance team both use the term “customer”, but each team defines it somewhat differently. The Sales team calls anyone who has received a quote a customer. The Finance team defines a "customer" as someone with a signed contract and invoicing details; others are referred to as “prospects”. Without a shared ontology, combining the data products from these teams would yield inconsistent results, highlighting the need for clarity.
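The customer example above can be made concrete with a small sketch. In this illustrative Python snippet (the team data and field names are invented), both teams' records map to the shared "customer" concept, and the email address serves as the agreed unique identifier that lets you reconcile the two views:

```python
# Sketch: reconciling "customer" across Sales and Finance.
# All data and field names below are invented for illustration.

# Sales calls anyone who has received a quote a customer.
sales_customers = [
    {"email": "ann@example.com", "quoted": True},
    {"email": "bob@example.com", "quoted": True},
]

# Finance only counts people with a signed contract; others are prospects.
finance_customers = [
    {"email": "ann@example.com", "contract_signed": True},
]

# Without a shared ontology, "number of customers" simply disagrees:
assert len(sales_customers) != len(finance_customers)

# With a shared unique identifier (email) agreed in the enterprise
# ontology, the two views can be joined and the difference explained:
sales_ids = {c["email"] for c in sales_customers}
finance_ids = {c["email"] for c in finance_customers}

prospects_only = sales_ids - finance_ids  # quoted, but no signed contract yet
print(sorted(prospects_only))  # → ['bob@example.com']
```

The point is not the set arithmetic but the agreement behind it: once both terms are tagged to the same concept with a shared identifier, the discrepancy becomes explainable rather than inconsistent.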
How an enterprise ontology works

By tagging domain-specific terms to a unified concept (e.g., "customer") in the ontology, teams can reconcile differences and enable cross-domain understanding. To bridge the gaps between domain-specific terms:

- Tag terms to a common ontology: Terms from each domain are linked to a unified concept in the enterprise ontology using tags. For instance, "sales customer" and "finance customer" might both map to a universal "customer" term.
- Leverage unique identifiers: When consulting the ontology, you might discover that the unique identifier across all “customers” is their email address. Finding a unique identifier across terms linked to the same concept is valuable, as it allows you to correlate data related to the same term across domains.

Metadata: enabling prevention, validation, and auditing

Metadata, often described as "data about data", plays a crucial role in federated data governance within a data mesh. It provides the necessary context to make data understandable, accessible, and usable across different domains.

Key roles of metadata in federated data governance

Enhancing data discoverability

Metadata enables users to easily find and understand data across the organization. It includes practical information such as the data source(s), creation date, format, and usage instructions, but also information specifically linked to discoverability, like which enterprise ontology tags are applicable, who the owner is, or associated data products. This makes it easier for teams to locate (and integrate with) relevant data products.

Improving data quality and trust

Metadata includes (or should include) data quality metrics and lineage information, helping teams ensure data accuracy and reliability. It allows users to trace data back to its origin, understand the transformations it has undergone, and assess its quality.
Facilitating compliance and security

Metadata helps maintain compliance with data privacy and security regulations. The data product team can specify who or which roles can access the data and for what purpose, ensuring accountability and transparency. Furthermore, tagging sensitive data elements helps to automatically apply data privacy and masking policies, ensuring regulatory compliance.

Enabling interoperability

Metadata ensures that data from different domains can be integrated and used together. Standardized metadata formats and definitions enable seamless data exchange and interoperability.

Best practices for metadata management in data mesh

In a data mesh, metadata should be managed as close to the source as possible. Each data product team is responsible for carefully authoring and curating the metadata associated with their data product. Exceptions, like the automated addition of data quality metrics from the self-serve platform, can apply, but the data product itself remains the source of truth, and the metadata should be managed as such. In short, metadata should be decentrally managed, but centrally consumable.

Metadata management should be automated as much as reasonably possible and integrated with data governance tools to ensure accuracy and consistency. Key practices include:

- Careful metadata authoring and curation: Use tools that automatically capture and update metadata. Introduce processes and practices that motivate data product owners to take special care when they create and modify the metadata associated with their data product. The data product owner should ensure that the metadata presented to consumers gives a truthful representation of the content of the data product, so these consumers can make an informed decision about the value of the product for their use case.
- Standardization: Implement standardized metadata formats and definitions across all domains (where appropriate) to ensure maximal interoperability and ease of use.
- Automated validation: Define procedures and policies to automatically validate metadata, in order to spot mistakes and inconsistencies early on and prevent error propagation throughout the system. As always, prevention and validation come first, audits second.
- Regular audits: Conduct regular automated audits to ensure metadata accuracy and compliance with governance policies.

The self-serve platform: automating governance

The self-serve platform embodies "federated computational governance". It provides tools and infrastructure that allow both users and creators to independently access and manage data products without relying on a central IT team.

Key features of a self-serve platform

- Empowering domain teams: Self-serve platforms enable domain teams to take ownership of their data. They can create, manage, and use data products independently, fostering a sense of accountability.
- Ensuring compliance: Self-serve platforms integrate governance controls, ensuring that data usage complies with organizational policies and regulations, balancing autonomy with oversight.
- Metadata management: With the right tooling, the self-serve platform can facilitate the careful curation and automated validation of metadata. This eases both integration with the self-serve platform and management of metadata within the individual data products.
- Policy management: Governance policies can be translated into automated processes, which can be enforced through the platform. Automated policy enforcement ensures that data usage complies with internal guidelines and external regulations.
- Monitoring and auditing: Monitoring and auditing capabilities can be used to track data usage and ensure compliance. Regular audits help identify and address any governance issues. Alerting data product or domain teams to these issues and their consequences allows them to address them in their own way and at their own time.
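As an illustration of automated policy enforcement, the platform team could run a validation step in a data product's publishing pipeline. This hedged Python sketch is one way such a check might look; the policy rules, required fields, and metadata shape are assumptions for illustration, not any specific platform's API:

```python
# Sketch: automated metadata validation that a self-serve platform could
# run before a data product is published. The required fields and the
# masking rule are illustrative governance policies, not a real standard.

REQUIRED_FIELDS = ("owner", "description", "ontology_tags", "quality_metrics")

def validate_metadata(product):
    """Return a list of policy violations for one data product's metadata."""
    violations = []
    for field in REQUIRED_FIELDS:
        if not product.get(field):
            violations.append(f"missing required field: {field}")
    # Example policy: sensitive columns must declare a masking rule.
    for col in product.get("columns", []):
        if col.get("sensitive") and not col.get("masking"):
            violations.append(f"sensitive column without masking: {col['name']}")
    return violations

product = {
    "owner": "sales-domain-team",
    "description": "Quotes issued to prospects and customers",
    "ontology_tags": ["customer"],
    "quality_metrics": {"completeness": 0.98},
    "columns": [{"name": "email", "sensitive": True}],  # no masking declared
}

for violation in validate_metadata(product):
    print(violation)  # → sensitive column without masking: email
```

Wiring a check like this into the publishing flow is what turns a governance policy from a document into computational governance: violations surface before publication, and the domain team fixes them in their own way and at their own time.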
Conclusion: striking the balance between autonomy and oversight

Embracing a data mesh architecture requires a different approach to governance. The traditional centralized model of managing data no longer suffices in a world where agility, autonomy, and cross-functional collaboration are paramount. Federated data governance empowers domain teams to take ownership of their data products while ensuring alignment with global organizational standards.

By distributing responsibilities across domain teams, supported by a self-serve platform and strong metadata management practices, organizations can enhance data quality, interoperability, and compliance without adding unnecessary complexity. However, the success of data mesh governance depends on fostering a strong culture of data ownership, building a robust self-service platform, and establishing clear frameworks that promote seamless cross-domain collaboration. That’s a lot of buzzwords for one sentence, but it rings true nonetheless:

- Data ownership holds people accountable for the data they create and maintain, while allowing them to take full control of their data products.
- Strong infrastructure and a self-service platform are needed to facilitate this practice of ownership, giving data product teams the autonomy they need to put their product out there, while also allowing for collaboration and sharing.
- Clear governance frameworks establish what quality looks like and guide data product teams in implementing best practices related to integration, collaboration, and more.

The key to thriving in data mesh is a governance model that strikes the right balance between autonomy and oversight, allowing teams to produce while safeguarding the integrity and value of the organization's data ecosystem.

Ready to embrace data mesh? Contact us for expert guidance and tailored solutions!
