

CloudBrew has always been a highlight on our calendar, but the 2025 edition felt different. Perhaps it was the timing: just a month earlier, in November 2025, the Azure Belgium Central region finally opened its doors. ACA has always operated from the heart of Europe, so seeing this massive national milestone go live just before the conference added a layer of excitement.
We are happy to see that photobombing is still a concept in 2025 😉
It was fitting, then, that our partners from Microsoft, Wouter Gevaert and Jan Gezels, kicked things off right after the opening keynote.
Azure Belgium Central and the Cloud-Native Path

The opening session wasn't just a victory lap for the new datacenter; it was a technical roadmap. Even though Azure Belgium Central (ABC) is open for business, work continues behind the scenes to bring more Azure services online.
The biggest shift is how we think about reliability. We are moving away from the old model of paired regions and leaning hard into modern multi-region High Availability (HA) and Disaster Recovery (DR) planning. With multiple Availability Zones now in our backyard, we can finally offer true local data residency to our compliance-sensitive clients without sacrificing fault tolerance.
The biggest takeaways?
- Check availability correctly: There is currently only one reliable way to see which services are available in the ABC region: the Azure Pricing Calculator. Use this tool above all others to verify service availability.
- Latency & interconnectivity: The ABC region has one of the lowest-latency connections to West Europe (Amsterdam). Interconnecting resources and workloads with that region is not an issue, latency-wise.
- Handling missing services: Services such as Azure Databricks and Microsoft Fabric are not yet available in ABC. Technically, this is manageable: you can deploy your primary workload in ABC while interconnecting at the network level to instances of Databricks or Fabric in another region.
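For a programmatic cross-check of the Pricing Calculator, the public Azure Retail Prices API can be filtered by region. Here is a minimal Python sketch; note that the ARM region name "belgiumcentral" is our assumption, so verify it before relying on it:

```python
from urllib.parse import quote

# Build a filtered query against the public Azure Retail Prices API.
# The region name "belgiumcentral" is assumed, not confirmed.
def prices_url(arm_region: str) -> str:
    base = "https://prices.azure.com/api/retail/prices"
    return base + "?$filter=" + quote(f"armRegionName eq '{arm_region}'")

url = prices_url("belgiumcentral")
print(url)
# Fetch this URL (e.g. with urllib.request.urlopen) and inspect the
# 'serviceName' field of each item to see which services carry SKUs
# priced in the region.
```

This complements, rather than replaces, the Pricing Calculator check recommended in the session.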

Healthy apps, happy users: smarter monitoring with Azure Health Models
With the infrastructure foundation set, we headed to Massimo Crippa's session, "Healthy apps, happy users: smarter monitoring with Azure Health Models".
Massimo did a great job explaining the future of monitoring. We are all painfully aware that monitoring alerts are incredibly hard to get right. You constantly have to balance which alerts to configure and what thresholds to set.
- Too low: Triggering an alert if CPU usage hits 70% often causes a flood of notifications. This leads to alert fatigue, increasing the risk of missing critical issues.
- Too high: If the threshold is too high, the environment may already be impacted by the time you are notified.
Additionally, traditional alerting has two major flaws:
- The flood: If one Azure resource fails, your ticketing system is usually flooded with multiple alerts (availability, latency, CPU) for the same root cause.
- False criticals: Non-critical alerts are often flagged as critical. For example, if one node behind a load balancer goes down but others handle the traffic, the user experience isn't impacted, yet the alert screams "Critical."
Azure Health Models isn't a silver bullet, but it is a massive improvement. The main takeaway is that alerts are bundled per system: if an App Service spikes in CPU, you receive one consolidated alert rather than a flood of symptom alerts.
On top of that, you can distinguish between "Critical" (user impact) and "Degraded" (backend issue, no user impact) statuses.
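As a rough mental model (not the actual Azure Health Models implementation), the roll-up of many signals into one consolidated, user-impact-aware status can be sketched like this:

```python
# Toy health roll-up: many signals in, one status out.
# Signal names and the user_facing flag are illustrative inventions.
HEALTHY, DEGRADED, CRITICAL = 0, 1, 2

def roll_up(signals):
    """signals: list of (name, state, user_facing) tuples for one system."""
    worst = HEALTHY
    for name, state, user_facing in signals:
        if state == CRITICAL and not user_facing:
            # Backend-only failure: users are unaffected, so downgrade.
            state = DEGRADED
        worst = max(worst, state)
    return worst  # one consolidated status instead of a flood of alerts

app_service = [
    ("cpu", CRITICAL, False),        # CPU spike, absorbed by other nodes
    ("latency", HEALTHY, True),
    ("availability", HEALTHY, True),
]
print(roll_up(app_service))  # 1 (DEGRADED): ops are notified, no false critical
```

The key design choice mirrors the session's point: severity is derived from user impact, not from any single raw metric crossing a threshold.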

To summarize, Azure Health Models provide these benefits:
- Bring together business and technical viewpoints of a health model
- Quick detection of a system's overall health status
- Elimination of alert fatigue
Vibecoding: From coder to operator
We promise we didn't "vibewrite" this blog post, but we certainly enjoyed Sakari Nahi’s session: "Let's vibecode something."
This was arguably the most forward-looking talk of the event. Sakari highlighted a strategic shift that many of us are already feeling: AI is changing the engineer’s role from a simple writer of code to an "operator of AI."
The entire session was a live demonstration of how you can vibecode your way to a fully working multiplayer game.

We learned how to leverage OpenSpec and why it is so powerful. In short, it's a specialized development toolkit designed for AI-assisted coding. It focuses on Spec-Driven Development (SDD): a workflow where you clearly define what you want before an AI agent like Claude or Cursor attempts to write the code.
Why is OpenSpec powerful?
- AI agents often hallucinate or miss requirements when instructions are buried in a long chat history. OpenSpec forces a "Source of Truth" file that the AI must follow, reducing errors and "lazy" coding.
- Great for Existing Code: Unlike some AI starter kits that only work for new projects (0→1), OpenSpec is designed to manage changes in existing complex codebases (1→N). It tracks "deltas" (changes) rather than just generating whole files.
- Agent agnostic: It is not a standalone AI; it is a protocol. You can use it with Cursor, Claude Code, GitHub Copilot, or any LLM that can read files. It doesn't require its own API keys.
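To make the workflow concrete, here is a hypothetical spec fragment in the spirit of SDD. The file path, headings, and wording are illustrative, not OpenSpec's exact conventions:

```markdown
<!-- openspec/changes/add-multiplayer/proposal.md (illustrative) -->
## Why
Players want to compete in real time; the current game is single-player only.

## What Changes
- Add a lobby where up to 4 players can join a session
- Sync player positions over WebSockets at a fixed tick rate

## Acceptance
- Two browsers joining the same lobby see each other's ships move
- A disconnecting player's ship is removed within 2 seconds
```

The agent reads a file like this as its source of truth before writing any code, instead of inferring requirements from a long chat history.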
We all know that AI can be a bit unpredictable at times. While the multiplayer game did not fully work at the end of the session, we were inspired.
So inspired, in fact, that later the same evening we fired up Antigravity from Google at home, following the OpenSpec principles we had learned during the session.
We watched in awe as Antigravity created a complete top-down space shooter from a couple of prompts and iterations. Within 15 minutes we had a local Space Invaders-style game with graphics, levels, a high score and all the bells and whistles.
That’s the power of OpenSpec and vibe coding!
Day Two: The Governance Reality Check
If day one was about the future, day two was about the nitty-gritty reality of managing it all.
We started with a provocatively titled session: "Azure Tags are Dead. Meet Their Weird Cousin: Service Groups," by Stijn Depril and Tim Verbist.
Let’s be honest, Azure tagging was supposed to bring order to chaos. In reality, inconsistent, missing, or unenforced tags often turn FinOps into a nightmare. Stijn and Tim introduced Service Groups as the "missing link."
The live demo was an eye-opener, showing how Service Groups can provide the cost visibility and accountability that traditional tagging constantly struggles to deliver. A Service Group can be thought of as a virtual container that can hold Azure resources from anywhere within the tenant, across subscriptions.
This allows different teams in your organization to have different views. For example, a Product Owner may have one Service Group containing all the resources for a specific application, regardless of whether they live in a production or development subscription.
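To see why this beats tagging for FinOps, here is a toy Python model (resource names and costs are invented) of a cost roll-up across subscriptions via a single, centrally declared group:

```python
# Toy model: a Service Group is a virtual container holding resources
# from any subscription, unlike tags that must be set (correctly) on
# every individual resource. All data below is invented for illustration.
resources = [
    {"id": "vm-web",  "subscription": "prod", "cost": 120.0},
    {"id": "db-main", "subscription": "prod", "cost": 300.0},
    {"id": "vm-test", "subscription": "dev",  "cost": 40.0},
]

# Membership is declared once, centrally - not scattered across tags.
service_groups = {"webshop": ["vm-web", "db-main", "vm-test"]}

def group_cost(group: str) -> float:
    members = set(service_groups[group])
    return sum(r["cost"] for r in resources if r["id"] in members)

print(group_cost("webshop"))  # 460.0, rolled up across both subscriptions
```

A missing or misspelled tag silently drops a resource from a cost report; central membership makes the omission explicit and fixable in one place.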

While Service Groups are still in preview, the session suggested they might eventually replace tags, and perhaps even Management Groups as the primary way to assign policies and group resources. At ACA, we aren't convinced they will replace everything just yet, but the potential is undeniable.
Level Up Security
The afternoon took an interesting turn with Roland Guijt's session, "Level Up Your Security: OpenID Connect/OAuth update".
With the release of OAuth 2.1, Roland highlighted three major protocol improvements:
- PKCE (Proof Key for Code Exchange) required for authorization code flow
- It acts like a matching digital ticket stub. It ensures the app that started your login is the same one finishing it, making it nearly impossible for a hacker to intercept the process mid-way.
- Implicit and resource owner password grant omitted
- It retires outdated login methods that allowed apps to see your actual password or were easy to trick. This ensures apps never touch your credentials directly, protecting you from leaks and phishing.
- Refresh tokens for public clients must either be sender-constrained or for one-time use
- It puts a "self-destruct" or "lock" on the digital keys that keep you logged in. If a thief manages to steal one of these keys, they can’t use it to access your account because it will either be invalid or expire immediately.
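The PKCE mechanism from the first bullet can be sketched in a few lines of Python: the client derives a challenge from a one-off secret, and the authorization server later checks that whoever redeems the authorization code still holds that secret.

```python
import base64, hashlib, secrets

def b64url(data: bytes) -> str:
    # Base64url without padding, as PKCE (RFC 7636) requires.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Client: create a one-off secret before redirecting to the login page.
code_verifier = b64url(secrets.token_bytes(32))
code_challenge = b64url(hashlib.sha256(code_verifier.encode()).digest())
# The challenge travels with the authorization request; the verifier
# never leaves the client until the code is redeemed.

# Authorization server: recompute and compare when the code is redeemed.
def verify_pkce(verifier: str, challenge: str) -> bool:
    return b64url(hashlib.sha256(verifier.encode()).digest()) == challenge

print(verify_pkce(code_verifier, code_challenge))   # True
print(verify_pkce("attacker-guess", code_challenge))  # False
```

An attacker who intercepts the authorization code mid-flow cannot redeem it, because they never saw the verifier: this is the "matching ticket stub" from the bullet above.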

Conclusion: Full speed ahead to 2026
CloudBrew 2025 is in the books, and it left us with plenty to do and consider.
With Azure Belgium Central officially open, data sovereignty is no longer a buzzword; it's a baseline. Security is mandatory, AI is becoming integrated into every workflow, and governance is finally getting the tools it needs.
Together with our customers and partners, ACA is ready to turn these insights into action. By the time CloudBrew 2026 rolls around, we expect the landscape to have shifted again, and we’ll be ready for it.
Until next time!

Curious how we can help you get more out of your cloud solutions?
Reach out to us!