

Disclaimer: This content was not created with AI ;-)
From April 1st to April 4th, a part of our Cloud team was present at KubeCon + CloudNativeCon Europe 2025 in London. In this blog, they're sharing their personal and unfiltered recap of what it's like to attend KubeCon, the biggest conference of the Cloud Native Computing Foundation!
A Day in the Life of a KubeCon + CloudNativeCon Attendee
DAY 0
Our journey started at Brussels South Rail Station, where we (Peter, Bregt and Jonas) hopped on the Eurostar to London. After arriving, we took a quick Underground ride to our hotel, aka our home base for the next few days. Once we had dropped off our bags, we met up with Johan, who had arrived a day earlier and was already exploring the beautiful city of London.

DAY 1
We kicked off our day with breakfast at the hotel, followed by a short Underground trip. As we stepped out at Custom House station, a red carpet led us straight to the KubeCon venue at ExCeL London.
One thing was immediately clear: We were not alone. This year’s turnout was massive!

This edition of KubeCon was extra special, marking the 10-year anniversary of the Cloud Native Computing Foundation (CNCF). We still remember the first KubeCon Bregt attended back in 2017 in Berlin, with just 1,500 attendees. At the time, only about 15 of them were running Kubernetes in production, and ACA Group was proudly among them!
The number of attendees announced during one of the keynotes this year, however, was a whopping 12,500. This event is a different cup of tea (pun intended)!

Kubernetes and Cloud Native technologies have become the foundation for many modern solutions.

From the keynotes and agenda, it was clear that this year’s hot topics were AI and Machine Learning, especially how they can be used to build new tools and solve complex problems.
We also noticed a strong presence of booths focused on FinOps and cost optimization, which will be featured in our upcoming blog posts (alongside plenty of other topics too!).
Key Takeaways from the Keynotes:
- AI and ML are reshaping how we use our hardware. These workloads demand not just a lot of resources but specific resources. Gone are the days of assuming endless availability. We'll need to plan ahead and schedule resources smartly as we simply can't afford to have them always on standby.
- AI and ML aren’t plug-and-play. You can’t just activate them and expect miracles. They need solid datasets that define what a “normal” situation looks like. Feeding data consistently is key to detecting anomalies. The advice? Start small and gradually scale up ML use for things like automated anomaly detection.
After the keynotes, we dove into the regular sessions. One standout was a talk from Spotify, where they showcased a tool called AiKA. Built on LLMs, it consolidates documentation scattered across version control, Slack, Google Drive, Confluence, and more, merging it into one centralized, searchable, and easy-to-maintain platform.

We can’t summarize every talk here (there were a lot), but stay tuned ‘cause we’ll cover several of them in more detail in our upcoming KubeCon blog series.
After the sessions, we explored the city a bit. Pro tip: don’t wait too long for a drink; most bars stop serving after 10 PM. Bummer! On the bright side, it meant we were well-rested and ready for another big day!
DAY 2
Day 2 opened with a series of keynotes on AI, LLMs, Platform Engineering, and Gateway API. We saw how commercial products are being built on CNCF open-source tools to make platform engineering more accessible. At ACA, we’re also big believers in platform engineering and are actively building out our platform fundamentals to support and empower our developers.
We also heard from some big-name companies sharing how they’ve leveraged Cloud Native and Open Source tools for their architecture. It was nice to see how closely their setups align with ACA’s approach. We're clearly on the right track!

Then, a bit of bad luck: Johan and Bregt’s session was cancelled. The speaker didn’t show up. Seems someone did find a bar serving drinks after 10 PM ;p
No worries, though; we seized the opportunity, took the stage ourselves, and entertained the crowd. Needless to say, the crowd went wild, and a big round of applause followed.

Thankfully, all the other sessions we planned to attend went ahead as expected, and we walked away with tons of valuable insights.
In a session about Istio, we learned about the introduction of ambient mode, which will bring major changes. We’ll share more on that in an upcoming technical blog!

During a talk from Etsy on Prometheus, we were reminded that throwing more resources at a problem isn’t always the answer. Sometimes, optimizing your setup (smaller data blocks, disabling compaction, smarter GC tuning) is the smarter (and cheaper) move.
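To make that concrete, here is a rough sketch of what such Prometheus tuning can look like. This is our own illustration, not Etsy's actual configuration: the block-duration options are advanced TSDB flags (commonly used in Thanos-style setups), and the values shown are purely illustrative.

```shell
# GOGC=50 makes Go's garbage collector run more often,
# trading some CPU for a lower memory footprint.
# Setting the min and max block duration to the same value
# effectively disables local compaction, keeping TSDB blocks small.
GOGC=50 prometheus \
  --config.file=prometheus.yml \
  --storage.tsdb.min-block-duration=2h \
  --storage.tsdb.max-block-duration=2h
```

As always with this kind of tuning: measure first, then tweak one knob at a time.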
We also paid a visit to the booths of some of our bigger partners (like AWS and Azure), whose platforms and tools we use today.

Afterwards we wrapped up the day with more city exploration and some team-building fun. 🙂

DAY 3
Many of us at ACA Group are Formula 1 fans, so it was pretty cool to learn that Red Bull Racing uses Cloud Native tooling to collect real-time data, which is essential for fast, data-driven decisions about pit stops and tire changes in changing weather.

After the keynotes, we split up for different sessions:
- Bregt attended a talk on VolumeGroupSnapshots, a new way to ensure consistent backups for workloads using multiple persistent volumes.
- Johan joined a session on Crossplane v2, highlighting how it now allows you to abstract not just infrastructure but applications, too — a big step forward for platform engineering.
- Jonas explored tools for securing Kubernetes clusters across the entire pipeline, from build to runtime.
- Peter attended a talk on how K8sGPT is transforming enterprise operations, using AI in the cluster to identify problems before they impact users. The goal: prevent more issues with fewer people.
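To give an idea of the VolumeGroupSnapshots feature mentioned above, a manifest could look roughly like this. A sketch only: the `groupsnapshot.storage.k8s.io` API was still maturing at the time of writing, it requires a CSI driver with group-snapshot support, and all names and labels below are hypothetical.

```yaml
apiVersion: groupsnapshot.storage.k8s.io/v1beta1
kind: VolumeGroupSnapshot
metadata:
  name: my-app-snapshot          # hypothetical name
  namespace: my-app
spec:
  volumeGroupSnapshotClassName: csi-group-snapclass  # hypothetical class
  source:
    # Snapshot every PVC carrying this label as one consistent group,
    # instead of snapshotting each volume independently.
    selector:
      matchLabels:
        app: my-app
```

The key win is consistency: all matching persistent volumes are snapshotted together, so a multi-volume workload can be restored to a single point in time.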

After wrapping up those final sessions, it was time to pack our bags and head home.
Three days of inspiration, innovation, and connection: We’re heading back with fresh ideas and new energy to put them into practice.
Goodbye KubeCon, see you next year in Amsterdam!


Want to dive deeper into this topic?
Get in touch with our experts today. They are happy to help!


