

Did you miss KubeCon/CloudNativeCon 2021 last week, or didn’t have time to go through all the sessions? We’ve got your back! These are our highlights and takeaways from this year’s digital event on everything related to Cloud Native and Kubernetes.
Multi-Cloud and Multi-Cluster with CNCF’s Kuma on Kubernetes and VMs – Marco Palladino, Kong
This keynote was the first session our team members followed during ServiceMeshCon 2021. Even though it could be considered a sales pitch for the Kuma project, it gave a good idea of the current challenges of implementing service meshes.
A lot of companies – including us – have successfully installed service mesh add-ons on their Kubernetes clusters. Now that we know what’s possible with these service meshes, we want to be able to use them everywhere: think multiple clusters, on-premise and in the cloud, but perhaps also virtual machines. In the end, the goal is to have a workload and schedule it somewhere on our infrastructure, without necessarily needing to know exactly where it will land. Patterns like spreading the load 20% on VMs, 30% on on-premise Kubernetes clusters and 50% on AWS EKS clusters become a possibility.
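A weighted split like the one above could be expressed in Kuma as a `TrafficRoute` policy. The sketch below is our own rough illustration, not taken from the talk: the service names and zone tags are hypothetical, and the exact schema may differ between Kuma versions.

```yaml
type: TrafficRoute
mesh: default
name: backend-split
sources:
  - match:
      kuma.io/service: frontend
destinations:
  - match:
      kuma.io/service: backend
conf:
  split:
    - weight: 20
      destination:
        kuma.io/service: backend
        kuma.io/zone: vm-zone        # workloads running on VMs
    - weight: 30
      destination:
        kuma.io/service: backend
        kuma.io/zone: onprem-zone    # on-premise Kubernetes cluster
    - weight: 50
      destination:
        kuma.io/service: backend
        kuma.io/zone: eks-zone       # AWS EKS cluster
```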
This keynote was a good introduction on what was explained more in-depth throughout the day.
“Extend All The Things!”: Cloud Provider Edition – Joe Betz, Google
One of the powerful features of Kubernetes is extensibility: the possibility to add functionality to the cluster, making it more powerful and adjusting it to your needs. There are many different ways to achieve this. In this session, Joe explained multiple extensibility features.
You can create your own controllers and Custom Resource Definitions, use Mutating and Validating WebHooks, write an API that extends the existing API, or plug in a specific CSI (Container Storage Interface), CNI (Container Network Interface) or CRI (Container Runtime Interface) implementation. There’s also a new way to use specific registries based on different patterns. If you want to know more about any of these, I suggest watching a recording of this talk.
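As a small illustration of the CRD route (our own example, not from the talk): a minimal Custom Resource Definition registers a new type with the API server, after which you can create and manage objects of that type like any built-in resource. The `Widget` type, group and field names here are made up.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com      # must be <plural>.<group>
spec:
  group: example.com             # hypothetical API group
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:            # hypothetical field, validated by the API server
                  type: integer
```

After applying this, `kubectl get widgets` works just like it does for built-in resources; a custom controller would then act on these objects.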
Apart from adding functionality, Joe also explained the shift from in-tree cloud providers to out-of-tree cloud providers. This means that add-ons for cloud providers like AWS and GCE will no longer be part of the Kubernetes core, but separate plugins. This way, the core is smaller and easier to maintain, since the cloud provider add-on has fewer dependencies on other code within the core and a dedicated release cycle. From Kubernetes 1.24 onwards (early 2022), the in-tree cloud providers will no longer be available.
How DoD Uses K8s and Flux to Achieve Compliance and Deployment Consistency – Michael Medellin & Gordon Tillman, Department of Defense
The DoD, just like any other software company, needs an infrastructure to deploy their applications on. They, like many others, have been transitioning to Kubernetes these last few years. But of course, the DoD is not your ordinary software company. They are a monolithic entity in charge of the entire armed forces of the US, with its own compliance and regulatory challenges.
In this presentation, Michael and Gordon talk about how they do it and what tools and practices they use to ship secure, reliable, and resilient software to their globally distributed user base.
First and foremost is Git, which is the single source of truth for all their infrastructure. Flux runs on top of this Git repository and makes sure no changes are made to the clusters without going through the entire deployment pipeline, by only applying commits that have been signed. Finally, Cluster API is used to manage all the clusters and keep an overview of them.
There’s also the deployment of new clusters. Once again, Git is the single source of truth here. First, Terraform deploys the underlying networking components, bastion instances, and endpoints to securely connect to resources outside the cluster. Next up, custom resources are created based on the output of Terraform and committed to Git. After this point, we come back to Flux, which makes sure the commits are signed and deployed.
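Commit signature verification of this kind can be expressed in Flux v2 through a `GitRepository` source. This is a hedged sketch based on our understanding of the Flux v2 API, not configuration shown in the talk; the repository URL and secret name are placeholders.

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/infrastructure   # placeholder repository
  ref:
    branch: main
  verify:
    mode: head                # verify the PGP signature of the HEAD commit
    secretRef:
      name: pgp-public-keys   # secret holding the trusted public keys
```

With `verify` set, Flux refuses to apply commits that are not signed by one of the trusted keys, so unsigned changes never reach the cluster.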
This talk is a must for anyone interested in how you can tackle deployment and management of secure, reliable, and resilient infrastructure in an environment filled with compliance and regulatory challenges. You can find the talk’s presentation slides here.
Contour, a High Performance Multitenant Ingress Controller for Kubernetes – Steve Sloka, VMware
This session was about Contour (GitHub link), an open source Kubernetes ingress controller like NGINX. Contour provides a control plane for the Envoy edge and service proxy. It supports dynamic configuration updates, TLS termination and passthrough, and multiple load-balancing algorithms.
The session started with some new features the Contour team implemented in the latest version of Contour. The most interesting one is rate limiting: you can decide how much traffic is allowed to reach certain services. It’s definitely useful against cyberattacks like DDoS. The team also added Gateway API support to Contour.
After the theoretical part of the session, Steve showed us how to configure Global and Local rate limiting using a ConfigMap.
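For reference, local rate limiting in Contour can also be configured directly on an `HTTPProxy` resource. The sketch below assumes Contour’s `rateLimitPolicy` field and is our own example rather than the exact configuration from the demo; the hostname and backend service are placeholders.

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: rate-limited-app
spec:
  virtualhost:
    fqdn: app.example.com      # placeholder hostname
    rateLimitPolicy:
      local:
        requests: 100          # allow 100 requests...
        unit: minute           # ...per minute
        burst: 20              # plus a burst of 20 extra requests
  routes:
    - services:
        - name: app            # placeholder backend service
          port: 80
```

Requests above the limit are rejected by the Envoy proxy before they ever reach the backend service.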
Introduction and Deep Dive into Containerd – Kohei Tokunaga & Akihiro Suda, NTT Corporation
This talk gave an overview of Containerd and its recent updates, as well as how it’s being used by Kubernetes, Docker and other container-based systems. Basically, you learn how various products such as Docker use Containerd to provide container services. Kubernetes itself now interacts directly with Containerd, whereas older K8s setups still went through Dockerd.
The talk also took a deep dive into how to leverage Containerd by extending and customizing it for your use case, with low-level plugins like remote snapshotters, as well as by implementing your own Containerd client. An interesting extension to Containerd is a snapshotter plugin that allows a container to lazily pull an image and already start up without waiting for the entire image contents to be available locally.
Upcoming features and recent discussions in the Containerd community were also covered. Useful updates in Containerd 1.5 include the addition of the zstd compression algorithm, which allows for faster compression and decompression, OCIcrypt decryption by default, and nerdctl (contaiNERD ctl) as a non-core subproject. Future features will include filesystem quotas and CRI support for user namespaces, so that Kubernetes pods can run as a user different from the daemon user.
You can find the presentation slides here.
GitOps Con opening keynotes – Cornelia Davis, Weaveworks
GitOps Con was one of the co-located events leading up to KubeCon/CloudNativeCon Europe 2021. In the opening keynotes, Cornelia Davis talked about Git as an interface to operations rather than being just a store. Git can be used to represent the desired state, whereas a K8s cluster represents the actual state.
GitOps can enable DevOps teams to release more frequently, reduce lead time and operate their applications more effectively. This is achieved by using familiar tooling (Git) and allowing platform teams to focus on security, compliance (Git log), resilience and cost management.
An interesting approach with regards to security was working with a pull model for continuous delivery (CD). By using an operator inside the cluster that updates deployments based on the desired state in Git, there’s no need for a central CD component. This in turn enhances overall security by not having a single component with access to multiple clusters.
In the cncf/podtato-head GitHub repo, you can find a demo project showcasing cloud-native application delivery use cases with a range of different tools.
You can find the opening keynotes and the other talks from GitOps Con on the GitOps Working Group YouTube channel.
Hacking into Kubernetes Security – Ellen Körbes, Tilt & Tabitha Sable, Datadog
Kubernetes is a cool and fun place to run your containers. But is it safe? In this talk, Ellen and Tabitha demonstrated how important it is to secure your Kubernetes cluster by adding RBAC, admission control and network policies, and by patching vulnerabilities.
This talk was more like a live action scene than just reading PowerPoint slides. First, they showed how important it is to configure RBAC in your cluster. Without RBAC, or with half-configured RBAC, it’s easy to access someone else’s work or namespace – and if someone inside the cluster can access other namespaces, chances are someone outside the cluster can too. Just using RBAC is not enough though: you also need to configure network policies, so you can manage your network flow by blocking or allowing traffic between pods and/or namespaces.
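As a concrete starting point (our own example, not from the talk), a default-deny NetworkPolicy blocks all ingress traffic to pods in a namespace, after which you explicitly allow only the flows you actually need. The namespace name is hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a      # hypothetical namespace
spec:
  podSelector: {}        # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress            # no ingress rules are defined, so all ingress is denied
```

From there, additional NetworkPolicies can open up specific pod-to-pod or namespace-to-namespace traffic.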
One of the most important things to do is to subscribe to the Kubernetes CVE list. By subscribing, you’re always notified when a new vulnerability has been found, which helps you patch your cluster before anyone tries to exploit it.
But what do you do if, despite all efforts, you have been hacked? First of all: stay calm and inform your admin. Together, check what’s been changed in your cluster using audit logging or other tools, and write everything down. Then fix or patch any open vulnerabilities and backdoors. Finally, report the incident to the police.
