

In Belgium, there’s a saying, “Every Belgian is born with a brick in their stomach,” reflecting the nation's deep-rooted drive to build homes that last. But this principle doesn’t just apply to houses; it’s equally true for your cloud infrastructure.
Without a strong foundation, your Azure workloads risk becoming unstable, inefficient, or even vulnerable. That’s where Microsoft’s Well-Architected Framework (WAF) comes in. Read on to discover how this framework’s five pillars can turn your cloud workload into a structure built to last.
What is the Well-Architected Framework (WAF)?
The Well-Architected Framework helps you build secure, high-performing, resilient, and efficient infrastructure and applications on Azure. By following its guidelines, you ensure that your cloud infrastructure meets the recommendations and standards set by Microsoft.
This framework consists of five pillars:

Reliability, Security, Cost Optimization, Operational Excellence, and Performance Efficiency.
(Image source: https://learn.microsoft.com/en-us/azure/well-architected/)
Each of these pillars offers valuable guidance and best practices, but they also involve tradeoffs. Every decision - whether financial or technical - comes with its own set of considerations. For example, while securing workloads is important, it comes with added costs and potential technical implications.
Let’s take a closer look at each of the five pillars of the Well Architected Framework.
Reliability
Failures are inevitable, no matter how much we wish otherwise. That’s why designing systems with failure in mind is crucial. A workload must survive failures while continuing to deliver services without disruption.
This requires more than just designing your workload for failures; it also means setting realistic recovery targets and conducting sufficient testing.
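Designing for failure often starts with small, testable patterns in the application itself. Below is a minimal Python sketch of a retry with exponential backoff and jitter, one common way to make a workload tolerate transient failures; the `call_dependency` function and the retry limits are illustrative assumptions, not prescribed by the framework.

```python
import random
import time

class TransientError(Exception):
    """Illustrative stand-in for a transient failure (throttling, brief outage)."""

def call_dependency() -> str:
    # Hypothetical downstream call; replace with a real SDK or HTTP call.
    if random.random() < 0.3:  # simulate an occasional transient failure
        raise TransientError("temporary failure")
    return "ok"

def call_with_retry(max_attempts: int = 5, base_delay: float = 0.5) -> str:
    """Retry a flaky call with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call_dependency()
        except TransientError:
            if attempt == max_attempts:
                raise  # give up and let the caller handle the failure
            # Exponential backoff with jitter to avoid synchronized retries.
            delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
            time.sleep(delay)
    return "unreachable"

if __name__ == "__main__":
    print(call_with_retry())
```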
First, you need to identify your reliability targets. After all, making everything geo-redundant is great - but it comes at a cost for the business. Once your reliability targets are identified, the next step is to map the required redundancy level to the appropriate Azure technology. Considering only the compute parts of an application is not enough; you also need to take the supporting components into account, such as the network, data, and other infrastructure tiers.
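To make reliability targets concrete, it helps to calculate the composite availability of all tiers, not just compute. The sketch below uses illustrative SLA figures (not official Azure numbers) to show how serially dependent components lower the overall target, and how a redundant pair raises it.

```python
# Illustrative availability figures only; look up the actual SLAs of the
# services you use before setting real reliability targets.
tiers = {
    "application gateway": 0.9995,
    "compute (single instance)": 0.999,
    "database": 0.9995,
}

# Components in series: the workload is only up when every tier is up.
composite = 1.0
for name, availability in tiers.items():
    composite *= availability
print(f"Composite availability (series): {composite:.4%}")

# Redundancy example: two independent compute instances behind a load balancer.
single = tiers["compute (single instance)"]
redundant_pair = 1 - (1 - single) ** 2  # fails only if both instances fail
print(f"Compute availability with a redundant pair: {redundant_pair:.6%}")

# Translate a target into an error budget (allowed downtime per 30 days).
target = 0.999
minutes_per_month = 30 * 24 * 60
print(f"Allowed downtime at {target:.1%}: {(1 - target) * minutes_per_month:.0f} minutes/month")
```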
Deep dive into the Microsoft checklist: https://learn.microsoft.com/en-us/azure/well-architected/reliability/checklist
Security
All workloads should be built around the zero-trust approach. A secure workload is resilient to attacks while ensuring confidentiality, integrity and availability. Just like availability, confidentiality and integrity come with multiple options - each with its own impact on cost and complexity. For instance, how important is Encryption in Use, i.e. protecting data while it is being processed? Answering this question can significantly shape your solution.
Security isn’t a one-layer fix; it must be applied at every level. While it’s standard practice to route all incoming (ingress) traffic through a firewall, the same should be done for outgoing (egress) traffic: ensure all egress is inspected, approved, and routed through the firewall.
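One practical way to enforce this is to force-tunnel egress through the firewall with a user-defined route, and then verify that route as part of your checks. The sketch below is a standalone illustration that validates route definitions represented as plain dictionaries (using the same property names a user-defined route exposes in ARM); it does not call the Azure SDK, and the firewall IP is an assumption.

```python
from typing import Iterable

FIREWALL_IP = "10.0.1.4"  # assumed private IP of the firewall

def egress_forced_through_firewall(routes: Iterable[dict]) -> bool:
    """Return True if a default route sends all traffic to the firewall appliance."""
    for route in routes:
        if (
            route.get("addressPrefix") == "0.0.0.0/0"
            and route.get("nextHopType") == "VirtualAppliance"
            and route.get("nextHopIpAddress") == FIREWALL_IP
        ):
            return True
    return False

# Example route table content, e.g. exported from an ARM/Bicep deployment.
example_routes = [
    {"name": "default-to-firewall",
     "addressPrefix": "0.0.0.0/0",
     "nextHopType": "VirtualAppliance",
     "nextHopIpAddress": FIREWALL_IP},
]

print(egress_forced_through_firewall(example_routes))  # True
```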

There are additional ways to secure communication within your Azure environment. Private Endpoints are essential for secure communication between application components and offer better protection than Service Endpoints, which are cheaper but carry the risk of data exfiltration.
Don’t overlook Azure DDoS Protection either. DDoS attacks can target any publicly accessible endpoint, potentially causing downtime and forcing your environment to scale up and out. This not only slows down your workload but also leaves you with a large consumption bill.

The comprehensive checklist from Microsoft is available here: https://learn.microsoft.com/en-us/azure/well-architected/security/checklist
Cost Optimization
Any architecture design and workload is driven by business goals. This pillar is not about cutting costs to the bare minimum; it’s about finding the most cost-effective solution.
This pillar aligns closely with the FinOps framework, which we have covered here. A good first step is to create a cost model to estimate the initial cost, run rates, and ongoing costs.
This model provides a baseline against which to compare the actual cost of the environment on a daily basis. The work doesn’t stop there; it’s essential to set up anomaly alerts that notify you when the expected baseline is exceeded.
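As a sketch of that idea, the snippet below compares daily actual spend against a modeled baseline and flags days that exceed a tolerance. The figures and the 20% threshold are illustrative assumptions; in practice you would feed it data from Azure Cost Management and wire the alert to your notification channel.

```python
DAILY_BASELINE_EUR = 120.0   # expected daily run rate from the cost model (assumed)
TOLERANCE = 0.20             # alert when spend exceeds the baseline by more than 20%

# Illustrative daily costs; in practice, pull these from Azure Cost Management.
actual_daily_costs = {
    "2024-05-01": 118.40,
    "2024-05-02": 123.90,
    "2024-05-03": 171.25,  # anomaly: an unexpected scale-out or forgotten resource
}

def find_cost_anomalies(costs: dict[str, float], baseline: float, tolerance: float) -> list[str]:
    """Return the days on which spend exceeded the baseline by more than the tolerance."""
    limit = baseline * (1 + tolerance)
    return [day for day, cost in costs.items() if cost > limit]

for day in find_cost_anomalies(actual_daily_costs, DAILY_BASELINE_EUR, TOLERANCE):
    # Replace print with your alerting channel (email, Teams, ticket, ...).
    print(f"Cost anomaly on {day}: spend exceeded the expected baseline")
```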

It’s also important to optimize the scaling of your application. Can your resources scale both out and up? Which approach is the most cost-effective and delivers the best results? Certain applications hit a performance plateau when scaling up (adding CPU and memory): perhaps the application can only handle a little extra load once you reach 256 GB of memory. In that case, it may be more beneficial to scale out by adding more instances rather than simply scaling up with additional compute power.
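The trade-off can be made tangible with a quick back-of-the-envelope comparison. The sketch below uses made-up throughput and pricing figures (not Azure pricing) to compare the cost per handled request of scaling up, with diminishing returns, versus scaling out.

```python
# Illustrative figures only: requests/second each option can sustain and its hourly cost.
scale_up_options = [
    {"sku": "8 vCPU / 64 GB",   "rps": 900,  "eur_per_hour": 0.80},
    {"sku": "16 vCPU / 128 GB", "rps": 1500, "eur_per_hour": 1.60},
    {"sku": "32 vCPU / 256 GB", "rps": 1700, "eur_per_hour": 3.20},  # plateau: little extra load handled
]

scale_out_option = {"sku": "4 x (8 vCPU / 64 GB)", "rps": 4 * 900, "eur_per_hour": 4 * 0.80}

def cost_per_million_requests(option: dict) -> float:
    requests_per_hour = option["rps"] * 3600
    return option["eur_per_hour"] / requests_per_hour * 1_000_000

for option in scale_up_options + [scale_out_option]:
    print(f"{option['sku']:>22}: EUR {cost_per_million_requests(option):.3f} per million requests")
```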
The comprehensive checklist from Microsoft is available here: https://learn.microsoft.com/en-us/azure/well-architected/cost-optimization/checklist
Operational Excellence
At the core of Operational Excellence are DevOps practices, which define the operating procedures for development practices, observability and release management. One key goal of this pillar is to reduce the chance of human error.
It’s important to approach implementations and workloads with a long-term vision. Take the distinction between ClickOps and DevOps as an example. While it's tempting to quickly set up resources through the Azure Portal (ClickOps), this builds up technical debt. Instead, adopting a DevOps approach helps you build a more sustainable, efficient, and automated workflow for the future.

Read our in-depth blog about moving from ClickOps to DevOps for more details.
Always use a standardized Infrastructure as Code (IaC) approach. Formalize the way you handle operational tasks with clear documentation, checklists, and automation. This ties into what we covered under Reliability, but focuses on processes. Make sure you have a strategy to address unexpected rollout issues and recover swiftly.
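As a sketch of formalizing that recovery strategy, the snippet below models a rollout that checks workload health after deployment and rolls back automatically when the check fails. The deployment and health-check functions are placeholders; in a real pipeline they would call your IaC tooling and monitoring endpoints.

```python
def deploy(version: str) -> None:
    # Placeholder: trigger your IaC pipeline (Bicep/Terraform, CI/CD) here.
    print(f"Deploying {version} ...")

def is_healthy(version: str) -> bool:
    # Placeholder: query health probes, smoke tests, or monitoring alerts.
    return version != "v2.1.0"  # pretend this particular release fails its checks

def rollout_with_rollback(new_version: str, last_known_good: str) -> str:
    """Deploy a new version and fall back to the last known good one if health checks fail."""
    deploy(new_version)
    if is_healthy(new_version):
        print(f"{new_version} is healthy, rollout complete")
        return new_version
    print(f"{new_version} failed health checks, rolling back to {last_known_good}")
    deploy(last_known_good)
    return last_known_good

if __name__ == "__main__":
    rollout_with_rollback("v2.1.0", last_known_good="v2.0.3")
```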
The comprehensive checklist from Microsoft is available here: https://learn.microsoft.com/en-us/azure/well-architected/operational-excellence/checklist
Performance Efficiency
This pillar is all about your workload’s ability to adapt to changing demand. Your application must be able to handle increased load without compromising the user experience.
Think about the thresholds you use to scale your application. How quickly can Azure resources scale up or out? Consider traffic patterns: there may be high load during certain hours, such as in the morning. Perhaps you can schedule scaling in advance to ensure resources are available when needed.
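As a simple illustration, the sketch below combines a time-of-day schedule (pre-warming capacity for the morning peak) with a metric-based rule. The schedule, thresholds, and instance counts are assumptions you would tune to your own traffic pattern, and in Azure you would typically express them as autoscale profiles rather than application code.

```python
from datetime import time

# Assumed scheduled minimum capacity: pre-warm extra instances for the morning peak.
SCHEDULE = [
    (time(7, 0), time(10, 0), 6),   # morning peak: at least 6 instances
    (time(10, 0), time(18, 0), 3),  # business hours: at least 3 instances
]
DEFAULT_MIN, MAX_INSTANCES = 2, 10
CPU_SCALE_OUT_THRESHOLD = 70  # percent

def desired_instances(now: time, current_instances: int, avg_cpu_percent: float) -> int:
    """Combine scheduled minimums with a CPU-based scale-out rule."""
    minimum = DEFAULT_MIN
    for start, end, scheduled_min in SCHEDULE:
        if start <= now < end:
            minimum = scheduled_min
            break
    desired = current_instances
    if avg_cpu_percent > CPU_SCALE_OUT_THRESHOLD:
        desired += 1  # scale out one instance at a time
    return max(minimum, min(desired, MAX_INSTANCES))

print(desired_instances(time(8, 30), current_instances=3, avg_cpu_percent=82))  # 6
```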
The overall recommendation is to make performance a priority at every stage of the design. As you move through each phase, you should regularly test and measure performance. This will provide valuable insights, helping you identify and address potential issues before they become problems.
The checklist from Microsoft is valuable: https://learn.microsoft.com/en-us/azure/well-architected/performance-efficiency/checklist
Start optimizing your Azure workload today!
Our team of experts is ready to assist you in applying the Well Architected Framework to your Azure environment. Let’s ensure your workload is secure, cost-optimized, and ready for the future.