We learn & share

ACA Group Blog

Read more about our thoughts, views, and opinions on various topics, important announcements, useful insights, and advice from our experts.

Featured

8 MAY 2025
Reading time 5 min

In the ever-evolving landscape of data management, investing in platforms and navigating migrations between them is a recurring theme in many data strategies. How can we ensure that these investments remain relevant and can evolve over time, avoiding endless migration projects? The answer lies in embracing ‘composability’, a key principle for designing robust, future-proof data (mesh) platforms.

Is there a silver bullet we can buy off-the-shelf?

The data-solution market is flooded with vendor tools positioning themselves as the platform for everything, as the all-in-one silver bullet. It's important to know that there is no silver bullet. While opting for a single off-the-shelf platform might seem like a quick and easy solution at first, it can lead to problems down the line. These monolithic off-the-shelf platforms often turn out to be too inflexible to support all use cases, not customizable enough, and eventually become outdated. The result is big, complicated migration projects to the next silver-bullet platform, and organizations ending up with multiple all-in-one platforms, causing disruptions in day-to-day operations and hindering overall progress.

Flexibility is key to your data mesh platform architecture

A complete data platform must address numerous aspects: data storage, query engines, security, data access, discovery, observability, governance, developer experience, automation, a marketplace, data quality, and so on. Some vendors claim their all-in-one data solution can tackle all of these. Typically, however, such a platform excels in certain aspects but falls short in others. For example, a platform might offer a high-end query engine, but lack depth in the data marketplace included in the solution. To future-proof your platform, it must incorporate the best tools for each aspect and evolve as new technologies emerge. Today's cutting-edge solutions can be outdated tomorrow, so flexibility and evolvability are essential for your data mesh platform architecture.

Embrace composability: Engineer your future

Rather than locking into one single tool, aim to build a platform with composability at its core. Picture a platform where different technologies and tools can be seamlessly integrated, replaced, or evolved, with an integrated and automated self-service experience on top. A platform that is both generic at its core and flexible enough to accommodate the ever-changing landscape of data solutions and requirements. A platform with a long-term return on investment, because it allows you to expand capabilities incrementally and avoid costly, large-scale migrations. Composability enables you to continually adapt your platform capabilities by adding new technologies under the umbrella of one stable core platform layer.

Two key ingredients of composability

Building blocks: the individual components that make up your platform.
Interoperability: all building blocks must work together seamlessly to create a cohesive system.

An ecosystem of building blocks

When building composable data platforms, the key lies in sourcing the right building blocks. But where do we get these? Traditional monolithic data platforms aim to solve all problems in one package, but this stifles the flexibility that composability demands. Instead, vendors should focus on decomposing these platforms into specialized, cost-effective components that excel at addressing specific challenges.
By offering targeted solutions as building blocks, vendors empower organizations to assemble a data platform tailored to their unique needs. In addition to vendor solutions, open-source data technologies also offer a wealth of building blocks. It should be possible to combine both vendor-specific and open-source tools into a data platform tailored to your needs. This approach enhances agility, fosters innovation, and allows for continuous evolution by integrating the latest and most relevant technologies.

Standardization as glue between building blocks

To create a truly composable ecosystem, the building blocks must be able to work together; in other words, they must be interoperable. This is where standards come into play, enabling seamless integration between data platform building blocks. Standardization ensures that different tools can operate in harmony, offering a flexible, interoperable platform.

Imagine a standard for data access management that allows seamless integration across various components. It would enable an access management building block to list data products and grant access uniformly. At the same time, it would allow data storage and serving building blocks to integrate their data and permission models, ensuring that any access management solution can be effortlessly composed with them. This creates a flexible ecosystem where data access is consistently managed across different systems.

The discovery of data products in a catalog or marketplace can be greatly enhanced by adopting a standard specification for data products. With this standard, each data product can be made discoverable in a generic way. When data catalogs or marketplaces adopt this standard, you have the flexibility to choose and integrate any catalog or marketplace building block into your platform, fostering a more adaptable and interoperable data ecosystem.

A data contract standard allows data products to specify their quality checks, SLOs, and SLAs in a generic format, enabling smooth integration of data quality tools with any data product (a minimal sketch of what such a contract might capture follows at the end of this post). It lets you combine the best solutions for ensuring data reliability across different platforms. Widely accepted standards are key to ensuring interoperability through agreed-upon APIs, SPIs, contracts, and plugin mechanisms. In essence, standards act as the glue that binds a composable data ecosystem.

A strong belief in evolutionary architectures

At ACA Group, we firmly believe in evolutionary architectures and platform engineering, principles that extend seamlessly to data mesh platforms. It's not about locking yourself into a rigid structure, but about creating an ecosystem that can evolve and stay at the forefront of innovation. That's where composability comes in. Do you want a data platform that not only meets your current needs but also paves the way for the challenges and opportunities of tomorrow?

Let's engineer it together

Ready to learn more about composability in data mesh solutions?
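To make the data contract idea above a bit more tangible, here is a minimal, hypothetical Python sketch of what such a contract could capture for a single data product. The field names and the "orders" product are invented for illustration; real specifications in this space define much richer schemas.

```python
from dataclasses import dataclass, field


@dataclass
class QualityCheck:
    name: str
    expression: str  # e.g. a rule evaluated by whichever data quality tool you plug in


@dataclass
class ServiceLevelObjective:
    metric: str      # e.g. "freshness_minutes"
    target: float


@dataclass
class DataContract:
    data_product: str
    owner: str
    output_port: str  # where consumers read the data
    quality_checks: list[QualityCheck] = field(default_factory=list)
    slos: list[ServiceLevelObjective] = field(default_factory=list)


# Invented example: a contract for a fictional "orders" data product.
orders_contract = DataContract(
    data_product="orders",
    owner="sales-domain-team",
    output_port="s3://data-products/orders/v1",
    quality_checks=[QualityCheck("no_null_ids", "order_id IS NOT NULL")],
    slos=[ServiceLevelObjective("freshness_minutes", 60)],
)

print(orders_contract)
```

Because the contract is expressed in a generic, machine-readable form, any data quality or catalog building block that understands the same format can consume it, which is exactly the interoperability the standard is meant to provide.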

Read more

All blog posts

Let's talk!

We'd love to talk to you!

Contact us and we'll get you connected with the expert you deserve!

Istio Service Mesh: What and Why
Reading time 3 min
8 MAY 2025

In the complex world of modern software development, companies are faced with the challenge of seamlessly integrating diverse applications developed and managed by different teams. An invaluable asset in overcoming this challenge is the service mesh. In this blog article, we delve into Istio Service Mesh and explore why investing in a service mesh like Istio is a smart move.

What is a Service Mesh?

A service mesh is a software layer responsible for all communication between applications, referred to as services in this context. It introduces new functionalities to manage the interaction between services, such as monitoring, logging, tracing, and traffic control. A service mesh operates independently of the code of each individual service, enabling it to operate across network boundaries and collaborate with various management systems. Thanks to a service mesh, developers can focus on building application features without worrying about the complexity of the underlying communication infrastructure.

Istio Service Mesh in Practice

Consider managing a large cluster that runs multiple applications developed and maintained by different teams, each with diverse dependencies like Elasticsearch or Kafka. Over time, this results in a complex ecosystem of applications and containers, overseen by various teams. The environment becomes so intricate that administrators find it increasingly difficult to maintain a clear overview. This leads to a series of pertinent questions: What does the architecture look like? Which applications interact with each other? How is the traffic managed?

Moreover, there are specific challenges that must be addressed for each individual application:

Handling login processes
Implementing robust security measures
Managing network traffic directed towards the application
...

A service mesh such as Istio offers a solution to these challenges. Istio acts as a proxy between the various applications (services) in the cluster, with each request passing through a component of Istio.

How Does Istio Service Mesh Work?

Istio introduces a sidecar proxy for each service in the microservices ecosystem. This sidecar proxy manages all incoming and outgoing traffic for the service. Additionally, Istio adds components that handle the incoming and outgoing traffic of the cluster. Istio's control plane enables you to define policies for traffic management, security, and monitoring, which are then applied to these components. For a deeper understanding of Istio Service Mesh functionality, our blog article "Installing Istio Service Mesh: A Comprehensive Step-by-Step Guide" provides a detailed, step-by-step explanation of the installation and use of Istio.

Why Istio Service Mesh?

Traffic Management: Istio enables detailed traffic management, allowing developers to easily route, distribute, and control traffic between different versions of their services (a small sketch of this follows below).
Security: Istio provides a robust security layer with features such as traffic encryption using its own certificates, Role-Based Access Control (RBAC), and capabilities for implementing authentication and authorization policies.
Observability: Through built-in instrumentation, Istio offers deep observability with tools for monitoring, logging, and distributed tracing. This allows IT teams to analyze the performance of services and quickly detect issues.
Simplified Communication: Istio removes the complexity of service communication from application developers, allowing them to focus on building application features.
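Istio is normally configured through Kubernetes custom resources written in YAML. As a rough, hypothetical illustration of the traffic management point above, the sketch below uses the official Kubernetes Python client to apply an Istio VirtualService that splits traffic between two versions of a made-up "reviews" service. The service name, namespace and subsets are placeholders, and a DestinationRule defining the v1/v2 subsets is assumed to exist already.

```python
from kubernetes import client, config

# Assumes local kubeconfig access to a cluster with Istio installed.
config.load_kube_config()
api = client.CustomObjectsApi()

# Hypothetical VirtualService: send 90% of traffic to subset v1 and 10% to v2
# of a fictional "reviews" service in a placeholder "demo" namespace.
virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "reviews", "namespace": "demo"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
            ]
        }],
    },
}

# Create the custom resource; Istio's sidecars then enforce the 90/10 split.
api.create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1beta1",
    namespace="demo",
    plural="virtualservices",
    body=virtual_service,
)
```

In practice you would typically apply the equivalent YAML with kubectl or GitOps tooling; the point here is simply that a weighted route like this is all it takes to shift a controlled fraction of traffic to a new service version.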
Is Istio Suitable for Your Setup?

While the benefits are clear, it is essential to consider whether the additional complexity of Istio aligns with your specific setup. Firstly, a sidecar container is required for each deployed service, potentially leading to undesired memory and CPU overhead. Additionally, your team may lack the specialized knowledge required for Istio. If you are considering the adoption of Istio Service Mesh, seek guidance from specialists with expertise. Feel free to ask our experts for assistance.

More Information about Istio

Istio Service Mesh is a technological game-changer for IT professionals aiming for advanced control, security, and observability in their microservices architecture. Istio simplifies and secures communication between services, allowing IT teams to focus on building reliable and scalable applications. Need quick answers to all your questions about Istio Service Mesh? Contact our experts

Read more
CloudBrew 2023
Reading time 3 min
8 MAY 2025

On December 7 and 8, 2023, several ACA members participated in CloudBrew 2023, an inspiring two-day conference about Microsoft Azure. In the scenery of the former Lamot brewery, visitors had the opportunity to delve into the latest cloud developments and expand their network. With various tracks and fascinating speakers, CloudBrew offered a wealth of information. The intimate setting allowed participants to make direct contact with both local and international experts. In this article we would like to highlight some of the most inspiring talks from this two-day cloud gathering.

Azure Architecture: Choosing wisely

Rik Hepworth, Chief Consulting Officer at Black Marble and Microsoft Azure MVP/RD, used a customer example in which .NET developers were responsible for managing the Azure infrastructure. He engaged the audience in an interactive discussion to choose the best technologies. He further emphasized the importance of a balanced approach, combining new knowledge with existing solutions for effective management and development of the architecture.

From closed platform to Landing Zone with Azure Policy

David de Hoop, Special Agent at Team Rockstars IT, talked about the Azure Enterprise Scale Architecture, a template provided by Microsoft that supports companies in setting up a scalable, secure and manageable cloud infrastructure. The template provides guidance for designing a cloud infrastructure that is customizable to a business's needs. A critical aspect of this architecture is the landing zone, an environment that adheres to design principles and supports all application portfolios. It uses subscriptions to isolate and scale application and platform resources. Azure Policy provides a set of guidelines to open up Azure infrastructure to an enterprise without sacrificing security or management. This gives engineers more freedom in their Azure environment, while security features are automatically enforced at the tenant level and even for application-specific settings. This provides a balanced approach that ensures both flexibility and security, without the need for separate tools or technologies.

Belgium's biggest Azure mistakes I want you to learn from!

During this session, Toon Vanhoutte, Azure Solution Architect and Microsoft Azure MVP, presented the most common errors and human mistakes, based on the experiences of more than 100 Azure engineers. Using valuable practical examples, he not only illustrated the errors themselves, but also offered clear solutions and preventive measures to avoid similar incidents in the future. His valuable insights helped both novice and experienced Azure engineers sharpen their knowledge and optimize their implementations.

Protecting critical ICS SCADA infrastructure with Microsoft Defender

This presentation by Microsoft MVP/RD Maarten Goet focused on the use of Microsoft Defender for ICS SCADA infrastructure in the energy sector. The speaker shared insights on the importance of cybersecurity in this critical sector, and illustrated this with a demo demonstrating the vulnerabilities of such systems. He emphasized the need for proactive security measures and highlighted Microsoft Defender as a powerful tool for protecting ICS SCADA systems.

Using Azure Digital Twin in Manufacturing

Steven De Lausnay, Specialist Lead Data Architecture and IoT Architect, introduced Azure Digital Twin as an advanced technology to create digital replicas of physical environments.
By providing insight into the process behind Azure Digital Twin, he showed how organizations in production environments can leverage this technology. He emphasized the value of Azure Digital Twin for modeling, monitoring and optimizing complex systems. This technology can play a crucial role in improving operational efficiency and making data-driven decisions in various industrial applications.

Turning Azure Platform recommendations into gold

Magnus Mårtensson, CEO of Loftysoft and Microsoft Azure MVP/RD, had the honor of closing CloudBrew 2023 with a compelling summary of the highlights. With his entertaining presentation he offered valuable reflection on the various themes discussed during the event. It was a perfect ending to an extremely successful conference and gave every participant the desire to immediately put the insights gained into practice. We are already looking forward to CloudBrew 2024! 🚀

Read more
aws invent 2021
Reading time 5 min
6 MAY 2025

Like every year, Amazon held its AWS re:Invent 2021 in Las Vegas. While we weren't able to attend in person due to the pandemic, as an AWS Partner we were eager to follow the digital event. Below is a quick rundown of our highlights of the event to give you a summary in case you missed it!

AWS closer to home

AWS will build 30 new ‘Local Zones’ in 2022, including one in our home base: Belgium. AWS Local Zones are a type of infrastructure deployment that places compute, storage, database, and other select AWS services close to large population and industry centers. The Belgian Local Zone should be operational by 2023. Additionally, the possibilities of AWS Outposts have increased. The most important change is that you can now run far more services on your own server delivered by AWS. Quick recap: AWS Outposts is a family of fully managed solutions delivering AWS infrastructure and services to virtually any on-premises or edge location for a consistent hybrid experience. Outposts was previously only available in a 42U Outposts rack configuration. From now on, AWS offers a variety of form factors, including 1U and 2U Outposts servers for when there's less space available. We're very tempted to get one for the office…

AWS EKS Anywhere was previously announced, but is now a reality! With this service, it's possible to set up a Kubernetes cluster on your own infrastructure or infrastructure from your favorite cloud provider, while still managing it through AWS EKS. All the benefits of freedom of choice combined with the unified overview and dashboard of AWS EKS. Who said you can't have your cake and eat it too?

Low-code to regain primary focus

With Amplify Studio, AWS takes the next step in low-code development. Amplify Studio is a fully-fledged low-code generator platform that builds upon the existing Amplify framework. The platform allows users to build applications through drag and drop, with the possibility of adding custom code wherever necessary. Definitely something we'll be looking at on our next Ship-IT Day!

Machine Learning going strong(er)

Ever wanted to start with machine learning, but not quite ready to invest some of your hard-earned money? With SageMaker Studio Lab, AWS announced a free platform that lets users start exploring AI/ML tools without having to register for an AWS account or leave credit card details behind. You can try it yourself for free in your browser through Jupyter notebooks! Additionally, AWS announced SageMaker Canvas: a visual, no-code machine learning capability for business analysts. This allows them to get started with ML without extensive experience and get more insights from their data. The third chapter in the SageMaker saga consists of SageMaker Ground Truth Plus. With this new service, you hire a team of experts to train and label your data, a traditionally very labor-intensive process. According to Amazon, customers can expect to save up to 40% through SageMaker Ground Truth Plus. There were two more minor announcements: the AI & ML Scholarship Program, a free program for students to get to know ML tools, and the Lex Automated Chatbot Designer, which lets you quickly develop a smart chatbot with advanced natural language processing support.

Networking for everyone

Tired of less-than-optimal reception or a slow connection? Why not build your own private 5G network? Yep: with AWS Private 5G, Amazon delivers the hardware, management and SIM cards for you to set up your very own 5G network.
Use cases (besides being fed up with your current cellular network) include warehouses or large sites (e.g. a football stadium) that require low latency, excellent coverage and a large bandwidth. The best part? Customers only pay for the end users' usage of the network. Continuing the network theme, there's now AWS Cloud WAN. This service allows users to build a managed WAN (Wide Area Network) to connect cloud and on-premise environments with a central management UI, both at the network component level and at the service level. Lastly, there's also Amazon WorkSpaces Web. Through this service, customers can grant employees safe access to internal websites and SaaS applications. The big advantage here is that information critical to the company never leaves the environment and doesn't leave any traces on workstations, thanks to a non-persistent web browser.

Kubernetes anyone?

No AWS event goes without mentioning Kubernetes, and AWS re:Invent 2021 is no different. Amazon announced two new services in the Kubernetes space: AWS Karpenter and AWS Marketplace for Containers Anywhere. With Karpenter, managing autoscaling Kubernetes infrastructure becomes both simpler and less restrictive. It takes care of automatically starting compute when the load of an application changes. Interestingly, Karpenter is fully open-source, a trend we'll see more and more of according to Amazon. AWS Marketplace for Containers Anywhere is primarily useful for customers who've already fully committed to container-managed platforms. It allows users to search, subscribe and deploy third-party Kubernetes apps from the AWS Marketplace in any Kubernetes cluster, no matter the environment.

IoT updates

There have been numerous smaller updates to AWS's IoT services, most notably:

GreenGrass SSM, which now allows you to securely manage your devices using AWS Systems Manager
Amazon Monitron, to predict when maintenance is required for rotating parts in machines
AWS IoT TwinMaker, to easily create digital twins of real-world systems
AWS IoT FleetWise, which helps users collect vehicle data in the cloud in near-real time

Upping the serverless game

In the serverless landscape, AWS announced serverless Redshift, EMR, MSK, and Kinesis. This enables you to set up these services while the right instance type is automatically linked. If the service is not in use, the instance automatically stops. This way, customers only pay for when a service is actually being used. This is particularly interesting for experimental services and integrations in environments that do not get used very often.

Sustainability

Just like ACA Group's commitment to sustainability, AWS is serious about its ambition of net-zero carbon by 2040. They've developed the AWS Customer Carbon Footprint Tool, which lets users calculate the carbon emissions of their AWS usage. Other announcements included AWS Mainframe Modernization, a collection of tools and guides to take over existing mainframes with AWS, and updates to the AWS Well-Architected Framework, a set of design principles, guidelines, best practices and improvements to validate sustainability goals and create reports.

We can't wait to start experimenting with all the new additions and improvements announced at AWS re:Invent 2021. Thanks for reading! Discover our cloud hosting services

Read more
cloudbrew 2024
Reading time 4 min
6 MAY 2025

The yearly inspiring Azure conference CloudBrew, organized by the Azure User Group, took place on December 12th and 13th, 2024. The best speakers from all over Europe were invited to share their experiences and knowledge of the latest developments in Azure. ACA Group is one of the partners of CloudBrew and the Azure User Group in Belgium. This allowed us not only to participate, but also to have selected customers join us in the ever-evolving world of Azure at the event. After two action-packed days, we want to highlight the topics that touched and inspired us.

Opening Keynote

The opening keynote was delivered by none other than Sakari Nahi, the CEO of Zure. He talked about the advances in AI and specifically how it will impact the hard-working engineers and architects in the public cloud. There's no denying that AI will impact how we use and work with Azure. This may make us worried on different levels: some may worry that we will be replaced by AI, while others will have privacy concerns. Wherever you are on this scale, the AI revolution is positive for everyone. We have to see ourselves as operators of AI, not as replacements.

It's always DNS

Rik Hepworth and his magnificent gray hair were present to explain how it's always DNS. It's always DNS in the sense that no matter what you decide to deploy, DNS has to be taken into consideration. Here we learned the importance of centralizing DNS with a hub-and-spoke configuration. The Azure Private DNS Resolver is taking over as the go-to solution for DNS conditional forwarding. By centralizing this in the hub, it's possible to scale the environment into as many virtual networks and subscriptions as required. Avoid setting up Private DNS zones in individual spoke subscriptions, as this is a recipe for management disaster as the environment grows.

Orchestration vs Choreography

As our daily work rarely involves software architecture, we weren't sure what to expect from this talk. However, we were pleasantly surprised as Laila Bougria delivered one of the most captivating sessions on building microservice-based application architectures. Orchestration uses a coordination and management system between the different software components. Choreography, on the other hand, is decentralized and in some cases easier to maintain. There is no card to trump them all; it all depends on your situation. After this awesome session from Laila, we got more interested in the topic. You can find more about her work here.

GPT-4 vs Starcraft II - Strategic Decision Making using Large Language Models

AI is everywhere, and CloudBrew is no exception. This session from Alan Smith provided a practical example of how you can integrate GPT-4 into an existing system, in this case Starcraft II. Starcraft II is a strategy game released in 2010, and there is a wealth of information on the Internet on how to beat your opponent. All of this information has been picked up by GPT-4, and if you ask it, based on the state of the game (such as what the opponent is doing), it can devise a strategy in text format to counter it. This was a live demonstration in which the screen output of Starcraft II was fed to GPT-4. GPT-4 returned a set of instructions on what the next actions should be to counter and beat the opponent. These were translated into in-game activities, and we could watch in awe how GPT-4 was beating the opponent. Granted, in this scenario GPT-4 was playing against the computer, which is also considered a type of AI.
AI beating AI feels like we are coming full circle, in a surprisingly positive kind of way 🙂

Seriously securing an Azure PaaS application

Joonas Westlin gave a lecture on Azure security for PaaS solutions, and he delivered it with a good dose of humor and relatable real-world examples. We started with the basics: affordable, standard solutions like Private Endpoints and Network Security Groups. Nice and simple, and budget-friendly. But the further you move the security slider up, the more serious it gets. Think WAF (Web Application Firewall) and Application Gateway, which lock down your environment completely, but can also stretch your budget significantly. Joonas presented his story in a way that made you laugh regularly because it was so relatable. His anecdotes struck exactly the right chord, while also providing useful insights, especially on how to smartly balance security and costs without feeling like you're running a fortress.

Ending with Amazement

Although CloudBrew has only just passed, we can't help but look back with a smile. The conference, the speaker lineup, and their engaging topics were nothing short of exceptional. The knowledge and insights we gained in just two days would have taken months to acquire elsewhere. We're already looking forward to CloudBrew 2025 and hope to see you there as well!

Read more
Customer case: setting up Azure B2C with Liferay integration
Reading time 4 min
6 MAY 2025

With the growing need for seamless user experiences and robust security measures, integrating advanced identity management solutions like Azure AD B2C with platforms such as Liferay has become essential. This article explores how ACA Group helped a company successfully implement Azure B2C to enhance their customer portal, ensuring a streamlined and secure experience for their users. From understanding the fundamentals of Azure B2C to tackling the challenges of integration, this case study provides valuable insights into the process and benefits of modern identity management solutions.

What is Azure AD B2C?

Azure AD B2C is a cloud-based identity provider designed for businesses to manage user identities securely and easily. It focuses on external users like customers, partners, and vendors, offering a scalable solution for login credentials and identity verification. Azure B2C aims to simplify user sign-up and registration processes while providing extensive customization options to tailor the user experience and integrate seamlessly with existing applications.

Key features of Azure B2C

Supports various identity providers, including Facebook, X, and LinkedIn.
Provides a secure framework for managing personal data and ensures compliance with regulations.
Manages access to multiple applications with a single account, enhancing security.
Improves the overall user experience by recognizing the importance of digital identity in online interactions.

Customer case: customer portal authentication

Context

This case involves a company managing air traffic within Belgian airspace, ensuring the safety, efficiency, and punctuality of flights. They oversee flight management, navigation, communication systems, and meteorological services, working closely with airlines, airports, and international air traffic control centers. Their customer portal serves as a centralized platform for clients to access vital information about operations and services, ensuring transparency and efficient communication. With hundreds of daily users, the portal plays a crucial role in maintaining efficient communication and customer satisfaction.

Solution Approach

We approached this case methodically and collaboratively. We started with a test design to outline our solution, making sure it matched the customer's needs. We then discussed it with the customer to gather their feedback. After considering their input, we went back to refine our approach. Realizing a tailored solution was necessary, we decided to implement custom policies. This iterative process allowed us to adapt and fine-tune our solution, ensuring it perfectly met the customer's expectations.

Challenges

We developed custom policies to fully integrate with Liferay, which required detailed customization using XML files. The login process was tailored for SAML 2.0 authentication, customizing everything from personal details like names to preferences like language and business phone. Meeting the client's requirements was crucial, so certain fields were mandatory and others had specific formatting needs. Every step, from creating profiles to sending data to Liferay, was meticulously customized to match the project's goals. Although complex, this project was an exciting challenge that showcased our team's problem-solving skills and creativity.

Lessons Learned

Testing by non-technical users was a game-changer, helping us spot issues early on. Regular updates with the client kept everyone in the loop and allowed us to make timely modifications.
By involving non-technical stakeholders and keeping communication open, we quickly addressed concerns and delivered a top-notch solution. This collaborative approach built trust and ensured everyone was on the same page, leading to a successful project outcome.

Our Contributions to Azure B2C

Working with Azure B2C showed us just how crucial custom policies are for a smooth system. These policies are the backbone of our SAML 2.0 integration, making identity management secure and efficient. We developed a custom B2C login portal to enhance user experience, tailored to fit the organization's needs. This portal simplifies registration and acts as a bridge, transferring user info to Liferay. After registration, user data flows into Liferay, automatically creating a user profile. This integration makes onboarding easy, allowing the customer affairs team to quickly assign account privileges. Creating profiles in both Azure B2C and Liferay keeps data consistent across platforms. Once profiles are created, we verify the accuracy and legitimacy of user information. After verification, users gain access to a secure and personalized customer portal on Liferay, providing a centralized and streamlined experience for all interactions.

Optimizing User Journeys

By integrating custom policies, SAML 2.0, Azure B2C, and Liferay, we created a smoother, more efficient user experience. This seamless connection automates tasks like user creation and verification, making registration hassle-free. The result? A faster process that saves time, reduces frustration, and boosts user engagement and satisfaction.

Conclusion

Integrating custom policies, SAML 2.0, Azure B2C, and Liferay creates a solid foundation for secure user sign-up and access management. These tools help organizations deliver personalized, trusted user experiences. Ready to optimize your user journeys? Reach out to our team at hello@acagroup.be. We'd love to help you get started!

Read more
wind mills carbon footprint
Reading time 3 min
6 MAY 2025

The world is rapidly changing, both from a technological and an environmental point of view. Often, these challenges go hand in hand, for example through the push towards electric vehicles, smart homes and sustainable energy. But while there has been a longstanding focus on the automotive, manufacturing and agricultural industries, there is no pathway to a cleaner environment without addressing the sizable energy consumption of data centers and cloud computing.

The carbon footprint of cloud computing

According to the International Energy Agency's (IEA) latest report, data centers around the world used 220 to 320 TWh of electricity in 2021, which is around 0.9 to 1.3% of the global electricity demand. In addition, global data transmission networks consumed 260-340 TWh, or 1.1 to 1.4% of electricity. Combined, data centers and transmission networks contribute around 0.9% of energy-related emissions. While these may seem like fairly low numbers, the demand for data services is rising exponentially. Global internet traffic surged over the past decade, an evolution that accelerated during the pandemic. Since 2010, the number of internet users across the world has more than doubled and global internet traffic has increased 15-fold, or 30% per year. This means that the carbon footprint of cloud computing is something all companies, large or small, must consider. But what can you do without sacrificing the computing power needed to support innovation and deliver goods and services as promised?

Amazon Web Services (AWS)

While cloud computing also comes with a footprint, it offers a much more eco-friendly way to operate your IT systems than local servers. That's why we believe a cloud-first approach is key to making your business more sustainable, especially when cloud-based technologies are powered with renewable energy. That's why ACA Group carefully chooses its partnerships and evaluates the environmental impact of those partners. In this context, we have selected AWS as a cloud provider. Combined with our flexible Kubernetes setups, it allows us to opt for the least amount of carbon emissions while still meeting (and even exceeding) the expectations of our customers. It shows that cloud computing needs do not have to come at the planet's expense.

But why AWS? As the world's most prominent cloud provider, Amazon Web Services is focused on efficiency and continuous innovation across its global infrastructure. In fact, they are well on their way to powering their operations with 100% renewable energy by 2025. Amazon recently became the world's largest corporate purchaser of renewable energy; their investments supply enough electricity to power 3 million US households for a year.

Efficient computing

Creating clean energy sources is essential, but no less important is rethinking how computing resources are allocated. In a cloud efficiency report, 451 Research showed that AWS's infrastructure is 3.6 times more energy efficient than the median of the U.S. enterprise data centers they surveyed. Amazon attributes this greater efficiency to, among other things, removing the central uninterruptible power supply from their data center design and integrating small battery packs and custom power supplies into the server racks. These changes combined reduce energy conversion loss by about 35%. The servers themselves are more efficient as well: their Graviton2 CPUs are extremely power-efficient and offer better performance per watt than any other processor currently in use in Amazon data centers.
AWS offers unlimited access to cloud computing and services. While this comes at a price, efficient use of resources not only reduces costs, but also indirectly reduces carbon emissions. How can you achieve this?

Build applications that are resource-efficient.
Consume resources with the lowest possible footprint.
Maximize the output on resources used.
Reduce the amount of data and distance traveled across the network.
Use resources just-in-time.

➡️ Curious how we at ACA Group set up our cloud stacks for maximum sustainability without giving up power, availability and flexibility? Talk to us here!

Read more
How to build a highly available Atlassian stack on Kubernetes
Reading time 7 min
6 MAY 2025

Within ACA, there are multiple teams working on different (or the same!) projects. Every team has their own domains of expertise, such as developing custom software, marketing and communications, mobile development and more. The teams specialized in Atlassian products and cloud expertise combined their knowledge to create a highly available Atlassian stack on Kubernetes. Not only could we improve our internal processes this way, we could also offer this solution to our customers! In this blog post, we'll explain how our Atlassian and cloud teams built a highly available Atlassian stack on top of Kubernetes. We'll also discuss the benefits of this approach as well as the problems we've faced along the path. While we're damn close, we're not perfect after all 😉 Lastly, we'll talk about how we monitor this setup.

The setup of our Atlassian stack

Our Atlassian stack consists of the following products:

Amazon EKS
Amazon EFS
Amazon EBS
Amazon RDS
Atlassian Jira Data Center
Atlassian Confluence Data Center
Atlassian Bitbucket Data Center

As you can see, we use AWS as the cloud provider for our Kubernetes setup. We create all the resources with Terraform. We've written a separate blog post on what our Kubernetes setup exactly looks like; you can read it here! The image below should give you a general idea. The next diagram should give you an idea about the setup of our Atlassian Data Center. While there are a few differences between the products and setups, the core remains the same.

The application is launched as one or more pods described by a StatefulSet. The pods are called node-0 and node-1 in the diagram above. The first request is sent to the load balancer and will be forwarded to either the node-0 pod or the node-1 pod. Traffic is sticky, so all subsequent traffic from that user will be sent to the same node. Both node-0 and node-1 require persistent storage, which is used for plugin cache and indexes. A different Amazon EBS volume is mounted on each of the pods. Most of the data, like your Jira issues, Confluence spaces and so on, is stored in a database. The database is shared: node-0 and node-1 both connect to the same database. We usually use PostgreSQL on Amazon RDS. The node-0 and node-1 pods also need to share large files which we don't want to store in a database, for example attachments. The same Amazon EFS volume is mounted on both pods. When changes are made, for example an attachment is uploaded to an issue, the attachment is immediately available on both pods. We use CloudFront (CDN) to cache static assets and improve web response times.

The benefits of this setup

By using this setup, we can leverage the advantages of Docker and Kubernetes and the Data Center versions of the Atlassian tooling. There are a lot of benefits to this kind of setup, but we've listed the most important advantages below.

It's a self-healing platform: containers and worker nodes will automatically replace themselves when a failure occurs. In most cases, we don't even have to do anything and the stack takes care of itself. Of course, it's still important to investigate any failures so you can prevent them from occurring in the future.
Exactly zero downtime deployments: when upgrading the first node within the cluster to a new version, we can still serve the old version to our customers on the second. Once the upgrade is complete, the new version is served from the first node and we can upgrade the second node. This way, the application stays available, even during upgrades.
Deployments are predictable: we use the same Docker container for development, staging and production. That's why we are confident the container will be able to start in our production environment after a successful deploy to staging.
Highly available applications: when a failure occurs on one of the nodes, traffic can be routed to the other node. This way you have time to investigate the issue and fix the broken node while the application stays available.
It's possible to sync data from one node to the other. For example, syncing the index from one node to the other to fix a corrupt index can be done in just a few seconds, while a full reindex can take a lot longer.
You can implement a high level of security on all layers (AWS, Kubernetes, application, …):
AWS CloudTrail helps detect unauthorized access on AWS and sends an alert in case of an anomaly.
AWS Config prevents AWS security group changes. You can find out more on how to secure your cloud with AWS Config in our blog post.
Terraform makes sure changes to the AWS environment are approved by the team before rollout.
Since upgrading Kubernetes master and worker nodes has little to no impact, the stack is always running a recent version with the latest security patches.
We use a combination of namespacing and RBAC to make sure applications and deployments can only access resources within their namespace, with least privilege.
NetworkPolicies are rolled out using Calico. We deny all traffic between containers by default and only allow specific traffic (a minimal sketch of such a default-deny policy follows below).
We use recent versions of the Atlassian applications and implement Security Advisories whenever they are published by Atlassian.

Interested in leveraging the power of Kubernetes yourself? You can find more information about how we can help you on our website!
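As a small, hypothetical illustration of the default-deny approach mentioned above (our actual policies are managed with Calico and rolled out through our infrastructure-as-code pipeline), the sketch below uses the official Kubernetes Python client to create a policy that selects all pods in a placeholder "atlassian" namespace and allows no ingress, so every permitted flow has to be whitelisted explicitly afterwards.

```python
from kubernetes import client, config

# Assumes local kubeconfig access; the "atlassian" namespace is a placeholder.
config.load_kube_config()
networking = client.NetworkingV1Api()

# Default-deny: an empty pod selector matches every pod in the namespace,
# and with no ingress rules defined, all incoming traffic is blocked until
# more specific NetworkPolicies explicitly allow it.
deny_all_ingress = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress", namespace="atlassian"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),
        policy_types=["Ingress"],
    ),
)

networking.create_namespaced_network_policy(namespace="atlassian", body=deny_all_ingress)
```

With a policy like this in place, traffic between the application nodes (for example the cluster communication between node-0 and node-1) has to be opened up deliberately with additional, narrowly scoped policies.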
Problems we faced during the setup

Migrating to this stack wasn't all fun and games. We've definitely faced some difficulties and challenges along the way. By discussing them here, we hope we can facilitate your migration to a similar setup!

Some plugins (usually older plugins) only worked on the standalone version of the Atlassian application. We needed to find an alternative plugin or use vendor support to get the same functionality on Atlassian Data Center.
We had to make some changes to our Docker containers and network policies (i.e. firewall rules) to make sure both nodes of an application could communicate with each other.
Most of the applications have some extra tools within the container, for example Synchrony for Confluence, Elasticsearch for Bitbucket, EazyBI for Jira, and so on. These extra tools all needed to be refactored for a multi-node setup with shared data.
In our previous setup, each application was running on its own virtual machine. In a Kubernetes context, the applications are spread over a number of worker nodes. Therefore, one worker node might run multiple applications.
Each node of each application is scheduled on a worker node that has sufficient resources available. We needed to implement good placement policies so that each node of each application has sufficient memory available, and we also had to make sure one application could not affect another when it asks for more resources.
There were also some challenges regarding load balancing. We needed to create a custom template for the NGINX ingress controller to make sure websockets work correctly and all health checks within the application report a healthy status. Additionally, we needed a different load balancer and URL for our Bitbucket SSH traffic compared to the web traffic to the Bitbucket UI.
Our previous setup contained a lot of data, both on the filesystem and in the database. We needed to migrate all of it to an Amazon EFS volume and a new database in a new AWS account. It was challenging to find a consistent sync process that didn't take too long, because during the migration all applications were down to prevent data loss. In the end, we were able to meet these criteria and migrate successfully.
Monitoring our Atlassian stack
We use the following tools to monitor all resources within our setup:
- Datadog to monitor all components created within our stack and to centralize logging of all components. You can read more about monitoring your stack with Datadog in our blog post.
- New Relic for APM monitoring of the Java process (Jira, Confluence, Bitbucket) within the container.
If our monitoring detects an anomaly, it creates an alert within Opsgenie. Opsgenie makes sure this alert is sent to the team or the on-call person responsible for fixing the problem. If the on-call person does not acknowledge the alert in time, it is escalated to the team responsible for that specific alert (a minimal sketch of raising such an alert via the Opsgenie API is included at the end of this article).
Conclusion
In short, we are very happy we migrated to this new stack. Combining the benefits of Kubernetes and the Atlassian Data Center versions of Jira, Confluence and Bitbucket feels like a big step in the right direction. The improvements in self-healing, deploying and monitoring benefit us every day, and maintenance has become a lot easier.
Interested in your own Atlassian stack? Do you also want to leverage the power of Kubernetes? You can find more information about how we can help you on our website!
Our Atlassian hosting offering
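As referenced above, here is a minimal sketch of raising an alert through the Opsgenie Alert API (v2), roughly what a monitoring integration does when an anomaly is detected. The API key, team name and alert details are placeholders.

```python
# Minimal sketch: create an Opsgenie alert via the Alert API (v2).
# The API key, responder team and alert content are placeholders.
import requests

OPSGENIE_API_KEY = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # integration API key

response = requests.post(
    "https://api.opsgenie.com/v2/alerts",
    headers={
        "Authorization": f"GenieKey {OPSGENIE_API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "message": "Jira node 1: health check failing",
        "description": "Readiness probe failed 3 times in a row on jira-0",
        "priority": "P2",
        "responders": [{"type": "team", "name": "atlassian-hosting"}],
        "tags": ["kubernetes", "jira"],
    },
    timeout=10,
)
response.raise_for_status()
print(response.json())  # contains a requestId that can be used to track the alert
```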

Reading time 7 min
6 MAY 2025

Imagine you could transform your cloud strategy into a finely tuned machine that reduces costs and drives maximum business value. That's exactly what we did for one of our customers by implementing FinOps with Azure. Through targeted optimizations and a strong focus on organizational alignment, we helped our customer save thousands of euros on their Azure bill, while setting up a sustainable framework to keep cloud costs under control. Curious about how FinOps can help you optimize running costs and scale better? Read all about it in this blog.
Long-running history in CapEx cost payment
This large organization has been a customer of ACA Group for a couple of years. For a while now, they have been moving more and more workloads from on-premises into Azure. The finance department was still handling the budgeting, relying on Capital Expenditure (CapEx), where IT infrastructure costs are paid and known upfront. This is in contrast to Operational Expenditure (OpEx), where costs fluctuate daily based on the actual usage of the digital resources consumed in Azure. Our customer had allocated a substantial monthly budget for Azure, which was consistently adhered to. As a result, there was no internal trigger to explore FinOps practices.
How ACA discovered a substantial IT cost-saving opportunity
While ACA was assisting this customer with their workload migration, we noticed a familiar pattern: FinOps had never been considered. Virtual machines for all environments ran 24/7 without Reserved Instances, and non-production Storage Accounts were using costly geo-replication. This triggered us to make a quick overview of potential savings, which we suggested along with a full FinOps exercise. The immediate savings were so compelling that the customer quickly agreed to our proposal.
What is a FinOps exercise?
When we talk about FinOps, we are referring to the standards set by the FinOps Foundation. This is a large project by the Linux Foundation with a huge community of more than 23,000 members and 10,000 businesses. In a FinOps exercise, we guide our customers through two deliverables:
- FinOps assessment: This focuses on the organizational alignment of our customer, emphasizing that FinOps is a shared responsibility. An engineer deploying a resource in Azure must consider costs like sizing and SKU, while the business department has to ensure an adequate budget for projects and resources. This mindset has to extend across the entire organization.
- Technical evaluation: This focuses on the current setup and how it can be optimized for cost savings. We analyze the entire Azure environment to detect optimization opportunities.
Cost savings vs. value maximization
The goal of FinOps is not to minimize cloud spend, but to maximize the value our customers gain by using cloud services. This distinction is key, but often misunderstood. Every resource in Azure should be used in a way that delivers the highest possible business value. Maximizing business value also helps minimize the ecological footprint of our customers. It's an outcome that aligns closely with ACA's commitment to sustainability.
Optimizing the customer's web application
Let's take a look at our customer's web application running on an Azure App Service. Each user interaction generates load on the system and value for the business. For simplicity, let's say the business value is 1 EUR every time a user opens the web application. With thousands of users, the application delivers thousands of euros in value.
Our job is to ensure the App Service is optimized to handle this demand effectively, maximizing business value. If we need to scale the App Service out, that's a good thing! As long as we are using the most efficient resources and settings, we increase capacity and help the customer generate even more value.
Selecting key focus points with all stakeholders
The FinOps assessment involves multiple workshops with key stakeholders of our customer. We brought the customer's Finance, Business, Engineering and Operations teams together to show how they all play a part in cloud costs. With over 20 Target Capability Scopes in FinOps, the customer selects a few key areas to focus on for optimization. In this case, the customer selected the following:
- Anomaly Management: addresses unexpected or abnormal cloud spending patterns. For example, in 2024 the customer experienced a surge in cost for a virtual machine scale set over a couple of weeks. They realized detection took too long and wanted better controls to prevent this.
- Rate Optimization: ensures the most cost-effective pricing models and discounts are used. Before starting the FinOps exercise, we had already identified potential savings, for example by using Reserved Instances. In addition, we analyzed the rates they were paying for Azure resources.
- Workload Optimization: ensures resources such as App Services and Virtual Machines are used efficiently. For instance, does it make sense for a non-production environment to have its resources running 24/7?
Assessing Target Capabilities through workshops
Together with the customer we set goals for each Target Capability. For example, they said that Anomaly Management is very important to them and that they are aiming to become a Knowledge Leader in that area. During the workshops with all stakeholders, our role was to ask the right questions to assess the selected Target Capability Scopes. For Anomaly Management, it became evident they were still in the early stages, earning a "1/Partial Knowledge" evaluation in that section. Once all the workshops were completed, we compiled a final standing on all the Target Capability scores. This gave the customer a benchmark, meaning that at the next evaluation in four months we will be able to see how far they have come with regard to their targets.
Diving into technical details
With the workshops behind us, we could focus on the written report and dive further into the technical details of the customer's Azure environment. This involved a two-part process:
- Part 1: Data collection. We ran information-gathering scripts to extract configuration details and present them in a more readable format.
- Part 2: Manual review. We manually analyzed the output from tools like Azure Cost Management and Advisor.
Key areas for cost savings
Having both the output from the scripts and the data from the portal, we saw that the following areas could yield the largest savings:
- Implement a Savings Plan / Reserved Instances
- Reconfigure redundancy for all Storage Accounts (they were all set to Geo-Redundant Storage (GRS)): non-production to Locally Redundant Storage (LRS), production to Zone-Redundant Storage (ZRS)
- Schedule shutdown and removal of non-production compute resources (a minimal shutdown sketch is included at the end of this article)
- Implement a governance framework (Azure Policies) to avoid deployment of excessive SKUs.
For example, expensive Azure Virtual Machines with NVIDIA GPUs should be prohibited.
- Implement budget thresholds and anomaly alerts
- Implement the FinOps Toolkit along with the Power BI reporting
- Leverage Anodot for a single pane of glass across their multiple environments
We compiled our findings and recommendations into a written report. Together with the customer we reviewed the report and outlined actionable next steps.
💡 One of the recommendations was to implement the FinOps Toolkit. This is a set of controls, Power BI reports and workbooks aligned with the FinOps framework. We were happy to see that they embraced this along with our other recommendations.
Looking ahead with renewed focus on business value
Going forward, the customer will continue to adopt Azure for their workloads. The difference from now on is that they will do it with the mindset of maximizing business value. In four months, ACA will reassess the status of the FinOps journey and help them measure how far they have come. The long-term strategy also involves leveraging the strategic partnership ACA has with Anodot for FinOps. Together, we push the boundaries of FinOps by combining cost efficiency with carbon accountability. This brings a standard toolset and unified visibility of FinOps across all their environments.
➡️ At the ACA Group we are experts in FinOps! Let us guide you through the FinOps journey to ensure you are unlocking the full potential of your cloud investment.
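As referenced in the list of savings areas, below is a minimal sketch of the kind of scheduled shutdown script that deallocates non-production virtual machines. The subscription ID and tag convention are assumptions for illustration; in practice this would run on a schedule, for example from an Azure Automation runbook or a pipeline job.

```python
# Minimal sketch: deallocate all virtual machines tagged environment=non-production,
# so their compute is no longer billed outside working hours. Subscription ID and
# tag names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

for vm in compute.virtual_machines.list_all():
    tags = vm.tags or {}
    if tags.get("environment") == "non-production":
        # The resource group name is embedded in the VM's resource ID
        resource_group = vm.id.split("/")[4]
        print(f"Deallocating {vm.name} in {resource_group} ...")
        compute.virtual_machines.begin_deallocate(resource_group, vm.name).result()
```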

Reading time 6 min
6 MAY 2025

In Belgium, there's a saying, "Every Belgian is born with a brick in their stomach," reflecting the nation's deep-rooted drive to build homes that last. But this principle doesn't just apply to houses; it's equally true for your cloud infrastructure. Without a strong foundation, your Azure workloads risk becoming unstable, inefficient, or even vulnerable. That's where Microsoft's Well-Architected Framework (WAF) comes in. Read on to discover how this framework's five pillars can turn your cloud workload into a structure built to last.
What is the Well-Architected Framework (WAF)?
The Well-Architected Framework helps you build secure, high-performing, resilient, and efficient infrastructure and applications on Azure. By following the guidelines in this framework, you ensure that your cloud infrastructure follows the recommendations and standards set by Microsoft. The framework consists of five pillars:
- Reliability
- Security
- Cost Optimization
- Operational Excellence
- Performance Efficiency
(Image source: https://learn.microsoft.com/en-us/azure/well-architected/)
Each of these pillars offers valuable guidance and best practices, but they also involve tradeoffs. Every decision - whether financial or technical - comes with its own set of considerations. For example, while securing workloads is important, it comes with added costs and potential technical implications. Let's take a closer look at each of the five pillars of the Well-Architected Framework.
Reliability
Failures are inevitable, no matter how much we wish otherwise. That's why designing systems with failure in mind is crucial. A workload must survive failures while continuing to deliver services without disruption. This requires more than just designing your workload for failures; it also means setting reliable recovery targets and conducting sufficient testing. First you need to identify the reliability targets. After all, making everything geo-redundant is great - but it comes with a cost for the business. Once your reliability targets are identified, the next step is to map the redundancy levels to Azure technologies. Considering only the compute parts of an application is not enough; you also need to take into account the supporting components, such as network, data and other infrastructure tiers. Deep dive into the Microsoft checklist: https://learn.microsoft.com/en-us/azure/well-architected/reliability/checklist
Security
All workloads should be built around the zero-trust approach. A secure workload is resilient to attacks while ensuring confidentiality, integrity and availability. Just like availability, confidentiality and integrity come with multiple options - each with its own impact on cost and complexity. For instance, how important is Encryption in Use? Answering this question can significantly shape your solution. Security isn't a one-layer fix; it must be applied at every level. While it's standard practice to route all incoming (ingress) traffic through a firewall, the same must be done for outgoing (egress) traffic. Ensuring all outgoing traffic is approved and routed through a firewall is essential. There are additional ways to secure communication within your Azure environment. Using Private Endpoints is essential for secure communication between application components, offering better protection than Service Endpoints, which are cheaper but carry the risk of data exfiltration. Don't overlook Azure DDoS Protection either.
DDoS attacks can target any publicly accessible endpoint, potentially causing downtime and forcing your environment to scale up and out. This not only slows down your workload but also leaves you with a large consumption bill. The comprehensive checklist from Microsoft is available here: https://learn.microsoft.com/en-us/azure/well-architected/security/checklist
Cost Optimization
Any architecture design and workload is driven by business goals. The focus of this pillar is not on cutting costs to the minimum; it's about finding the most cost-effective solution. This pillar aligns closely with the FinOps framework, which we have covered here. A good first step is to create a cost model to estimate the initial cost, run rates, and ongoing costs. This model provides a baseline against which to compare the actual cost of the environment on a daily basis. The work doesn't stop there: it's essential to set up anomaly alerts that notify you when the expected baseline is exceeded. It's also important to optimize the scaling of your application. Can your resources scale both out and up? Which approach is the most cost-effective and delivers the best results? Certain applications may hit a performance plateau when scaling up, which is where you add CPU and memory. Perhaps the application can only handle a minor extra load once you reach 256 GB of memory. Instead, it may be more beneficial to scale out by adding more instances rather than simply scaling up with additional compute power (an illustrative calculation is included at the end of this article). The comprehensive checklist from Microsoft is available here: https://learn.microsoft.com/en-us/azure/well-architected/cost-optimization/checklist
Operational Excellence
At the core of Operational Excellence are DevOps practices, which define the operating procedures for development practices, observability and release management. One key goal in this pillar is to reduce the chance of human error. It's important to approach implementations and workloads with a long-term vision. Take the distinction between ClickOps and DevOps as an example. While it's tempting to quickly set up resources using the Azure Portal (ClickOps), this builds up technical debt. Instead, adopting a DevOps approach helps you build a more sustainable, efficient, and automated workflow for the future. Read our in-depth blog about moving from ClickOps to DevOps for more details. Always use a standardized Infrastructure as Code (IaC) approach. Formalize the way you handle operational tasks with clear documentation, checklists, and automation. This ties into what we covered under Reliability, but focuses on processes. Make sure you have a strategy to address unexpected rollout issues and recover swiftly. The comprehensive checklist from Microsoft is available here: https://learn.microsoft.com/en-us/azure/well-architected/operational-excellence/checklist
Performance Efficiency
This pillar is all about your workload's ability to adapt to changing demand. Your application must be able to handle increased load without compromising the user experience. Think about the thresholds you use to scale your application. How quickly can Azure resources scale up or out? Consider traffic patterns, as there may be high load during certain hours, like in the morning. Perhaps you can schedule scaling in advance to ensure resources are available when needed. The overall recommendation is to make performance a priority at every stage of the design. As you move through each phase, you should regularly test and measure performance.
This will provide valuable insights, helping you identify and address potential issues before they become problems. The checklist from Microsoft is valuable: https://learn.microsoft.com/en-us/azure/well-architected/performance-efficiency/checklist
Start optimizing your Azure workload today!
Our team of experts is ready to assist you in applying the Well-Architected Framework to your Azure environment. Let's ensure your workload is secure, cost-optimized, and ready for the future.
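To make the scale-up versus scale-out trade-off from the Cost Optimization pillar concrete, here is the small illustrative calculation referenced above. The prices and throughput figures are invented for the example; the point is comparing the cost per handled request once a large instance hits its performance plateau.

```python
# Illustrative back-of-the-envelope calculation (not real Azure pricing): compare
# the cost per handled request of one large instance versus several smaller ones.
options = {
    # name: (instances, hourly cost per instance, requests/hour per instance)
    "scale-up: 1 x 256 GB instance": (1, 4.00, 90_000),   # plateaus: extra memory adds little throughput
    "scale-out: 4 x 32 GB instances": (4, 0.60, 30_000),
}

for name, (count, cost_per_hour, reqs_per_hour) in options.items():
    total_cost = count * cost_per_hour
    total_reqs = count * reqs_per_hour
    print(f"{name}: {total_cost:.2f} EUR/h for {total_reqs} req/h "
          f"-> {total_cost / total_reqs * 1000:.3f} EUR per 1,000 requests")
```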

Reading time 6 min
6 MAY 2025

For effective cloud management in today's digital world, organizations demand speed, security, and efficiency. However, many still rely on a manual configuration approach known as ClickOps, using the Azure portal for deployments. While easy to start with, ClickOps can result in slower deployment times, misconfigurations, and limited scalability. The solution is an Infrastructure as Code (IaC) and DevSecOps mindset. This blog covers:
- The six key challenges of ClickOps
- How IaC and DevSecOps solve these challenges
- Practical steps to secure and scale your Azure environment
The challenges of ClickOps (and their DevSecOps solutions)
According to the Global DevSecOps report from July 2024, only 56% of organizations have implemented DevSecOps practices. This leaves many relying on ClickOps, manually deploying infrastructure via the Azure portal GUI. ClickOps offers a low entry barrier, making it tempting for teams to quickly set up infrastructure without any governance framework. While this approach is easy to get started with, it creates growing technical debt and operational challenges over time. Below, we explore the six biggest challenges of ClickOps and how IaC and DevSecOps can overcome them.
1) Technical debt with hidden costs
ClickOps may seem like an easy way to deploy resources in Azure. After all, it is just a few clicks in the portal, right? But as organizations scale, this approach becomes a costly bottleneck. For example: deploying a virtual machine in the Azure portal requires navigating eight tabs, each with important information that has to be filled in correctly before the resource can be deployed. While manageable for a single virtual machine, it becomes increasingly difficult to ensure consistent and error-free entries for larger deployments. Over time, the limitations of ClickOps become painfully clear. Routine tasks, such as adding additional disks to multiple virtual machines with specific configurations, are time-consuming and repetitive.
The solution: Automating deployments with IaC reduces technical debt
With DevSecOps and Infrastructure as Code (IaC), deployments are automated and executed according to the defined security policies. Adjustments, such as changing or updating resources like virtual machines, are a matter of updating parameters and initiating the deployment pipeline.
2) Slower time-to-market with repetitive tasks
ClickOps involves a lot of manual and repetitive work and increases the risk of human error. Setting up multiple resources with a similar configuration slows time-to-market, especially in cloud environments where speed is crucial.
The solution: Streamlined deployment with reusable IaC templates
IaC provides reusable libraries and catalogs of pre-configured resources. Teams can deploy environments faster and use more cost-efficient setups of cloud resources.
3) Managing multiple environments
ClickOps makes it difficult to maintain consistency across different environments, such as test and production. Manual setup often requires manual checks to ensure that environments are identical, which is not only inefficient but also prone to mistakes.
The solution: Consistency through IaC automation
Infrastructure as Code enables teams to use a test environment as a blueprint for other environments such as production. The blueprint avoids manual comparison and ensures that both environments are identical. The same applies to infrastructure changes.
A change can be prepared, tested and validated in a test environment, reducing deployment stress and errors in the production environment.
4) Lack of collaboration and version control
In ClickOps, changes to infrastructure often lack version control and transparency. It's hard for teams to coordinate effectively and track who made which changes.
The solution: IaC as the single source of truth
Even when working with small teams, IaC acts as the single source of truth. It describes the actual configuration and setup of the cloud environment. Changes are tracked: who made them, what was changed, and when. Working with pull requests in Git enforces review of changes before they are applied to the actual environment, creating an extra layer of validation.
5) Disaster recovery limitations
If an environment is tampered with, or becomes partly or completely corrupt due to human error, ClickOps offers no realistic way to rebuild it. Can you imagine having to set up hundreds of Azure resources manually in another region? 🥲
The solution: Building resilience with DevSecOps
IaC and DevSecOps enable you to recreate the complete environment from source code. This approach results in a shorter Recovery Time Objective (RTO) and Recovery Point Objective (RPO) during disaster recovery.
6) Security and compliance risks
It is true that configuring new resources through ClickOps is governed by your established framework of Azure policies. Nevertheless, it is important to note that these checks occur only during or after the resource has been created.
The solution: Ensuring compliance before deployment
Having the configuration of your cloud infrastructure in code allows compliance and security scans directly on the source. Any infrastructure changes are audited, and any non-compliance is flagged prior to the actual deployment. Resolving all non-compliance before deployment ensures the security posture remains intact. Enforcing an approach where only the CI/CD pipeline is given permission to change the infrastructure creates an additional layer of security defense.
ClickOps out, DevSecOps in
To overcome these challenges, organizations should implement Infrastructure as Code (IaC) and DevSecOps. Together, they automate entire deployments while ensuring security best practices are followed.
Choosing the right IaC language
When selecting an IaC language, there are two strong options on the table:
- Bicep: Azure's native language, seamlessly integrated with Azure and directly backed by Microsoft. New Azure services are immediately supported in Bicep.
- Terraform: A cloud-agnostic option, widely supported across environments and a popular choice for organizations with multi-cloud needs. While Terraform adoption of new Azure services is fast, it is not always available on the first day of release.
The general recommendation is to choose Terraform if you are automating deployments for virtualization environments, multi-cloud scenarios, or on-premises workloads. Microsoft provides an excellent comparison, which is available here.
💡 Tip: Tools like Aztfexport can export your current Azure environment into Terraform code. This code can then be reviewed, stored in a repository, and used to provision resources consistently. The environment can be locked to prevent portal-based changes, ensuring all modifications occur through IaC and avoiding configuration drift (a minimal sketch follows below).
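As mentioned in the tip above, one way to lock an environment against ad-hoc portal changes is a management lock on the resource group. Below is a minimal sketch using the Azure Python SDK; the subscription, resource group and lock names are placeholders, and the exact model shapes may differ slightly between SDK versions.

```python
# Minimal sketch: place a ReadOnly management lock on a resource group so that
# changes can only happen after the lock is lifted, e.g. by the CI/CD pipeline.
# Subscription, resource group and lock names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ManagementLockClient

subscription_id = "00000000-0000-0000-0000-000000000000"
locks = ManagementLockClient(DefaultAzureCredential(), subscription_id)

locks.management_locks.create_or_update_at_resource_group_level(
    resource_group_name="rg-workload-prod",
    lock_name="iac-only",
    parameters={
        "level": "ReadOnly",  # or "CanNotDelete" to allow changes but block deletion
        "notes": "All changes must go through the IaC pipeline; remove this lock only via the pipeline.",
    },
)
```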
IaC and DevSecOps approach: success story at ACA Group
For one of our clients, we reverse-engineered their existing setup into Terraform code, creating a reusable template. This IaC and DevSecOps approach reduced misconfigurations by 40% and cut deployment times for new environments by 50%. At the ACA Group, every Azure environment we manage follows IaC and DevSecOps principles. Here's how we approach new and existing environments:
- Greenfield approach (starting from scratch): Establishing a new landing zone from scratch is straightforward. We utilize governance frameworks, templates, and pipelines fully aligned with the Microsoft Cloud Adoption Framework to ensure compliance and efficiency.
- Brownfield approach (optimizing existing environments): Existing setups require a more customized strategy. We use tools like Aztfexport, integrated into our existing workflows, to reverse-engineer the environment into IaC templates and ensure a seamless transition.
Preparing for the DevSecOps transformation
Transitioning to DevSecOps involves more than just technical change; it is a shift in mindset. Organizations have to evolve internal policies and processes to support IaC practices and shift to an efficient and secure cloud environment. At the ACA Group, we specialize in guiding organizations through this transformation. Whether you're starting fresh or optimizing an existing Azure environment, we're happy to help.
➡️ Ready to move beyond ClickOps?
Let us help, or talk to our expert Peter right away!

Integrating NIS2 practices in Azure
Reading time 6 min
6 MAY 2025

Cybersecurity is no longer optional: it's a cornerstone of every organization's operational resilience and compliance strategy. With the introduction of the EU's NIS2 directive, European organizations face a pressing need to meet rigorous security standards. While the NIS2 directive sets clear standards for network and information security, translating its mandates into actionable steps can be a challenge, especially for businesses using cloud platforms like Microsoft Azure. Fortunately, Azure offers a range of tools designed to simplify compliance and strengthen your security framework. In this post, we'll explain how Azure can help you align with NIS2, breaking the process down into manageable steps to help you secure your environment and stay compliant.
What is NIS2?
The EU's NIS2 (Network and Information Security Directive 2) is a cybersecurity directive introduced by the European Union to enhance the resilience and security of critical infrastructure and essential services across member states. It replaces the original NIS Directive from 2016, and member states had to transpose it into national law by October 2024. NIS2 standardizes cybersecurity practices for a wide range of sectors and organizations, from digital services, healthcare and energy to transportation and public administration. Non-compliance can result in high fines and even reputational damage, so for affected organizations adherence is crucial.
Challenges of NIS2
NIS2 covers a broad range of cybersecurity objectives, including governance, risk management and incident response. With so many objectives, it can be a daunting task for organizations to translate them into actionable steps. Are your infrastructure resources located in Azure? Then it's worth knowing that Azure offers important tools to audit your environment and help ensure it is NIS2-compliant.
How to start the implementation of NIS2?
The first step in implementing NIS2 is to break the directive into smaller, manageable controls. Once this has been done, the work of mapping the controls to the best technologies and processes can start. In this blog, we will break down a specific set of controls and map them to a technology, with a focus on Microsoft Azure and Microsoft Entra ID.
Microsoft Entra ID: identity and access management for NIS2 compliance
One of the core requirements of NIS2 is ensuring only authorized personnel have access to critical systems and data. With Microsoft Entra ID, you have a very robust set of identity and access management (IAM) tools.
Multi-factor authentication
Multi-factor authentication (MFA) has become the de facto standard security method. It requires a user to provide at least two verification factors to access the environment.
Conditional Access Policies
Conditional Access Policies are often overlooked because they seem optional, but they are essential for ensuring a secure environment. They define the conditions users must meet to access the environment, and may grant or deny access based on parameters such as:
- IP address (location)
- User group membership in Entra ID
- Device posture and compliance
A minimal sketch of creating such a policy follows at the end of this section.
Just-in-Time Access
Additional safety measures such as Just-in-Time Access, which grants specific rights for a limited time, should also be configured. As an administrator, you should maintain read-only access by default, using Just-in-Time Access to temporarily elevate privileges during approved change windows.
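As a sketch of what such a Conditional Access policy looks like in practice, the snippet below creates one through the Microsoft Graph API that requires MFA for all users on all cloud apps. The bearer token is assumed to be obtained separately (for example with MSAL and the Policy.ReadWrite.ConditionalAccess permission), and the policy starts in report-only mode.

```python
# Minimal sketch: create a Conditional Access policy via Microsoft Graph that
# requires MFA for all users. Token acquisition is assumed to happen elsewhere.
import requests

ACCESS_TOKEN = "<bearer token with Policy.ReadWrite.ConditionalAccess>"

policy = {
    "displayName": "Require MFA for all users",
    "state": "enabledForReportingButNotEnforced",  # report-only first, enforce later
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

response = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"},
    json=policy,
    timeout=10,
)
response.raise_for_status()
print("Created policy:", response.json()["id"])
```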
Security Information and Event Management
All the safety measures in the world mean nothing if you don't have an effective way to continuously monitor and respond accordingly. There are several SIEM (Security Information and Event Management) solutions available. You can adopt Microsoft Sentinel, which has both hybrid and cloud-native support.
Microsoft Sentinel
At ACA, we recommend closely monitoring the secure score of your Microsoft Entra ID tenant. It provides a quick, clear overview of your compliance progress.
Azure tools for NIS2 compliance
The Azure platform provides many tools and technologies to help you with NIS2 compliance.
Microsoft Defender for Cloud
Microsoft Defender for Cloud is an excellent tool that provides real-time threat detection along with security recommendations for different Azure resources such as VMs, SQL databases, storage and many more. This also includes monitoring for vulnerabilities, policy compliance and security misconfigurations.
Azure Policy
Azure Policy is the foundation of your governance framework and keeps your infrastructure compliant across your entire landing zone. Before provisioning your first workloads in Azure, ensure your Azure policies are configured and compliant with the NIS2 directive. Do you already have workloads in Azure? That makes policy enforcement a bit more complex, as existing environments are often set up without strict enforcement. In such cases, the policies have to run in 'audit' mode, where any non-compliance is flagged but not enforced. This approach lets organizations review and assess the impact before fully enforcing policies.
How to manage response and recovery in NIS2?
NIS2 is not only about security; it also has a compliance section on "Response and Recovery". This is where you can leverage Azure Backup, Azure Site Recovery and, of course, Infrastructure as Code. Focusing further on the actual data, we need to consider the different levels of data confidentiality. Encryption is key here, as it ensures that your data is only accessible to authorized individuals and systems. There are three main types of encryption available (a minimal audit sketch is included at the end of this article):
1. Data at rest: When you store a file in an Azure Storage Account, it is automatically encrypted using service-side encryption (SSE). There are different types of encryption at rest in Azure, depending on which service you use.
2. Data in transit: When data is transmitted over the network, there are different ways to encrypt it, the most common being Transport Layer Security (TLS). This is the primary method used when connecting to and interacting with Azure services.
3. Data in use: Data used for processing, such as data held in memory, can also be encrypted. This is often overlooked because it's more complex and the implementation varies depending on which service you use. If you use virtual machines in Azure, there's a whole area covering Confidential Computing: if someone were to try to read the memory of the host, it would be encrypted and unreadable.
Struggling with NIS2 compliance?
The journey towards NIS2 compliance is exciting, but it can also be complex. The technological aspects alone involve numerous controls across various public cloud technologies. However, achieving true compliance requires a balanced approach that integrates the technologies with robust processes and governance.
➡️ At ACA, we care deeply about security. Wherever you are in the NIS2 journey, you can count on us to guide you towards NIS2 compliance success. Questions about NIS2 compliance?
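As referenced above, here is a minimal audit sketch that lists the storage accounts in a subscription and reports their data-at-rest and data-in-transit settings (service-side encryption, HTTPS-only traffic and minimum TLS version). The subscription ID is a placeholder, and the property names follow the azure-mgmt-storage SDK.

```python
# Minimal sketch: audit storage accounts for encryption and transport settings.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"
storage = StorageManagementClient(DefaultAzureCredential(), subscription_id)

for account in storage.storage_accounts.list():
    https_only = account.enable_https_traffic_only
    min_tls = account.minimum_tls_version
    blob_sse = account.encryption.services.blob.enabled if account.encryption else None

    findings = []
    if not https_only:
        findings.append("allows plain HTTP")
    if min_tls not in ("TLS1_2", "TLS1_3"):
        findings.append(f"minimum TLS version is {min_tls}")

    status = "OK" if not findings else ", ".join(findings)
    print(f"{account.name}: SSE(blob)={blob_sse}, {status}")
```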

What does our Kubernetes setup at ACA look like?
Reading time 6 min
6 MAY 2025

At ACA, we live and breathe Kubernetes. We set up new projects with this popular container orchestration system by default, and we're also migrating existing customers to Kubernetes. As a result, the number of Kubernetes clusters the ACA team manages is growing rapidly! We've had to change our setup multiple times to accommodate more customers, more clusters, more load, less maintenance and so on.
From an Amazon ECS to a Kubernetes setup
In 2016, we had a lot of projects running in Docker containers. At that point in time, our Docker containers were running either in Amazon ECS or on Amazon EC2 virtual machines running the Docker daemon. Unfortunately, this setup required a lot of maintenance. We needed a tool that would give us a reliable way to run these containers in production. We longed for an orchestrator that would provide us with high availability, automatic cleanup of old resources, automatic container scheduling and so much more. → Enter Kubernetes! Kubernetes proved to be the perfect candidate for a container orchestration tool. It could reliably run containers in production and reduce the amount of maintenance required for our setup.
Creating a Kubernetes-minded approach
Agile as we are, we proposed the idea of a Kubernetes setup for one of our next projects. The customer saw the potential of our new approach and agreed to be part of the revolution. At the beginning of 2017, we created our very first Kubernetes cluster. At this stage, there were only two certainties: we wanted to run Kubernetes, and it would run on AWS. Apart from that, there were still a lot of questions and challenges. How would we set up and manage our cluster? Can we run our existing Docker containers within the cluster? What type of access and information can we provide the development teams? We've learned that in the end, the hardest task was not the cluster setup. Instead, creating a new mindset within ACA Group to accept this new approach, and involving the development teams in our next-gen Kubernetes setup, proved to be the harder task at hand. Apart from getting to know the product ourselves and getting other teams involved as well, we also had some other tasks that required our attention:
- we needed to dockerize every application,
- we needed to be able to set up applications in the Kubernetes cluster that were highly available and, if possible, also self-healing,
- and clustered applications needed to be able to share their state using the available methods within the selected container network interface.
Getting used to this new way of doing things, in combination with other tasks like setting up good monitoring, having a centralized logging setup and deploying our applications in a consistent and maintainable way, proved to be quite challenging. Luckily, we were able to conquer these challenges, and about half a year after we'd created our first Kubernetes cluster, our first production cluster went live (August 2017). These were the core components of our toolset anno 2017:
- Terraform to deploy the AWS VPC, networking components and other dependencies for the Kubernetes cluster
- Kops for cluster creation and management
- An EFK stack for logging, deployed within the Kubernetes cluster
- Heapster, InfluxDB and Grafana in combination with Librato for monitoring within the cluster
- Opsgenie for alerting
Nice!
… but we can do better: reducing costs, components and downtime
Once we had completed our first setup, it became easier to reuse the same topology, and we continued implementing this setup for other customers. Through our infrastructure-as-code approach (Terraform) in combination with a Kubernetes cluster management tool (Kops), the effort to create new clusters was relatively low. However, after a while we started to notice some possible risks related to this setup. The amount of work required for the setup, and the impact of updates or upgrades on our Kubernetes stack, was too large. At the same time, the number of customers that wanted their very own Kubernetes cluster was growing. So, we needed to make some changes to reduce the maintenance effort on the Kubernetes part of this setup to keep things manageable for ourselves.
Migration to Amazon EKS and Datadog
At this point the managed Kubernetes service from AWS (Amazon EKS) became generally available. We were able to move everything that was managed by Kops into our Terraform code, making things a lot less complex. As an extra benefit, the Kubernetes master nodes are now managed by EKS. This means we have fewer nodes to manage, and EKS also provides cluster upgrades at the touch of a button. Apart from reducing the workload on our Kubernetes management plane, we've also reduced the number of components within our cluster. In the previous setup we were using an EFK (Elasticsearch, Fluentd and Kibana) stack for our logging infrastructure. For our monitoring, we were using a combination of InfluxDB, Grafana, Heapster and Librato. These tools gave us a lot of flexibility but required a lot of maintenance effort, since they all ran within the cluster. We've replaced them all with the Datadog agent, reducing our maintenance workload drastically.
Upgrades in 60 minutes
Furthermore, because of the migration to Amazon EKS and the reduction in the number of components running within the Kubernetes cluster, we were able to reduce the cost and availability impact of our cluster upgrades. With the current stack, using Datadog and Amazon EKS, we can upgrade a Kubernetes cluster within an hour. With the previous stack, it would have taken us about 10 hours on average.
So where are we now?
We currently have 16 Kubernetes clusters up and running, all on the latest available EKS version. Right now, we want to spread our love for Kubernetes wherever we can. Multiple project teams within ACA Group are now using Kubernetes, so we are organizing workshops to help them get up to speed with the technology quickly. At the same time, we also try to keep up with the latest additions to this rapidly changing platform. That's why we attended the KubeCon conference in Barcelona and shared our opinions at our KubeCon Afterglow event.
What's next?
Even though we are very happy with our current Kubernetes setup, we believe there's always room for improvement. During our KubeCon Afterglow event, we had some interesting discussions with other Kubernetes enthusiasts. These discussions helped us define our next steps, bringing our Kubernetes setup to an even higher level. Some things we'd like to improve in the near future:
- add a service mesh to our Kubernetes stack,
- 100% automatic worker node upgrades without application downtime.
Of course, these are just a few focus points. We'll implement many new features and improvements whenever they are released!
What about you?
Are you interested in your very own Kubernetes cluster?
Which improvements do you plan on making to your stack or Kubernetes setup? Or do you have an unanswered Kubernetes question we might be able to help you with? Contact us at cloud@aca-it.be and we will help you out!
Our Kubernetes services
