We learn & share

ACA Group Blog

Read more about our thoughts, views, and opinions on various topics, important announcements, useful insights, and advice from our experts.

Featured

8 MAY 2025
Reading time 5 min

In the ever-evolving landscape of data management, investing in platforms and navigating migrations between them is a recurring theme in many data strategies. How can we ensure that these investments remain relevant and can evolve over time, avoiding endless migration projects? The answer lies in embracing ‘composability’, a key principle for designing robust, future-proof data (mesh) platforms.

Is there a silver bullet we can buy off-the-shelf?

The data-solution market is flooded with vendor tools positioning themselves as the platform for everything, the all-in-one silver bullet. It's important to know that there is no silver bullet. While opting for a single off-the-shelf platform might seem like a quick and easy solution at first, it can lead to problems down the line. These monolithic off-the-shelf platforms often turn out to be too inflexible to support all use cases, not customizable enough, and eventually outdated. This results in big, complicated migration projects to the next silver bullet platform, and organizations ending up with multiple all-in-one platforms, causing disruptions in day-to-day operations and hindering overall progress.

Flexibility is key to your data mesh platform architecture

A complete data platform must address numerous aspects: data storage, query engines, security, data access, discovery, observability, governance, developer experience, automation, a marketplace, data quality, etc. Some vendors claim their all-in-one data solution can tackle all of these. However, such a platform typically excels in certain aspects but falls short in others. For example, a platform might offer a high-end query engine, but lack depth in the data marketplace features included in the solution. To future-proof your platform, it must incorporate the best tools for each aspect and evolve as new technologies emerge. Today's cutting-edge solutions can be outdated tomorrow, so flexibility and evolvability are essential for your data mesh platform architecture.

Embrace composability: Engineer your future

Rather than locking into one single tool, aim to build a platform with composability at its core. Picture a platform where different technologies and tools can be seamlessly integrated, replaced, or evolved, with an integrated and automated self-service experience on top. A platform that is both generic at its core and flexible enough to accommodate the ever-changing landscape of data solutions and requirements. A platform with a long-term return on investment, because it allows you to expand capabilities incrementally and avoid costly, large-scale migrations. Composability enables you to continually adapt your platform capabilities by adding new technologies under the umbrella of one stable core platform layer.

Two key ingredients of composability

- Building blocks: the individual components that make up your platform.
- Interoperability: all building blocks must work together seamlessly to create a cohesive system.

An ecosystem of building blocks

When building composable data platforms, the key lies in sourcing the right building blocks. But where do we get these? Traditional monolithic data platforms aim to solve all problems in one package, but this stifles the flexibility that composability demands. Instead, vendors should focus on decomposing these platforms into specialized, cost-effective components that excel at addressing specific challenges. By offering targeted solutions as building blocks, they empower organizations to assemble a data platform tailored to their unique needs. In addition to vendor solutions, open-source data technologies also offer a wealth of building blocks. It should be possible to combine both vendor-specific and open-source tools into a data platform tailored to your needs. This approach enhances agility, fosters innovation, and allows for continuous evolution by integrating the latest and most relevant technologies.

Standardization as glue between building blocks

To create a truly composable ecosystem, the building blocks must be able to work together; in other words, they must be interoperable. This is where standards come into play, enabling seamless integration between data platform building blocks. Standardization ensures that different tools can operate in harmony, offering a flexible, interoperable platform.

Imagine a standard for data access management that allows seamless integration across various components. It would enable an access management building block to list data products and grant access uniformly. Simultaneously, it would allow data storage and serving building blocks to integrate their data and permission models, ensuring that any access management solution can be effortlessly composed with them. This creates a flexible ecosystem where data access is consistently managed across different systems.

The discovery of data products in a catalog or marketplace can be greatly enhanced by adopting a standard specification for data products. With this standard, each data product can be made discoverable in a generic way. When data catalogs or marketplaces adopt this standard, it provides the flexibility to choose and integrate any catalog or marketplace building block into your platform, fostering a more adaptable and interoperable data ecosystem.

A data contract standard allows data products to specify their quality checks, SLOs, and SLAs in a generic format, enabling smooth integration of data quality tools with any data product. It enables you to combine the best solutions for ensuring data reliability across different platforms. Widely accepted standards are key to ensuring interoperability through agreed-upon APIs, SPIs, contracts, and plugin mechanisms. In essence, standards act as the glue that binds a composable data ecosystem.
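To make the idea of such standards a bit more tangible, below is a small, purely illustrative sketch of what a data contract for a fictitious data product might look like. The field names, thresholds and storage location are invented for this example; real data contract specifications define richer, tool-agnostic schemas.

# Hypothetical data contract for a fictitious "customer_orders" data product:
# which output it exposes, the quality checks it promises, and the SLOs/SLAs
# its consumers can rely on. Only meant to illustrate the concept.
data_contract = {
    "data_product": "customer_orders",
    "owner": "sales-domain-team",
    "output_port": {
        "format": "parquet",
        "location": "s3://example-bucket/customer_orders/",  # invented location
    },
    "quality_checks": [
        {"column": "order_id", "check": "not_null"},
        {"column": "order_total", "check": "minimum", "value": 0},
    ],
    "slo": {"freshness_hours": 24, "completeness_pct": 99.5},
    "sla": {"support_response_hours": 8},
}

# A data quality building block that understands this format could pick up
# the declared checks and run them against the output port, regardless of
# which storage or serving technology sits underneath.
for check in data_contract["quality_checks"]:
    print(f"{data_contract['data_product']}.{check['column']}: {check['check']}")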
A strong belief in evolutionary architectures

At ACA Group, we firmly believe in evolutionary architectures and platform engineering, principles that seamlessly extend to data mesh platforms. It's not about locking yourself into a rigid structure but creating an ecosystem that can evolve, staying at the forefront of innovation. That’s where composability comes in. Do you want a data platform that not only meets your current needs but also paves the way for the challenges and opportunities of tomorrow?

Let’s engineer it together

Ready to learn more about composability in data mesh solutions?
Contact us now!

Read more

All blog posts

Let's talk!

We'd love to talk to you!

Contact us and we'll get you connected with the expert you deserve!

machine learning
Reading time 4 min
6 MAY 2025

Whether we unlock our phones with facial recognition, shout voice commands to our smart devices from across the room or get served a list of movies we might like… machine learning has in many cases changed our lives for the better. However, as with many great technologies, it has its dark side as well. A major one is the massive, often unregulated, collection and processing of personal data. Sometimes it seems that for every positive story, there’s a negative one about our privacy being at risk. It’s clear that we are forced to give privacy the attention it deserves. Today I’d like to talk about how we can use machine learning applications without privacy concerns and without worrying that private information might become public.

Machine learning with edge devices

By placing the intelligence on edge devices on premises, we can ensure that certain information does not leave the sensor that captures it. An edge device is a piece of hardware that is used to process data close to its source. Instead of sending videos or sound to a centralized processor, they are dealt with on the machine itself. In other words, you avoid transferring all this data to an external application or a cloud-based service. Edge devices are often used to reduce latency. Instead of waiting for the data to travel across a network, you get an immediate result. Another reason to employ an edge device is to reduce the cost of bandwidth. Devices that use a mobile network might not operate well in rural areas. Self-driving cars, for example, take full advantage of both these reasons. Sending each video capture to a central server would be too time-consuming, and the total latency would interfere with the quick reactions we expect from an autonomous vehicle.

Even though these are important aspects to consider, the focus of this blog post is privacy. With the General Data Protection Regulation (GDPR) put into effect by the European Parliament in 2018, people have become more aware of how their personal information is used. Companies have to ask for consent to store and process this information. Moreover, violations of this regulation, for instance by not taking adequate security measures to protect personal data, can result in large fines. This is where edge devices excel. They can immediately process an image or a sound clip without the need for external storage or processing. Since they don’t store the raw data, this information becomes volatile. For instance, an edge device could use camera images to count the number of people in a room. If the camera image is processed on the device itself and only the size of the crowd is forwarded, everybody’s privacy remains guaranteed.

Prototyping with Edge TPU

Coral, a sub-brand of Google, is a platform that offers software and hardware tools to use machine learning. One of the hardware components they offer is the Coral Dev Board. It has been announced as “Google’s answer to Raspberry Pi”. The Coral Dev Board runs a Linux distribution based on Debian and has everything on board to prototype machine learning products. Central to the board is a Tensor Processing Unit (TPU), which has been created to run TensorFlow (Lite) operations in a power-efficient way. You can read about TensorFlow and how it helps enable fast machine learning in one of our previous blog posts.

If you look closely at a machine learning process, you can identify two stages. The first stage is training a model from examples so that it can learn certain patterns. The second stage is applying the model’s capabilities to new data. With the dev board above, the idea is that you train your model on cloud infrastructure. That makes sense, since this step usually requires a lot of computing power. Once all the elements of your model have been learned, they can be downloaded to the device using a dedicated compiler. The result is a little machine that can run a powerful artificial intelligence algorithm while disconnected from the cloud.

Keeping data local with Federated Learning

The process above might make you wonder which data is used to train the machine learning model. There are a lot of publicly available datasets you can use for this step. In general, these datasets are stored on a central server. To avoid this, you can use a technique called Federated Learning. Instead of having the central server train the entire model, several nodes or edge devices do this individually. Each node sends updates on the parameters it has learned, either to a central server (Single Party) or to each other in a peer-to-peer setup (Multi Party). All of these changes are then combined to create one global model. The biggest benefit of this setup is that the recorded (sensitive) data never leaves the local node. This has been used, for example, in Apple’s QuickType keyboard to predict emojis, based on the usage of a large number of users. Earlier this year, Google released TensorFlow Federated to create applications that learn from decentralized data.
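To give a feel for the core idea, here is a tiny, self-contained sketch of federated averaging in the single-party setup described above, written with plain NumPy. It is deliberately simplified; real frameworks such as TensorFlow Federated take care of model distribution, communication and privacy protection for you.

import numpy as np

rng = np.random.default_rng(42)

# Each edge device holds its own private data: samples of y = 3x + 1 plus noise.
def make_local_data(n):
    x = rng.uniform(-1, 1, size=n)
    y = 3 * x + 1 + rng.normal(0, 0.1, size=n)
    return x, y

devices = [make_local_data(50) for _ in range(5)]

# Global linear model parameters (weight, bias) shared with every device.
global_w, global_b = 0.0, 0.0

for _ in range(20):  # communication rounds
    local_params = []
    for x, y in devices:
        # Each device refines the global model locally on its own data.
        w, b = global_w, global_b
        for _ in range(10):
            pred = w * x + b
            w -= 0.1 * 2 * np.mean((pred - y) * x)
            b -= 0.1 * 2 * np.mean(pred - y)
        local_params.append((w, b))
    # Only the learned parameters leave the devices; the raw data never does.
    global_w = float(np.mean([w for w, _ in local_params]))
    global_b = float(np.mean([b for _, b in local_params]))

print(f"global model after training: y = {global_w:.2f} * x + {global_b:.2f}")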
Takeaway

At ACA we highly value privacy, and so do our customers. Keeping your personal data and sensitive information private is (y)our priority. With techniques like federated learning, we can help you unleash your AI potential without compromising on data security. Curious how exactly that would work in your organization? Send us an email through our contact form and we’ll soon be in touch.

Read more
What the heck is OAuth 2?
Reading time 9 min
6 MAY 2025

In this blog post, I would like to give you a high-level overview of the OAuth 2 specification. When I started to learn about this, I got lost very quickly in all the different aspects that are involved. To make sure you don’t have to go through the same thing, I’ll explain OAuth 2 as if you don’t even have a technical background. Since there is a lot to cover, let’s jump right in!

The core concepts of security

When it comes to securing an application, there are 2 core concepts to keep in mind: authentication and authorization.

Authentication

With authentication, you’re trying to answer the question “Who is somebody?” or “Who is this user?” You have to look at it from the perspective of your application or your server. They basically have stranger danger. They don’t know who you are, and there is no way for them to know that unless you prove your identity to them. So, authentication is the process of proving to the application that you are who you claim to be. In a real-world example, this would be providing your ID or passport to the police when they pull you over to identify yourself. Authentication is not part of the standard OAuth 2 specification. However, there is an extension to the specification called OpenID Connect that handles this topic.

Authorization

Authorization is the flip side of authentication. Once a user has proven who they are, the application needs to figure out what that user is allowed to do. That’s essentially what the authorization process does. An easy way to think about this is the following example: if you are a teacher at a school, you can access information about the students in your class. However, if you are the principal of the school, you probably have access to the records of all the students in the school. You have broader access because of your job title.

OAuth 2 Roles

To fully understand OAuth 2, you have to be aware of the following 4 actors that make up the specification:

- Resource Owner
- Resource Server
- Authorization Server
- Client / Application

As before, let’s explain it with a very basic example to see how it actually works. Let’s say you have a jacket. Since you own that jacket, you are the Resource Owner and the jacket is the resource you want to protect. You want to store the jacket in a locker to keep it safe. The locker will act as the Resource Server. You don’t own the Resource Server, but it’s holding on to your things for you. Since you want to keep the jacket safe from being stolen by someone else, you have to put a lock on the locker. That lock will be the Authorization Server. It handles the security aspects and makes sure that only you are able to access the jacket, or potentially someone else that you give permission. If you want your friend to retrieve your jacket out of the locker, that friend can be seen as the Client or Application actor in the OAuth flow. The Client is always acting on the user’s behalf.

Tokens

The next concept that you’re going to hear about a lot is tokens. There are various types of tokens, but all of them are very straightforward to understand. The 2 types of tokens that you encounter the most are access tokens and refresh tokens. When it comes to access tokens, you might have heard about JWT tokens, bearer tokens or opaque tokens. Those are really just implementation details that I’m not going to cover in this article. In essence, an access token is something you provide to the resource server in order to get access to the items it is holding for you.

For example, you can see access tokens as paper tickets you buy at the carnival. When you want to get on a ride, you present your ticket to the person in the booth and they’ll let you on. You enjoy your ride and afterwards your ticket expires. Important to note is that whoever has the token owns the token. So be very careful with them. If someone else gets a hold of your token, he or she can access your items on your behalf!

Refresh tokens are very similar to access tokens. Essentially, you use them to get more access tokens. While access tokens are typically short-lived, refresh tokens tend to have a longer expiry date. To go back to our carnival example, a refresh token could be your parents’ credit card that can be used to buy more carnival tickets for you to spend on rides.

Scopes

The next concept to cover is scopes. A scope is basically a description of things that a person can do in an application. You can see it as a job role in real life (e.g. a principal or teacher in a high school). Certain scopes can grant you more permissions than others. I know I said I wasn’t going to get into technical details, but if you’re familiar with Spring Security, you can compare scopes with what Spring Security calls roles. A scope matches one-on-one with the concept of a role. The OAuth specification does not specify what a scope should look like, but often they are dot-separated strings like blog.write. Google, on the other hand, uses URLs as scopes. As an example: to allow read-only access to someone’s calendar, they will provide the scope https://www.googleapis.com/auth/calendar.readonly.

Grant types

Grant types are typically where things start to get confusing for people. Let’s first start with the most commonly used grant types:

- Client Credentials
- Authorization Code
- Device Code
- Refresh
- Password
- Implicit

Client Credentials is a grant type used very frequently when 2 back-end services need to communicate with each other in a secure way. The next one is the Authorization Code grant type, which is probably the most difficult grant type to fully grasp. You use this grant type whenever you want users to log in via a browser-based login form. If you have ever used the ‘Log in with Facebook’ or ‘Log in with Google’ button on a website, then you’ve already experienced an Authorization Code flow without even knowing it! Next up is the Device Code grant type, which is fairly new on the OAuth 2 scene. It’s typically used on devices that have limited input capabilities, like a TV. For example, if you want to log in to Netflix, instead of asking for your username and password, it will show a link and a code, which you then fill in using the mobile app. The Refresh grant type most often goes hand in hand with the Authorization Code flow. Since access tokens are short-lived, you don’t want your users to be bothered with logging in each time the access token expires. So there’s a refresh flow that utilizes refresh tokens to acquire new access tokens whenever they’re about to expire. The last 2 grant types are Password and Implicit. These grant types are less secure options that are not recommended when building new applications. We’ll touch on them briefly in the next section, which explains the above grant types in more detail.

Authorization flows

An authorization flow contains one or more steps that have to be executed in order for a user to get authorized by the system. There are 4 authorization flows we’ll discuss:

- Client Credentials flow
- Password flow
- Authorization Code flow
- Implicit flow

Client Credentials flow

The Client Credentials flow is the simplest flow to implement. It works very similarly to how a traditional username/password login works. Use this flow only if you can trust the client/application, as the client credentials are stored within the application. Don’t use this for single-page apps (SPAs) or mobile apps, as malicious users can deconstruct the app to get a hold of the credentials and use them to get access to secured resources. In most use cases, this flow is used to communicate securely between 2 back-end systems. So how does the Client Credentials flow work? Each application has a client ID and secret that are registered on the authorization server. It presents those to the authorization server to get an access token and uses it to get the secure resource from the resource server. If at some point the access token expires, the same process repeats itself to get a new token.
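To make this a bit more concrete, here is a minimal sketch of the Client Credentials flow using Python's requests library. The token endpoint, API URL, client ID and secret below are invented for illustration; a real authorization server documents its own endpoints and credentials.

import requests

TOKEN_URL = "https://auth.example.com/oauth/token"  # hypothetical authorization server
API_URL = "https://api.example.com/reports"         # hypothetical resource server
CLIENT_ID = "my-backend-service"
CLIENT_SECRET = "keep-this-out-of-source-control"

# Step 1: present the client ID and secret to the authorization server
# and ask for an access token using the client_credentials grant type.
token_response = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials"},
    auth=(CLIENT_ID, CLIENT_SECRET),
    timeout=10,
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# Step 2: call the resource server, passing the access token as a bearer token.
api_response = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
api_response.raise_for_status()
print(api_response.json())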
Password flow

The Password flow is very similar to the Client Credentials flow, but it is much less secure because there’s a 3rd actor involved: an actual end user. Instead of a secure client that we trust presenting an ID and secret to the authorization provider, we now have a user ‘talking’ to a client. In a Password flow, the user provides their personal credentials to the client. The client then uses these credentials to get access tokens from the authorization server. This is the reason why a Password flow is not secure: we must be absolutely sure that we can trust the client not to abuse the credentials for malicious reasons. Exceptions where this flow could still be used are command line applications or corporate websites where the end user has to trust the client apps that they use on a daily basis. But apart from this, it’s not recommended to implement this flow.

Authorization Code flow

This is the flow that you definitely want to understand, as it’s the flow that’s used the most when securing applications with OAuth 2. This flow is a bit more complicated than the previously discussed flows. It’s important to understand that this flow is confidential, secure and browser-based. The flow works by making a lot of HTTP redirects, which is why a browser is an important actor in this flow. There’s also a back-channel request (called like this because the user is not involved in this part of the flow) in which the client or application talks directly to the authorization server. In this flow, the user typically has to approve the scopes or permissions that will be granted to the application. An example could be a 3rd-party application that asks if it’s allowed to have access to your Facebook profile picture after logging in with the ‘Log in with Facebook’ button. Let’s apply the Authorization Code flow to our ‘jacket in the locker’ example to get a better understanding. Our jacket is in the locker and we want to lend it to a friend. Our friend goes to the (high-tech) locker. The locker calls us, as we are the Resource Owner. This call is one of those redirects we talked about earlier. At this point, we establish a secure connection to the locker, which acts as an authorization server. We can now safely provide our credentials to give permission to unlock the lock. The authorization server then provides a temporary code, called an OAuth code, to our friend. The friend then uses that OAuth code to obtain an access token to open the locker and get my jacket.

Implicit flow

The Implicit flow is basically the same as the Authorization Code flow, but without the temporary OAuth code. So after logging in, the authorization server will immediately send back an access token without requiring a back-channel request. This is less secure, as the token could be intercepted via a man-in-the-middle attack.

Conclusion

OAuth 2 may look daunting at first because of all the different actors involved. Hopefully, you now have a better understanding of how they interact with each other. With this knowledge in mind, it will be much easier to grasp the technical details once you start delving into them.

Read more
Reading time 4 min
6 MAY 2025

There are only a few days left until the GDPR goes into effect, and an alarming number of people are panicking and wondering how they can save their marketing and sales from it. But why does everyone feel like that when, in fact, the GDPR is actually saving them?

GDPR won’t crush your marketing & sales dreams

The most important thing we’ve done at ACA IT-Solutions concerning the GDPR was finding that much needed clarity. We have been fortunate enough to work with an amazing external GDPR consultant (if you are reading this Jean-Pierre, thank you for all the help!), who made it all very clear to us, which made me realize this: GDPR will not crush your entire marketing and sales dreams if your company is customer-centric and focuses on delivering value. I am not saying that it wasn’t difficult for us. Being compliant meant a lot of work and will still mean hard work in the future. But we needed to make changes, which actually make a lot of sense. Here’s why I think GDPR is actually a superhero to marketing and sales.

Why the GDPR is a good thing for marketing & sales

The new rules focus on a few key aspects that will improve and evolve marketing and sales to a higher level.

1. Customer centricity

Customer centricity and GDPR go hand in hand. GDPR isn’t here to sabotage all marketing initiatives or to just make our lives a little bit harder. It makes us focus on values such as transparency, data quality and respect for the people we contact. The contacts in our databases will genuinely be interested in what an enterprise has to say. CTR rates of your next emailing will probably go through the roof! You might have to process the sting of losing a large chunk of your database in the initial phase, but after a while you’ll be more than happy with the results. And the opportunity to be the superhero of actually useful and interesting content. 😉

2. Privacy

GDPR-compliant companies are able to guarantee a feeling of trust to people when it comes to their privacy. It’s a huge plus for them if you are not only involved in their privacy, but are also able to convey how exactly you deal with the matter and which measures you take. Think of all the heated privacy scandals of the last few months, such as the Facebook data scandal. It’s the perfect example to show that privacy leaves no one untroubled. A new era has begun when it comes to making personal data publicly available!

3. Build and maintain trust

The honesty and transparency that is required in GDPR-compliant communication allows marketers to once again build and maintain a relationship of trust with prospects and customers. Companies will need to respect the wishes of individuals and need to think about when, why and how people can be contacted. Instead of consumers being suspicious of marketing and sales efforts, we can now guarantee and actually show people our true intentions.

4. Data quality

The dialogue marketing stemming from this new legislation provides individuals more than ever with a voice and makes it easier for them to contact a company. The GDPR also increases the quality of your data. Companies will not only look at whether data is correct, but also at the way they gather and process it. This is unmistakably a big benefit for the quality of the CRM you’ve built up. After all, qualitative and GDPR-compliant data take you to the next level when it comes to the use and maintenance of data within the different departments of a company. In short, it makes us consider ‘doing business responsibly’ in a whole other light.

5. Security

The new legislation and its data security requirements have established worldwide awareness concerning the importance of investments in security and privacy. Businesses all over the world are integrating IT governance, examining the security of their data, thinking about Privacy by Design, preventing data breaches, making data risk assessments, ... Instead of being the violator, organizations now have the opportunity to be the protector of personal data and privacy. An important step that was long overdue.

The GDPR has the potential to make us better in every way

In my honest opinion, the only real conclusion I can draw is that the GDPR has made companies not only think about their responsibility in data privacy and security, but has also led to companies taking real action. GDPR is simply an evolution that can make any organization stronger, smarter and more self-aware.

ACA’s commitment to the GDPR

I already mentioned we first looked for clarity concerning the GDPR at ACA. After understanding the legislation, we needed to fully commit ourselves to it. Of course, marketing and sales aren’t the only domains within ACA that commit to the new legislation and our view on privacy and security. That is why we created an internal GDPR mission statement, which we hold in high regard throughout our company’s activities.

“Faithful to its core values, ACA Group continuously strives to be an honest and discrete leader in data privacy protection, treating all personal data in our ecosystem in an ethical, respectful and pragmatic way.” — Ronny Ruyters, CEO at ACA Group

Interested in our GDPR policies? Go and have a look at our renewed legal page and don’t hesitate to contact me if you have any remarks or questions or just want to chat.

Read more
A day in the life of a Data Protection Officer
Reading time 4 min
5 MAY 2025

In our last blog post about GDPR, we looked at the state of GDPR 8 months after it went into effect. Today, we’ll look at what the job of a Data Protection Officer actually involves. What could a Data Protection Officer (DPO) possibly do besides looking at implementation methods for a European regulation, or answering questions from his customers about the same topic? A day in the life of a DPO, what’s that like?

Data protection impact assessment

A typical day starts at 8:30am in the offices of a customer, where meetings (one after the other) take up the whole morning. Preparation for these meetings is key. There are professionals in front of you: CFOs, legal counsels, CIOs, CEOs, ICT development and ICT infrastructure managers, GDPR coordinators, … These people know their business, so you’d better come prepared! A recent example of such a morning is with a customer where we need to finalize a data protection impact assessment (DPIA). A DPIA is a way to assess the privacy risks of data processing beforehand. The methodology we use is the CNIL application approach. That day, we discuss the consequences of the ‘DPO validation step’ which I prepared the day before. The meeting’s attendees are the COO, the HR director and myself, and although the DPIA did not produce a ‘high or very high risk’ for the assessed processing activity, we find that we do need to define certain actions or mitigations for some smaller risks related to some flaws we found in the process. Being the Data Protection Officer, I had defined the required actions to mitigate each of the documented risks that we found, and these now need to be discussed, approved and added to the action list with deadlines and responsibilities.

Compromise is key...

It is worth noting that a DPO only has an advisory function and does not have the mandate to take decisions. However, if, in this case, the COO or HR director would not agree with one or more of my proposed to-dos and we can’t agree on an alternative with the same result, the company needs to document and motivate the reason(s) why they didn’t follow the DPO’s advice. Fortunately, we had a good meeting with a very good discussion on one of the mitigating actions, with an interesting compromise as a result. This is why the discussion is so important: an external Data Protection Officer needs to understand that the knowledge of the business processes, the business risks, the business value and the commercial proposition is far better known by the company than by themselves, and it’s mandatory to listen to the customer. But, and this is a very important but, it doesn’t mean that we can bend the rules! In this case, we came up with a valid compromise, but in other cases (with another customer) we hadn’t, which implied my advice wasn’t accepted and the required documented motivation was written. As the meeting came to an end sooner than I expected, I had some time left. The marketing manager took this opportunity to discuss the possible impact of the GDPR on the next marketing campaign that was still under development. The campaign itself was very nice and creative, but since interactivity with the (potential) customer was a key part of it, the GDPR indeed had a certain impact. This meeting took a bit longer than “just a quick question”. 😊

... and so is context!

That particular day, I went to our office for the afternoon. When I’m at the office, I mostly prepare customer meetings, review Data Protection Agreements, prepare policies, presentations, trainings (e.g. Privacy by design for IT development) and DSAR (Data Subject Access Request) concepts. Additionally, I answer questions from our customers:

"I have been asked to... Can I do this?"
"I would like to add this functionality to our website. Does the GDPR have any impact on it?"
"We would like to implement an MDM tool. Is that OK?"
"An ex-employee sent a DSAR and would like to receive this specific information. Do we need to give it to them?"

Of course, these are just a few examples. In reality, there are many more questions of all types, all from different companies with different processes, different cultures and policies. The same question may have different answers, depending on the situation or company. Knowing the legislation (and this means more than only the GDPR) is a basic requirement, but unfortunately, that’s not enough. The interpretation for specific situations and knowing how to explain these within different types of companies in such a way that people accept it is one of the more challenging aspects. After all, not everybody loves the GDPR… 💔

Data Protection Officer: a varied and challenging job

Being a Data Protection Officer is a very interesting, challenging job if you’re interested in business processes, data security, lifelong learning, lively discussions and sharing legal views or interpretations. While a lot of the job revolves around the GDPR, it is much more varied than that. I hope I’ve been able to give you some insight into what a DPO does from day to day!

Read more
Kickstart your next project with a pre-built web application architecture
Reading time 6 min
20 JAN 2023

Starting a new web project can be a daunting task, with many different components to consider and configure. For developers, having access to a starting point for building web apps, with all the necessary files and configurations already set up, can certainly come in handy. Not only does it save a lot of time and effort compared to building everything from scratch, it also increases productivity and makes customers happy because they can see tangible results much faster. At ACA Group, we do a lot of similar implementations, and the following requirements are common to most web application projects:

- A great user experience: a fast, responsive and snappy frontend that is flexible enough to implement any sort of user interaction
- Reliable and performant processing: a solid database and backend solution that is easily extendable, testable, maintainable and understandable for any engineer
- User authentication and security: a robust and mature authentication server that also has SSO and user federation, and integrates with a lot of different providers
- Simple and secure deployment: yet easy to develop without too much overhead

Our answer to these recurring requirements is a flexible software base that works out of the box. With a few lines in the terminal, you can spin up a new project that has all of the above functionalities in a basic state, waiting to be expanded and built upon. The figure below illustrates the base of the architecture we often use for small and medium-load web applications, and the different services that play a role. Of course, there are other components in play, but they are more often implemented on a case-by-case basis.

Backend

Let’s start with the brains of the web application: the backend. For our Python team, it is only natural to use this language to build the backbone of the application. FastAPI offers a lot of flexibility in terms of how you implement business logic and design patterns. It is also one of the highest performing backend solutions you can choose in Python; it has great documentation and is backed by a solid community. As a popular choice for projects involving data analytics, machine learning or AI, a Python backend makes it easier to bring cutting-edge technologies closer to the user.
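As a small, purely illustrative taste of what such a backend looks like, here is a minimal FastAPI app. It is not the actual template code; the endpoints and names are invented, and the real starting point also wires up the database, the Keycloak integration and so on.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-app")

class Greeting(BaseModel):
    message: str

@app.get("/health")
def health() -> dict:
    # Simple liveness endpoint, handy behind a reverse proxy like NGINX.
    return {"status": "ok"}

@app.get("/greet/{name}", response_model=Greeting)
def greet(name: str) -> Greeting:
    # Path parameters and response models are validated automatically.
    return Greeting(message=f"Hello, {name}!")

# Run locally with: uvicorn main:app --reload  (assuming this file is named main.py)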
Frontend

To design the user experience, the frontend, we prefer Angular, a mature and well-explored JavaScript framework that is widely used throughout the industry. It is designed to easily create single-page, interactive web applications that can run in any modern web browser. Angular also has an established reputation for good performance and scalability, reducing the risk of running into scalability issues on larger projects. Another advantage is the fact that Angular code is structured and looks a lot like backend code, making it easier to understand for non-frontend developers.

Database and Storage

For data storage, PostgreSQL is a widely used and reliable database management system (DBMS) that is well suited for various applications, including web development. It is known for its performance, particularly when it comes to handling large amounts of data. It can process complex queries efficiently and has a reputation for scaling well as the size of the database increases. It is also feature-rich and has several options for indexing and query optimization.

Security and Authentication

Our secure authentication server is built on Keycloak, a mature IAM solution that helps organizations secure their applications and services. It is not only open source, but also sponsored by the world’s enterprise open source leader Red Hat. It provides a single access point for users to authenticate themselves and authorize access to various resources, and it supports a wide range of authentication mechanisms, such as username and password, two-factor authentication, and social login.

Infrastructure

The next piece of the puzzle is NGINX, which orchestrates and distributes all incoming traffic across the services. It’s a powerful and flexible web server and reverse proxy, often used to handle incoming client requests in a secure and high-performance manner. It is known for its ability to handle a large number of concurrent connections with low resource usage, and is particularly efficient when serving static content like images, CSS and JavaScript files. NGINX can forward requests from clients to one or more services, easily directing traffic to the appropriate component of the web application and distributing the load across multiple servers or services, even if they perform the same role. This also means that all the different services communicate exclusively through NGINX with SSL/TLS protocols, encrypting all traffic and securing sensitive data.

Deployment

Finally, Docker facilitates deployment and development. By containerizing the various components of the app, such as the backend or the database, it becomes much easier to deploy the app on different hosting environments. This is particularly important when clients have different requirements in terms of hosting machines, infrastructure, and so on. With Docker, the app’s services can be packaged in a standardized way and then deployed consistently in different environments. Docker also has benefits for managing the app in production. By placing components in containers, you can easily scale up or down, roll out updates and rollbacks, and monitor the health of the app. This can help to improve app reliability and maintainability over time. For developers, Docker also makes it easier to test the app in a variety of environments, collaborate with team members, and automate tasks like building, testing and deploying the app.

Kickstart a new project 👊

The purpose of this architecture is to provide a starting point for building a web application with all the required components already configured. We’ve packaged it in a template that includes everything you need to get started, so you don’t have to build a starting architecture from scratch. Instead, you can use the template as a foundation and then customize it to fit your specific needs. To use this template, we’ve chosen a tool called Cookiecutter. It only needs to be installed once by the person setting up the initial repository to create a new project based on a template of the architecture above. As part of this process, a few values are asked in order to customize the template, such as the name of the project, the email address of the admin, which features you want to enable, etc. Once you’ve used Cookiecutter to create the project directory, it will contain everything you need to build and run the web application. To start working on the app, you can run a simple Docker command and the web application will be up and running in no time. This enables live development on any part of the application with hot reload, and makes deployment as simple as a few clicks.
Conclusion

All things considered, a pre-built web application architecture like the one described in this blog can be a valuable tool for saving time and effort on any new project. By providing a solid foundation for building a web application, it can help teams get an MVP up and running quickly, without having to start from scratch. In addition to saving time and effort, the combination of the technologies above allows you to be confident that your app will be well equipped to handle a wide range of needs.

Read more
How to secure your cloud with AWS Config
Reading time 6 min
26 FEB 2020

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. This can be used for:

- Security: validate security best practices on your AWS account
- Compliance: report on deviations in configuration for AWS resources, based upon best practices or architectural principles and guidelines
- Efficiency: report on lost or unused resources in your AWS account

In this blog post, I’d like to detail how to monitor your cloud resources with this tool. This first part discusses the AWS Config account setup, enabling notifications when resources are not compliant, and deployment.

Why use AWS Config?

AWS is the main cloud platform we use at ACA. We manage multiple accounts in AWS to host all sorts of applications for ourselves and for our customers. Over the years, we set up more and more projects in AWS. This led to a lot of accounts being created, which in turn use a lot of cloud resources. Naturally, this means that keeping track of all these resources becomes increasingly challenging as well. AWS Config helped us deal with this challenge. We use it to inventorize and monitor all the resources in our entire AWS organization. It also allows us to set compliance rules for our resources that need to be met in every account. For example: an Elastic IP should not be left unused, or an EC2 security group should not allow all incoming traffic without restrictions. This way, we’re able to create a standard for all our AWS accounts. Having AWS Config enabled in your organization gives us a couple of advantages:

- We always have an up-to-date inventory of all the resources in our accounts.
- It allows us to inspect the change history of all our resources 24/7.
- It gives us the possibility to create organization rules and continuously check if our resources are compliant. If that’s not the case, we instantly get a notification.

Setting up AWS Config for a single account

In this first part of my AWS Config blog, I want to show how to set up AWS Config in a single account. In a future blog post, I’ll explain more about how you can do this for an entire AWS organization. The image below shows an overview of the setup in a single account, containing the AWS Config recorder, the AWS Config rules, and the S3 bucket.

The AWS Config recorder is the main component of the setup. You can turn on the default recorder in the AWS console. By default, it will record all resource types. You can find more information about all the available resource types on this page. When you start recording, all the AWS resources are stored in the S3 bucket as configuration items. Recording these configuration items is not free. At the time of writing, it costs $0.003 per recorded configuration item. This cost is generated when the configuration item is first recorded or when something changes to it or one of its relationships. In the settings of the AWS Config recorder, you can also specify how long these configuration items should be stored in the S3 bucket.

The AWS Config rules are the most important part of your setup. These rules can be used as compliance checks to make sure the resources in your account are configured as intended. It’s possible to create custom rules or choose from a large set of AWS managed rules. In our setup at ACA, we chose to only use AWS managed rules, since they fitted all our needs. In the image below, you can see one of the rules we deployed. Just like recording configuration items, running rule evaluations costs money. At the time of writing, this is $0.001 for the first 100,000 rule evaluations per region, $0.0008 from 100,000 to 500,000, and $0.0005 after that. There are a lot of rules available with different benefits to your AWS account. These are some of the AWS managed rules we configured:

Rules that improve security
- AccessKeysRotated: checks if the access keys of an IAM user are rotated within a specified amount of days
- IamRootAccessKeyCheck: checks if the root account has access keys assigned to it, which isn’t recommended
- S3BucketServerSideEncryptionEnabled: checks if default encryption for an S3 bucket is enabled

Rules that detect unused resources (cost reduction)
- Ec2VolumeInuseCheck: checks if an EBS volume is being used
- EipAttached: checks if an Elastic IP is being used

Rules that detect resource optimizations
- VpcVpn2TunnelsUp: checks if a VPN connection has two tunnels available

Setting up notifications when resources are not compliant

AWS Config rules check configuration items. If a configuration item doesn’t pass the rule requirements, it is marked as ‘non-compliant’. Whenever this happens, you want to be notified so you can take the appropriate actions to fix it. In the image below, you can see the way we implemented the notifications for our AWS Config rules. To start with notifications, CloudTrail should be enabled and there should be a trail that logs all activity in the account. CloudWatch is then able to pick up the CloudTrail events. In our setup, we created 5 CloudWatch event rules that send notifications according to priority. This makes it possible for us to decide what the priority level of the alert for each AWS Config rule should be. The image below shows an example of this. In the ‘Targets’ section, you can see the SNS topic which receives the messages of the CloudWatch event rule. Opsgenie has a separate subscription for each of the SNS topics (P1, P2, P3, P4 and P5). This way, we receive notifications when compliance changes happen and also see the severity by looking at the priority level of our Opsgenie alert.

Deploying your AWS Config

At ACA, we try to always manage our AWS infrastructure with Terraform. This is no different for AWS Config. This is our deployment workflow: we manage everything AWS Config related in Terraform. Here’s an example of one of the AWS Config rules in Terraform, in which the rule_identifier attribute value can be found in the documentation of the AWS Config managed rules:

resource "aws_config_config_rule" "mfa_enabled_for_iam_console_access" {
  name                        = "MfaEnabledForIamConsoleAccess"
  description                 = "Checks whether AWS Multi-Factor Authentication (MFA) is enabled for all AWS Identity and Access Management (IAM) users that use a console password. The rule is compliant if MFA is enabled."
  rule_identifier             = "MFA_ENABLED_FOR_IAM_CONSOLE_ACCESS"
  maximum_execution_frequency = "One_Hour"
  excluded_accounts           = "${var.aws_config_organization_rules_excluded_accounts}"
}

The Terraform code is version controlled with Git. When the code needs to be deployed, Jenkins does a checkout of the Git repository and deploys it to AWS with Terraform.
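Next to the console, the compliance results of these rules can also be pulled from a script through the AWS Config API. The boto3 sketch below is purely illustrative and not part of the setup described above; the rule name matches the Terraform example, and credentials and region are taken from your own AWS configuration.

import boto3

# Client for the AWS Config service.
config = boto3.client("config")

# List the overall compliance state of every AWS Config rule in the account.
paginator = config.get_paginator("describe_compliance_by_config_rule")
for page in paginator.paginate():
    for rule in page["ComplianceByConfigRules"]:
        name = rule["ConfigRuleName"]
        compliance = rule["Compliance"]["ComplianceType"]
        print(f"{name}: {compliance}")

# Zoom in on the resources that violate one specific rule.
details = config.get_compliance_details_by_config_rule(
    ConfigRuleName="MfaEnabledForIamConsoleAccess",
    ComplianceTypes=["NON_COMPLIANT"],
)
for result in details["EvaluationResults"]:
    qualifier = result["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
    print(qualifier["ResourceType"], qualifier["ResourceId"])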
Takeaway

With AWS Config, we’re able to get more insights into our AWS cloud resources. AWS Config improves our security, avoids keeping resources around that are not being used and makes sure our resources are configured in an optimal way. Besides these advantages, it also provides us with an inventory of all our resources and their configuration history, which we can inspect at any time.

This concludes this blog post on the AWS Config topic. In a future part, I want to explain how to set it up for an AWS organization. If you found this topic interesting and you have a question, or if you would like to know more about our AWS Config setup, please reach out to us at cloud@aca-it.be.

I want to automatically secure my cloud

Read more