We learn & share

ACA Group Blog

Read more about our thoughts, views, and opinions on various topics, important announcements, useful insights, and advice from our experts.

Featured

8 MAY 2025
Reading time 5 min

In the ever-evolving landscape of data management, investing in platforms and navigating migrations between them is a recurring theme in many data strategies. How can we ensure that these investments remain relevant and can evolve over time, avoiding endless migration projects? The answer lies in embracing 'composability', a key principle for designing robust, future-proof data (mesh) platforms.

Is there a silver bullet we can buy off-the-shelf?

The data-solution market is flooded with vendor tools positioning themselves as the platform for everything, the all-in-one silver bullet. It's important to know that there is no silver bullet. While opting for a single off-the-shelf platform might seem like a quick and easy solution at first, it can lead to problems down the line. These monolithic off-the-shelf platforms often turn out to be too inflexible to support all use cases, not customizable enough, and eventually outdated. The result is big, complicated migration projects to the next silver-bullet platform, and organizations ending up with multiple all-in-one platforms, causing disruptions in day-to-day operations and hindering overall progress.

Flexibility is key to your data mesh platform architecture

A complete data platform must address numerous aspects: data storage, query engines, security, data access, discovery, observability, governance, developer experience, automation, a marketplace, data quality, and more. Some vendors claim their all-in-one data solution can tackle all of these. Typically, however, such a platform excels in certain aspects but falls short in others. For example, a platform might offer a high-end query engine but lack depth in the data marketplace included in the solution. To future-proof your platform, it must incorporate the best tools for each aspect and evolve as new technologies emerge. Today's cutting-edge solutions can be outdated tomorrow, so flexibility and evolvability are essential for your data mesh platform architecture.

Embrace composability: engineer your future

Rather than locking into one single tool, aim to build a platform with composability at its core. Picture a platform where different technologies and tools can be seamlessly integrated, replaced, or evolved, with an integrated and automated self-service experience on top. A platform that is both generic at its core and flexible enough to accommodate the ever-changing landscape of data solutions and requirements. A platform with a long-term return on investment, because it lets you expand capabilities incrementally and avoid costly, large-scale migrations. Composability enables you to continually adapt your platform capabilities by adding new technologies under the umbrella of one stable core platform layer.

Two key ingredients of composability

Building blocks: the individual components that make up your platform.
Interoperability: all building blocks must work together seamlessly to create a cohesive system.

An ecosystem of building blocks

When building composable data platforms, the key lies in sourcing the right building blocks. But where do we get them? Traditional monolithic data platforms aim to solve all problems in one package, but this stifles the flexibility that composability demands. Instead, vendors should focus on decomposing these platforms into specialized, cost-effective components that excel at addressing specific challenges. By offering targeted solutions as building blocks, they empower organizations to assemble a data platform tailored to their unique needs. In addition to vendor solutions, open-source data technologies offer a wealth of building blocks. It should be possible to combine vendor-specific and open-source tools into a data platform tailored to your needs. This approach enhances agility, fosters innovation, and allows for continuous evolution by integrating the latest and most relevant technologies.

Standardization as glue between building blocks

To create a truly composable ecosystem, the building blocks must be able to work together: interoperability. This is where standards come into play, enabling seamless integration between data platform building blocks. Standardization ensures that different tools can operate in harmony, offering a flexible, interoperable platform.

Imagine a standard for data access management that allows seamless integration across various components. It would enable an access management building block to list data products and grant access uniformly. At the same time, it would allow data storage and serving building blocks to integrate their data and permission models, ensuring that any access management solution can be effortlessly composed with them. This creates a flexible ecosystem where data access is consistently managed across different systems.

The discovery of data products in a catalog or marketplace can be greatly enhanced by adopting a standard specification for data products. With such a standard, each data product can be made discoverable in a generic way. When data catalogs or marketplaces adopt it, you have the flexibility to choose and integrate any catalog or marketplace building block into your platform, fostering a more adaptable and interoperable data ecosystem.

A data contract standard allows data products to specify their quality checks, SLOs, and SLAs in a generic format, enabling smooth integration of data quality tools with any data product. It lets you combine the best solutions for ensuring data reliability across different platforms.

Widely accepted standards are key to ensuring interoperability through agreed-upon APIs, SPIs, contracts, and plugin mechanisms. In essence, standards act as the glue that binds a composable data ecosystem.

A strong belief in evolutionary architectures

At ACA Group, we firmly believe in evolutionary architectures and platform engineering, principles that extend seamlessly to data mesh platforms. It's not about locking yourself into a rigid structure, but about creating an ecosystem that can evolve and stay at the forefront of innovation. That's where composability comes in. Do you want a data platform that not only meets your current needs but also paves the way for the challenges and opportunities of tomorrow?

Let's engineer it together

Ready to learn more about composability in data mesh solutions?
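To make the 'standards as glue' idea a bit more concrete, here is a minimal, hypothetical sketch in Python: an access-management interface that any vendor or open-source building block could implement, so the platform core never depends on one specific tool. All names (AccessManager, InMemoryAccessManager, the product identifiers) are illustrative assumptions, not references to an existing product API.

```python
from typing import Protocol


class AccessManager(Protocol):
    """Hypothetical standard interface (SPI) for the access-management building block."""

    def list_data_products(self) -> list[str]:
        """Return the identifiers of all discoverable data products."""
        ...

    def grant_access(self, data_product: str, principal: str) -> None:
        """Grant a user or team access to a data product."""
        ...


class InMemoryAccessManager:
    """Toy implementation; a vendor or open-source tool would provide a real one."""

    def __init__(self) -> None:
        self._grants: dict[str, set[str]] = {"sales.orders": set()}

    def list_data_products(self) -> list[str]:
        return sorted(self._grants)

    def grant_access(self, data_product: str, principal: str) -> None:
        self._grants.setdefault(data_product, set()).add(principal)


def onboard_analyst(platform_access: AccessManager, analyst: str) -> None:
    # The platform core only knows the standard interface, so the concrete
    # access-management building block can be swapped without touching this code.
    for product in platform_access.list_data_products():
        platform_access.grant_access(product, analyst)


onboard_analyst(InMemoryAccessManager(), "analyst@example.com")
```

Because the core only depends on the agreed-upon interface, replacing the access-management component becomes a configuration change rather than a migration project.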

Read more

All blog posts

Let's talk!

We'd love to talk to you!

Contact us and we'll get you connected with the expert you deserve!


woman thinking
Reading time 7 min
8 MAY 2025

In software development, assumptions can have a serious impact and we should always be on the lookout. In this blog post, we talk about how to deal with assumptions when developing software.

Imagine… you've been driving to a certain place

A place you have been driving to every day for the last 5 years, taking the same route, passing the same abandoned street, where you've never seen another car. Gradually you start feeling familiar with this route and you assume that, as always, you will be the only car on this road. But then at a given moment a car pops up right in front of you… there had been a side street all this time, but you had never noticed it, or maybe forgot all about it. You hit the brakes and fortunately come to a stop just in time. Assumption nearly killed you. Fortunately, in our job the assumptions we make are never as hazardous to our lives as the assumptions we make in traffic. Nevertheless, assumptions can have a serious impact and we should always be on the lookout.

Imagine… you create websites

Your latest client is looking for a new site for his retirement home, because his current site is outdated and not that fancy. So you build a fancy new website based on the assumption that fancy means: modern design, social features, dynamic content. The site is not the success he had anticipated… strange… you have built exactly what your client wants. But did you build what the visitors of the site want? The average user is between 50 and 65 years old, looking for a new home for their mom and dad. They are not digital natives and may not feel at home surfing a fancy, dynamic website filled with Twitter feeds and social buttons. All they want is to get a good impression of the retirement home and reassurance that it will take good care of their parents. The more experienced you get, the harder you have to watch out not to make assumptions, and to double-check with your client AND the target audience.

Another well-known peril of experience is "the curse of knowledge". Although it sounds like the next Pirates of the Caribbean sequel, the curse of knowledge is a cognitive bias that overpowers almost everyone with expert knowledge in a specific sector. It means better-informed parties find it extremely difficult to think about problems from the perspective of lesser-informed parties. You might wonder why economists don't always succeed in making the correct stock-exchange predictions. Everyone with some cash to spare can buy shares; you don't need to be an expert or even understand economics. And that's the major reason why economists are often wrong: because they have expert knowledge, they can't see past this expertise and have trouble imagining how lesser-informed people will react to changes in the market. The same goes for IT. That's why we always have to keep an eye out and never stop putting ourselves in the shoes of our clients. Gaining insight into their experience and point of view is key to creating the perfect solution for the end user.

So how do we tackle assumptions?

I would like to say "simple" and give you a wonderful one-liner… but as usual, simple is never the correct answer. To manage the urge to switch to autopilot and let the curse of knowledge kick in, we've developed a methodology based on several Agile principles which forces us to involve our end user in every phase of the project, starting when our clients are thinking about a project but haven't defined the solution yet. And ending… well, actually never. The end user will gain new insights working with your solution, which may lead to new improvements.

In the waterfall methodology, an analysis is made upfront at the start of a project by a business analyst. Sometimes the user is involved in this upfront analysis, but not always. Then a conclave of developers creates something in solitude, and after the white smoke… user acceptance testing (UAT) starts. It must be painful to realise after these tests that the product they carefully crafted isn't the solution the users expected it to be. It's too late to make vigorous changes without needing much more time and budget.

An Agile project methodology will take you a long way. By releasing testable versions every 2 to 3 weeks, users can gradually test functionality and give their feedback during development. This approach incorporates the user's insights gained throughout the project and guarantees a better match between the needs of the user and the solution you create. Agile practitioners advocate 'continuous deployment': a practice where newly developed features are deployed immediately to a production environment instead of in batches every 2 to 3 weeks. This enables us to validate the system (and in essence its assumptions) in the wild, gain valuable feedback from real users, and run targeted experiments to validate which approach works best. Combining our methodology with constant user involvement will make sure you eliminate the worst assumption in IT: "we know how the employees do their job and what they need"… the peril of experience!

Do we always eliminate assumptions?

Let me make it a little more complicated. Again… imagine: you've been going to the same supermarket for the last 10 years. It's pretty safe to assume that the cereal is still in the same aisle, even on the same shelf as yesterday. If you stopped assuming where the cereal is, you would lose a huge amount of time browsing through the whole store. Not just once, but over and over again. The same goes for our job. If we did our job without relying on our experience, we would not be able to make estimations about budget and time. Every estimation is based upon assumptions. The more experienced you are, the more accurate these assumptions will become. But do they lead to good and reliable estimations? Not necessarily… Back to my driving metaphor: we take the same road to work every day. Based upon experience, I can estimate it will take me 30 minutes to drive to work. But what if traffic jams have been announced on the radio and I haven't heard the announcement? My estimation will not have been correct.

At ACA Group, we use a set of key practices while estimating. First of all, it is a team sport. We never make estimations on our own, and although estimating is serious business, we do it while playing a game: planning poker. Let me enlighten you: planning poker is based upon the principle that we are better at estimating in group. We read the story (a chunk of functionality) out loud, everybody takes a card (which represents an indication of complexity) and puts it face down on the table. When everybody has chosen a card, they are all flipped at once. If different numbers are shown, a discussion starts on the why and how. The assumptions that form the basis for each estimate surface, and are discussed and validated. Another estimation round follows, and the process continues until consensus is reached.
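As a rough illustration of that loop, here is a hypothetical sketch (not a tool we use; the names, card values and example rounds are invented):

```python
def planning_poker(rounds_of_estimates: list[dict[str, float]]) -> float:
    """Return the agreed estimate once every team member shows the same card.

    Each element of rounds_of_estimates maps a team member to the card they
    chose that round; differing cards trigger a discussion and a new round.
    """
    for round_nr, estimates in enumerate(rounds_of_estimates, start=1):
        values = set(estimates.values())
        if len(values) == 1:
            return values.pop()  # consensus reached
        print(f"Round {round_nr}: {estimates} -> surface assumptions, discuss, re-estimate")
    raise RuntimeError("No consensus yet: clarify or split the story")


# Hypothetical session for the story "Invite a friend to the service":
print(planning_poker([
    {"An": 2, "Bram": 5, "Chloé": 3},   # assumptions differ, discussion starts
    {"An": 3, "Bram": 3, "Chloé": 3},   # consensus once the assumptions are validated
]))
```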
The end result: a better estimate and a thorough understanding of the assumptions surrounding it. These explicit assumptions are there to be validated by our stakeholders; a great first tool to validate our understanding of the scope. So do we always eliminate assumptions? Well, that would be almost impossible, but making assumptions explicit eliminates a lot of waste. Want to know more about Agile estimation? Check out this book by Mike Cohn.

Hey! This is a contradiction…

So what about these assumptions? Should we try to avoid them? Or should we rely on them? If you assume you know everything, you will never again experience astonishment. As Aristotle already said: "It was their wonder, astonishment, that first led men to philosophize." A process that validates the assumptions made, through well-conducted experiments and rapid feedback, has proven to yield great results. So in essence, managing your assumptions well will produce wonderful things. Be aware, though, that the curse of knowledge is lurking around the corner, waiting for an unguarded moment to take over.

Interested in joining our team? We are always looking for new motivated professionals to join the ACA team!

Read more
Mob programming in a meeting room
Reading time 4 min
8 MAY 2025

ACA does a lot of projects. In the last quarter of 2017, we did a rather small project for a customer in the financial industry. The deadline for the project was at the end of November and our customer was getting anxious near the end of September. We were confident we could pull off the job on time, though, and decided to try out an experiment. We got the team together in one room and started mob programming.

Mob what?

We had read an article that explains the concept of mob programming. In short, mob programming means that the entire team sits together in one room and works on one user story at a time. One person is the 'driver' and does the coding for a set amount of time. When that time has passed, the keyboard switches to another team member. We tried the experiment with the following set-up:

Our team was relatively small and only had 4 team members. Since the project we were working on was relatively small, we could only assign 4 people.
The user stories handled were only a part of the project. Because this was an experiment, we did not want the project, as small as it was, to be mobbed completely. Hence, we chose one specific epic and implemented those user stories in the mob.
We did not work on the same computer. We each had a separate laptop and checked in our code to a central versioning system instead of switching the keyboard. This wasn't really a choice we made, just something that happened.
We switched every 20 minutes. The article we referred to talks about 12, but we thought that would be too short and decided to go with 20 minutes instead.

Ready, set, go!

We spent more than a week inside a meeting room where we could, in turn, connect our laptops to one big screen. The first day of the experiment, we designed. We stood at the whiteboard for hours deciding on the architecture of the component we were going to build. On the same day, our mob started implementing the first story. We really took off! We flew through the user story, calling out to our customer proxy when some requirements were not clear. Near the end of the day, we were exhausted. Our experiment had only just started and it was already so intense. The next days, we continued implementing the user stories. In less than a week, we had working software that we could show to our customer. While it wasn't perfect yet and didn't cover all requirements, our software was able to conduct a full, happy-path flow after merely 3 days. Two days later, we implemented enhancements and exception cases discussed in other user stories. Only one week had passed since our customer started getting anxious, and we had already implemented enough to show him.

Finishing touches

Near the end of the project, we only needed to take care of some technicalities. One of those was making our newly built software environment agnostic. If we had finished this user story with pair programming, only one pair would know all the technical details of the software. With mob programming, we did not need to showcase it to the rest of the team; the team already knew. Because we switched laptops instead of keyboards, everyone had done the setup on their own machine. Everyone knew the commands and the configuration. It was knowledge sharing at its best! Other technicalities included configuring our software correctly. This proved to be a boring task for most of the navigators. At this point, we decided the mob experiment had gone far enough. We felt that we were not supposed to do tasks like these with 4 people at the same time. At least, that's our opinion. Right before the mob disbanded, we planned an evaluation meeting. We were excited and wanted to do this again, maybe even at a bigger scale.

Our experience with mob programming

The outcome of our experiment was very positive. We experienced knowledge sharing at different levels. Everyone involved knew the complete functionality of the application and we all knew the details of the implementation. We were able to quickly integrate a new team member when necessary, while still working at a steady velocity. We already mentioned that we were very excited before, during and after the experiment. This had a positive impact on our team spirit. We were all more engaged to fulfill the project. The downside was that we experienced mob programming as more exhausting. We felt worn out after a day of being together, albeit in a good way!

Next steps

Other colleagues noticed us in our meeting room, programming on one big screen. Conversations about the experiment started. Our excitement was contagious: people were immediately interested. We started talking about doing more experiments. Maybe we could do mob programming in different teams on different projects. And so it begins… Have you ever tried mob programming? Or are you eager to try? Let's exchange tips or tricks! We'll be happy to hear from you!
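If you want to try the rotation mechanics yourself, here is a tiny, hypothetical helper sketch (the 20-minute slot matches what we used; the names and session length are invented):

```python
from itertools import cycle

TEAM = ["An", "Bram", "Chloé", "Dries"]
SLOT_MINUTES = 20  # we switched every 20 minutes instead of the suggested 12


def driver_schedule(session_minutes: int) -> list[tuple[int, str]]:
    """Return (start_minute, driver) pairs for one mob session; everyone else navigates."""
    drivers = cycle(TEAM)
    return [(start, next(drivers)) for start in range(0, session_minutes, SLOT_MINUTES)]


for start, driver in driver_schedule(120):  # a two-hour mob block
    print(f"{start:3d} min: {driver} drives, the rest navigate")
```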

Read more
Reading time 4 min
8 MAY 2025

OutSystems: a catalyst for business innovation

In today's fast-paced business landscape, organisations must embrace innovative solutions to stay ahead. Many strategic technological trends address crucial business priorities such as digital immunity, composability, AI, platform engineering, Low-Code, and sustainability. OutSystems, the leading Low-Code development platform, has become a game-changer in supporting organisations to implement these trends efficiently and sustainably.

OutSystems enhances cyber security

As organisations increasingly rely on digital systems, cyber threats pose a significant risk. Additionally, digital engagement with customers, employees, and partners plays a vital role in a company's well-being. The immunity and resilience of an organisation is now only as strong and stable as its core digital systems. Any unavailability can result in a poor user experience, revenue loss, safety issues, and more. OutSystems provides a robust and secure platform that helps build digital immune systems, safeguarding against evolving cybersecurity challenges. With advanced threat detection, continuous monitoring, secure coding practices, and AI code-scanning, OutSystems ensures applications are resilient and protected. Furthermore, the platform covers most of the security aspects for project teams, enabling them to focus on delivering high value to end customers while best practices are recommended by the platform through code analysis using built-in patterns.

OutSystems simplifies cloud-native infrastructure management

Cloud-native architecture has emerged as a vital component for modern application development. The OutSystems Developer Cloud Platform enables teams to easily create and deploy cloud-native applications, leveraging the scalability and flexibility of cloud infrastructure through Kubernetes. It allows companies to:

Optimise resource utilisation
Auto-scale application runtimes
Reduce operational costs
Adopt sustainable practices (serverless computing, auto-scaling, …)

All this without the need for prior infrastructure investment, the deep technical knowledge required to operate it, or the typical burdens associated with it.

OutSystems: gateway to AI and automation

AI and hyper-automation have become essential business tools for assisting in content creation, virtual assistants, faster coding, document analysis, and more. OutSystems empowers professional developers to be more productive by infusing AI throughout the application lifecycle. Developers benefit from AI-assisted development, natural language queries, and even generative AI. Once your development is ready, transporting an app to the test or production environment only takes a few clicks. The platform highly automates the process and even performs all the necessary validations and dependency checks to ensure unbreakable deployments. OutSystems seamlessly integrates with AI capabilities from major cloud providers like Amazon, Azure (OpenAI), and Google, allowing project teams to leverage generative AI, machine learning, natural language processing, and computer vision. By making cutting-edge technologies more accessible, OutSystems accelerates digital transformation and creates sustainable competitive advantages.

OutSystems enables composable architecture for agility

Composable architecture and business apps, characterised by modular components, enable rapid adaptation to changing business needs. OutSystems embraces this trend by providing a cloud-native Low-Code platform that uses and supports this type of architecture. It enables teams to easily build composable technical and business components. With the visual modelling approach of Low-Code, a vast library of customizable pre-built components and a micro-service-based application delivery model, OutSystems promotes high reusability and flexibility. This composable approach empowers organisations to:

Respond rapidly to changing business needs
Experiment with new ideas
Create sustainable, scalable, and resilient solutions

OutSystems enables the creation of business apps that can be easily integrated, replaced, or extended, supporting companies on their journey towards composability and agility.

OutSystems facilitates self-service and close collaboration

Platform engineering, which emphasises collaboration between development and operations teams, drives efficiency and scalability. OutSystems provides a centralised Low-Code platform that embraces this concept at its core and is continuously extended with new features, tools and accelerators. Furthermore, the platform facilitates the entire application development lifecycle up to and including operations, with features like:

Version control
Automated deployment
Continuous integration and delivery (CI/CD)
Logging
Monitoring

This empowers organisations to adopt agile DevOps practices. With OutSystems, cross-functional teams can collaborate seamlessly, enabling faster time-to-market and improved software quality. By supporting platform engineering principles, OutSystems helps organisations achieve sustainable software delivery and operational excellence.

OutSystems drives sustainability in IT

OutSystems leads the way in driving sustainability in IT through its green-IT Low-Code application development platform and strategic initiatives. By enabling energy-efficient development, streamlining application lifecycle management, leveraging a cloud-native infrastructure, and promoting reusability, OutSystems sets an example for the industry. Organisations can develop paperless processes, automate tasks, modernise legacy systems, and simplify IT landscapes using OutSystems 3 to 4 times faster, reducing overall costs and ecological footprint. By embracing OutSystems, companies can align their IT operations with a greener future, contribute to sustainability, and build a more resilient planet.

Wrapping it up

In the era of digital transformation and sustainability, OutSystems is a powerful ally for organisations, delivering essential business innovations such as:

High-performance Low-Code development
Cloud-native architecture
AI and automation
Robust security measures
Collaborative DevOps practices

Take the OutSystems journey to align with IT trends, deliver exceptional results, and contribute to a sustainable and resilient future. Eager to start with OutSystems? Let us help

Read more
iterative product discovery
Reading time 10 min
8 MAY 2025

Suppose you're a Product Owner in an Agile delivery team, ready to get working on your product. In order to do that, you'll need to give your development team a goal and a list of things to work on. If you use SCRUM as the framework, you'd call that list of things to work on a Product Backlog. From the SCRUM guide:

The Product Backlog is an ordered list of everything that is known to be needed in the product. It is the single source of requirements for any changes to be made to the product. The Product Backlog lists all features, functions, requirements, enhancements, and fixes that constitute the changes to be made to the product in future releases. Product Backlog items have the attributes of a description, order, estimate, and value. Product Backlog items often include test descriptions that will prove its completeness when "Done".

Okay, so you need to start with a backlog, prioritize as you see fit and then build the highest-priority items first. SCRUM, and Agile development in general, is both incremental and iterative (see here or here for a good explanation of those terms). Let's illustrate this with an example. When a customer orders a car, and we use SCRUM to implement it, we might iterate on the chassis and incrementally build the car. However, as a Product Owner (the role you take as a Product Manager), you're left with a few questions:

Where does the backlog come from?
How can I be sure that it solves a need of the end user?
How can I be sure that the development team will actually deliver my interpretation of the requirement?
How can I be sure that I adapt to changes and still give my architect a clear (less evolving) set of parameters to work on?

In this blog post, we'll try to gradually answer those questions.

Focus on the customer's underlying need

Let's start with the second question on our list: how can I be sure that it solves a need of the end user? Henrik Kniberg correctly points out:

We start with the same context – the customer ordered a car. But this time we don't just build a car. Instead we focus on the underlying need the customer wants fulfilled. Turns out that his underlying need is "I need to get from A to B faster", and a car is just one possible solution to that.

Said plainly, this means that you first need to unravel the real problem and adapt as you see fit. The unraveling of the problem is iterative in nature: as long as you don't get confirmation that you're solving a real need, you need to change and adapt. You present the end user with a best guess of what a solution might look like. As long as the user is not satisfied, you have to dig deeper and come up with alternatives. Ultimately, you hit the sweet spot and get confirmation that you understood the problem. This confirmation can be based on real usage data, qualitative insights or a combination of both.

SCRUM doesn't indicate how you should implement Henrik's idea. My understanding is that there is a preference for developing software and focusing the sprint on ending with an increment. This means working software is used as an artefact during the iteration towards defining the problem. The drawback, however, is that the team needs to rework already implemented increments, as they need to adapt to new insights. I checked the software development wastes and identified the potential ones:

Estimation not possible due to poor analysis/information gathering (Partially Done Work).
Technical complexity not analyzed properly (Partially Done Work), because it doesn't make sense to do so during an iteration.
Wrong choice of technology or solution (Defects), because the non-functional requirements are a moving target.
Redesigning due to missed requirements (Defects).

These are still concerns to be addressed, and they can be solved by a proper product discovery step. Let's first revisit the questions a Product Owner might be left with and indicate which ones we've solved:

Where does this backlog come from?
How can I be sure that it solves a need of the end user? ✅ We start with an idea and iterate until we find a solution that fulfills a need. We keep doing this until we get a confirmation.
How can I be sure that the development team will actually deliver my interpretation of the requirement?
How can I be sure that I adapt to changes and still give my architect a clear (less evolving) set of parameters to work on? ❌ This makes the problem bigger: the development team now needs to do (at least some) more rework to cover all the changes.

Discover your user's needs

In order to eliminate the potential wastes in Henrik's example, you need to split the iterative part (finding out what the user needs) and the building of the software (crafting a solution for the problem). Finding out what the user needs implies a so-called discovery step. Marty Cagan, the "most influential person in the product space", agrees:

In Henrik's example, the team is working to simultaneously determine the right product to build, and at the same time to build that product. And they have one main tool at their disposal to do that: the engineers. So what you see is a progressively developed product, with an important emphasis on getting something we can test on real users at each iteration. Henrik is emphasizing that they are building to learn, but they are doing this with the only tool they think they have: engineers writing code. If you read carefully, Henrik does mention that the engineers don't actually need to be building products – they could be building prototypes, but with most teams I meet, this point is lost on them because they don't understand that there are many forms of prototypes, most of which are not meant to be created by engineers.

That's right: writing software might be a good solution to find out what the user wants, but more often than not, it is the most expensive way to get there. With a discovery step, you can focus on the objectives of a product: is it valuable, usable, feasible and viable? This step is mostly iterative. The end result is a clear definition of the problem and hence a possible solution to that problem; one that has a high probability of solving a user's need. Because you took a discovery step, you can describe the real problem to the engineering team. You can also describe the problem in more detail and spell out what kind of properties you expect from the end solution, e.g. a fitness function which places the solution in context. The development team then figures out how to build that solution. This is the delivery step and it is mostly incremental.

Product discovery track versus delivery track

With the product discovery and delivery steps, there are two tracks the team must consider: a product discovery track and a delivery track. Let's revisit the Product Owner's questions a third time and indicate which ones we've solved:

Where does this backlog come from?
How can I be sure that it solves a need of the end user? ✅ ✅ We start with an idea and iterate until we find a solution that fulfills a need. We should really spend more time with our end users to really understand their problem. We should try to do this as fast as possible so that the engineering team doesn't run out of work.
How can I be sure that the development team will actually deliver my interpretation of the requirement? In terms of problem description, I'm able to provide more detail and give more context.
How can I be sure that I adapt to changes and still give my architect a clear (less evolving) set of parameters to work on? The engineers really know it's going to be a car before they come up with a suitable architecture up to the task, and they may in fact choose to implement this architecture with a different delivery strategy than one designed to learn fast. As the risks are tackled in discovery and the necessary delivery work becomes clear, that delivery work progresses much faster than it would otherwise.

Dual Track: product discovery and delivery happen simultaneously

Discovery work focuses on fast learning and validation. But how do you chain that together with delivery work? When a project starts, you might do discovery first and then, somewhat later, let the engineering team start. That looks a lot like Waterfall! But it doesn't have to be like this. Jeff Patton, a veteran Product Manager and writer of the book User Story Mapping, says it is all happening at once (Dual Track). Instead of a waterfall, we get a loop. A discovery learning loop starts by describing what we believe is the problem we're trying to solve and for whom, the solution to solve the problem, and how we'd measure its success. Product discovery loops chain together, and discovery work uses irregular cycle lengths. It's 'lean' in the sense that we're trying to make the discovery cycle as short as possible. Ideas in discovery mutate and very often get abandoned, which is the best way to move forward into more deliberate development cycles.

Takeaway

In order to eliminate potential risks, you need to be aware of both the discovery and delivery steps. The discovery step focuses on the objectives of a product — value, usability, feasibility and viability — and is mostly iterative. The delivery step focuses on how to build a product and is mostly incremental. While the steps are separate, the tracks happen at the same time, which we call Dual Track. Jeff Patton's article on Dual Track also mentions a few important points. These two tracks are two types of work performed by the same team. Discovery must check regularly with the team on feasibility, correct implementation and technical constraints that affect design decisions. Delivery uses insights from discovery during development. Discovery focuses on the riskiest assumptions first, i.e. user problems of which we have low confidence that they solve a real user need (or the need is unknown). Delivery focuses on solutions for which we have high confidence that they solve a user's problem. The confidence might come from validated learning during discovery. Or we start, e.g. at the beginning of a cycle, with backlog items for which we think no discovery is needed, such as technical work or features of which we already have high confidence that they solve a user problem. The two tracks are done in parallel:

Discovery / Iteration: focus on the riskiest assumptions (problems of which we have the least knowledge) and try to define them with the least effort possible.
Delivery / Increment: focus on well-defined solutions (we have high confidence the end user needs them) and tackle the ones with the highest value first.

The tracks don't stop as soon as you've discovered one problem to fix. The next problem is already waiting around the corner. In our example with the customer ordering a car ("I need to get from A to B faster"), we might also think the user needs shelter, which can range from a simple hut to a castle. A final remark: this blog post started with SCRUM as the delivery framework, but the methodology doesn't really matter: it could be Kanban, Crystal, XP or even Waterfall, as long as the team is able to keep pace in the two tracks. But of course, since the discovery part focuses on learning fast, an Agile methodology suits better.

Left-over questions

Let's revisit the Product Owner's questions one last time and indicate which ones are solved now.

Where does this backlog come from? ✅ The backlog comes from assumptions which are validated during discovery. However, there is a new question: how do we identify the list of assumptions to focus on? Where does the list of Opportunities come from? ❓
How can I be sure that it solves a need of the end user? ✅✅✅ We start with an idea and iterate until we find a solution that fulfills a need. I should really spend more time with my end users and really understand their problem. I should try to make this as fast as possible, so that the engineering team does not run out of work. Do this in parallel, with the whole team.
How can I be sure that the development team will actually deliver my interpretation of the requirement? ✅ In terms of problem description, I can give more detail and more context. Involve (at least part of) your engineering team in discovery. This way, they get in touch with the real end user and get a better understanding of the user's real world.
How can I be sure that I adapt to changes and still give my architect a clear (less evolving) set of parameters to work on? ✅ The engineers really know it's going to be a car before they come up with a suitable architecture up to the task. They may in fact choose to implement this architecture with a different delivery strategy than one designed to learn fast. The risks are tackled in discovery and the necessary delivery work becomes clear. The delivery work can now progress much faster than it would otherwise.
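To make the split between the two tracks concrete, here is a minimal, hypothetical sketch of how a team might sort backlog items over discovery and delivery based on how confident they are that an item solves a real user need. The items, the confidence scores and the 0.7 threshold are invented for illustration only.

```python
from dataclasses import dataclass


@dataclass
class BacklogItem:
    title: str
    confidence: float  # 0.0 = pure assumption, 1.0 = validated user need
    value: int         # relative business value


def split_tracks(backlog: list[BacklogItem], threshold: float = 0.7):
    # Discovery iterates on the riskiest assumptions first (lowest confidence);
    # delivery builds the well-understood items with the highest value first.
    discovery = sorted((i for i in backlog if i.confidence < threshold),
                       key=lambda i: i.confidence)
    delivery = sorted((i for i in backlog if i.confidence >= threshold),
                      key=lambda i: i.value, reverse=True)
    return discovery, delivery


backlog = [
    BacklogItem("Faster way to get from A to B", confidence=0.2, value=8),
    BacklogItem("Shelter, from hut to castle", confidence=0.1, value=5),
    BacklogItem("Technical work: set up CI pipeline", confidence=0.9, value=3),
]
discovery, delivery = split_tracks(backlog)
```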

Read more
rockstar
Reading time 4 min
6 MAY 2025

Estimating the effort necessary to develop certain functionalities when writing software gives your customers some certainty and predictability. That being said: making software development estimates is usually not the most popular part of a developer’s job. However, we’ve found a way to gamify development estimates and make them a lot more fun, without sacrificing accuracy. In this blog post, we’ll teach you how to make software development estimates fun with rockstar planning poker. Making estimates for predictability Whenever a feature is clearly defined, it gets split up into user stories . Here’s an example of such a user story: “As a user of this service, I want to invite my friends so that we might enjoy the service together.” Before we start the development of any user story, we estimate the effort we think it’ll take. This way, we’re able to estimate its complexity in a fairly detailed manner and give our customers a certain level of predictability in advance. To be able to do so, we measure how many days it takes us to complete one story point . Story points are a unit of measure for expressing an estimate of the overall effort that’s required to fully implement a user story. The effort is the average number of days one team member needs to complete a story point over a certain time period. The effort multiplied with our team’s capacity gives us an idea of the team’s story throughput , the amount of story points a team can develop over a given period of time. If you extrapolate the story throughput, you can get a clear predictability of the scope you can realize with a team over time. At this stage in the development process, we don’t quite know the intricate details of a user story yet. However, we’ve already done our ‘homework’ and know enough to accurately estimate the complexity of the development of the user story. Estimating with planning poker Planning poker is an ideal way to get to detailed estimates. This way of estimating was described by Mike Cohn in his book Agile Estimating and Planning . During a planning poker session, a user story is estimated by the team that will be working on it. First, the product manager explains what we want to achieve with the user story. Then, the team discusses about what exactly needs to be done to get there, until they reach a consensus about the story. After that, every team member uses ‘planning cards’ to individually estimate the effort required to complete the story. At the count of three, every team member simultaneously turns around their planning card to reveal their estimation in story points. If there are any major differences, the team continues to discuss the story’s complexity until a new consensus is reached. At ACA, we use a series of custom cards to denote a story’s complexity in story points. We have cards with the numbers 0.5 – 1 – 1.5 – 2.5 and 4. However, over time we’ve noticed that stories estimated to be 2.5 or 4 story points introduce more workload and uncertainty, which in turn comprises the predictability towards the customer. Now, all stories that are estimated to be more than 1.5 story points are split up into smaller parts. We’ve therefore limited the numbers on our cards to just 0.5 – 1 and 1.5. So what about rockstar planning poker? Most technical people aren’t very fond of making estimates. Estimation session are tiring and demand a lot of energy, even when using planning poker to gamify the planning process. 
To liven up those sessions, we’ve been using something we call rockstar planning poker for a few years now. Instead of using cards to denote story points, we use our hands. Just like in ‘rock, paper, scissors’, we all count to three and then show our hands to make one of the following signs.
Pinkie: the universal rockstar signal to order a beer, especially in the lovely student town of Leuven in Belgium. This signal denotes 0.5 story points.
Index finger: the rockstar way of saying hello! This signal denotes 1 story point.
Little finger and index finger: the universal way of letting everyone know to rock on. Used to denote a complexity of 1.5 story points.
Middle finger: the universal signal for … This signal is used to say that the user story needs further clarification or splitting up into smaller parts.
Takeaway Rockstar planning poker is an ideal way to keep things fun, and keeping things fun ensures more involvement and higher-quality work. Rockstar planning poker doesn’t necessarily yield better results when it comes to estimating effort, but it has livened up our teams’ estimation sessions. All you need is your hands! So, if you’re tired of those exhausting estimation sessions, why not try rockstar planning poker to spice it up a bit? Good luck, have fun and let us know how you did!
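As a side note, the story-throughput arithmetic described earlier in this post is easy to play with yourself. The sketch below is a minimal illustration with made-up numbers (the effort, capacity and backlog size are invented, not ACA figures):

```python
# A sketch of the story-throughput arithmetic described above; all numbers are made up.
effort_days_per_point = 1.2        # average days one team member needs per story point
points_per_person_day = 1 / effort_days_per_point
team_capacity_days = 40            # person-days the team has available this period

# Story throughput: the number of story points the team can develop in that period.
throughput_points = team_capacity_days * points_per_person_day
print(f"Expected throughput: {throughput_points:.1f} story points")

# Extrapolating the throughput gives a rough prediction for a bigger scope.
backlog_points = 60
periods_needed = backlog_points / throughput_points
print(f"A backlog of {backlog_points} points needs roughly {periods_needed:.1f} such periods")
```

Swap in your own team’s numbers to get a rough feel for how far a given backlog might stretch.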

Read more
Reading time 4 min
6 MAY 2025

Destroying complexity one Epic at a time Remember the Atari classic Asteroids? If you’re familiar with it, you might already feel a twinge of nostalgia. If not, here’s the picture: You’re in space, piloting a ship surrounded by massive, slow-moving asteroids. Easy targets, right? But every time you shoot one, it shatters into smaller, faster fragments, much harder to shoot and dodge. When you first start out, you might opt for a strategy that focuses on shooting all the largest asteroids first. They are slower and easier to hit. But, soon enough, you’re dodging tiny, chaotic rocks flying everywhere. You lose control and then 💥 BOOM! Game over. Sounds familiar? It’s a lot like losing control when managing complex projects. There is a better way! And it’s a strategy that works in software development as well as in the Asteroids game. In software development, we call it Progressive Elaboration. Breaking down big problems: the Asteroids approach to agile To reach a high score in Asteroids, you’ll want to focus on one large asteroid. You systematically break it down into smaller parts and try to clear them. This will make sure that you always stay in control. The total number of asteroids flying around you is kept to a minimum, and the other large asteroids are easy to follow and avoid. Once you’ve completely eliminated the first asteroid, you can target the next big one. This is also how we approach large, complex challenges in software development. When faced with multiple Epics (large, complex problems), tackling several at once can lead to chaos and a high cognitive load. By using Progressive Elaboration, we focus on one item that brings the highest value, break it down into manageable pieces (Features or User Stories) and prioritize them. Next, we can focus on the piece with the highest priority. This way, we keep everything manageable and avoid that 'Game Over' feeling when a project spirals out of control. How to apply Progressive Elaboration: 3 techniques Here are three examples of how to start applying Progressive Elaboration — or, as we call it here, the Asteroids approach — to break down complexity and focus on delivering value: 1. User Story Mapping Use this technique ( reference card ) when you are launching a new product or a larger end-to-end service. Step 1: Map the customer journey. Define each step of the customer journey and identify all the personas involved. Step 2: List potential features. For each step, list the possible features to enable the user in that step of the customer journey. Step 3: Identify the smallest viable features. Focus on the smallest possible set of features to build towards the E2E workflow. It should either deliver value to the customer, provide insights to the Product Owner, or reduce development risks. Step 4: Target and tackle. You’ve identified the first Asteroids to focus on. Tackle them one by one, splitting them when needed until you reach User Story level. Step 5: Deliver, learn and repeat. Consistently re-evaluate, keeping the big picture in mind while breaking down and focusing on one asteroid at a time. 2. Upstream Kanban This technique is especially useful when the core E2E workflow of your product has already been implemented and the Product Team wants to improve or expand it. The Upstream Kanban board is your visualization of all the “Asteroids” that are flying around your project. At any time, we have a lot of possible features we can build. We call these “Options”. These Options must be prioritized. 
When the team has available capacity to start shooting the next Asteroid, it goes into “Discovery”. This is where we use the Progressive Elaboration Technique. We break the Option down into smaller parts and we prioritize. Then one by one, aligned with the priority, we can do the actual delivery (= detailed analysis and development). Step 1: Visualize “asteroids” on a Kanban board. List out the Options that the business or users would like to explore. Step 2: Prioritize Options by Value. Sort based on outcome or potential value. Step 3: Pull items when capacity allows. When ready, pull an item into Discovery—where it’s analyzed and broken down. Step 4: Move to delivery once clear. When an item is well-defined, send it downstream for development. 3. Mindmapping This technique is flexible and effective, even for smaller tasks like creating a blog post. Step 1: Use a mind mapping tool or paper. Write down the big “asteroid” you want to tackle, break it down into smaller parts, and prioritize. Step 2: Focus on one item at a time. Repeat the breakdown process until you have clear, actionable items. Step 3: Track the big picture. The mind map keeps you grounded in the larger goal while you handle immediate tasks. Keep complexity under control and deliver value By adopting these techniques, you’ll face fewer “game-over” moments in your projects. You’ll keep complexity under control, delivering value one manageable chunk at a time — just like breaking down asteroids before they overwhelm your ship. So, what’s your next big asteroid? How will you apply these techniques to make your projects more manageable and deliver more value? Are you interested in learning more? 🎁 We’re offering 5 free brainstorming sessions with an Agile Coach. Claim your spot now! And if you need a break, play a remake of the original Asteroids game here !
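To make the Upstream Kanban flow above a bit more tangible, here is a toy sketch in Python. The option names, values and the three-slice breakdown are invented; it only illustrates the idea of sorting Options by value and pulling one into Discovery at a time:

```python
# Toy model of an Upstream Kanban board; option names and values are invented.
options = [
    {"name": "Self-service password reset", "value": 8},
    {"name": "Export to CSV", "value": 3},
    {"name": "In-app notifications", "value": 5},
]

# Step 1 & 2: visualize the Options and sort them by potential value.
options.sort(key=lambda option: option["value"], reverse=True)

discovery, delivery = [], []

# Step 3: when capacity allows, pull the most valuable Option into Discovery and
# apply Progressive Elaboration: break it down into smaller, prioritized slices.
next_asteroid = options.pop(0)
discovery += [f"{next_asteroid['name']} – slice {i}" for i in (1, 2, 3)]

# Step 4: once a slice is well-defined, it moves downstream for detailed analysis
# and development.
delivery.append(discovery.pop(0))

print("Options:", [option["name"] for option in options])
print("Discovery:", discovery)
print("Delivery:", delivery)
```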

Read more
Domain Driven Design 2023
Reading time 7 min
6 MAY 2025

In June 2023, four ACA colleagues participated in the Foundations Day of the multi-day event of Domain-Driven Design Europe (DDD Europe). At the Meervaart theater in Amsterdam, they attended various talks offering a practical perspective on DDD concepts. The complete report is provided below. Following a refreshing coffee for the early birds and the welcoming remarks by the organizers, a packed program of eight talks with live coding ensued. This blog post offers a concise overview of all presentations, followed by an evaluation across different areas: speaker inspiration, relevance to Domain-Driven Design (DDD), utilized analysis techniques, and technical usability of the information. Reviews Baking Domain Concepts into Code This session was led by Paul Rayner, an Australian residing in the United States. He delivered a structured and engaging session with fresh insights into Domain-Driven Design and coding, showcasing a strong preference for Test-Driven Design. He delved into the concept of extended warranties, demonstrating it step by step in Ruby code using Event Storming. As his narrative progressed and complexity increased, he adeptly addressed questions arising from the Event Storming with a Test-Driven approach. His concluding message was clear: "Don't be afraid to refactor code when starting from a fundamental Test-Driven method!" Even for those with limited knowledge of DDD, the logic in his reasoning during the live coding was easy to follow. Although he isn't scheduled as a speaker for the upcoming edition of DDD Europe, if you ever have the chance to attend a session by Paul Rayner, don't hesitate: it's highly recommended. 🎥 Link to the video Score card (1-5): Inspiration: 3 Relevance to DDD: 5 Analysis techniques: 4 Technical usability: 4 Model Mitosis: A Dynamic Pattern to Deal with Model Tension This entertaining session was delivered by Julien Topçu and Josian Chevalier. They presented a session within the context of the Mandalorian from the Star Wars saga, demonstrating several typical pitfalls in a development process. It was a fully scripted session where the live coding was supposedly done by an AI named Chat3PO. However, they did employ some interesting concepts such as a Shared Kernel and an Anti-Corruption Layer. Since it was a scripted session, little time was wasted on writing code. This allowed them to easily switch to domain overviews, keeping the story coherent. This session is definitely worth recommending because it elucidates an interesting evolution within a domain. In the last five minutes, they also provided a good summary of the evolution they underwent in their scenario and referred to the Hive Pattern where further development of this concept is utilized. 🎥 Link to the video Score card (1-5): Inspiration: 5 Relevance to DDD: 4 Analysis techniques: 4 Technical usability: 3 TDD DDD from the Ground Up - Chris Simon During the session "TDD DDD from the Ground Up", several principles of Domain-Driven Design (DDD) were highlighted through a live coding example of Test-Driven Development (TDD). The experienced speaker, Chris Simon, shared his extensive knowledge on this topic. Thanks to his thorough preparation, profound understanding of the subject, and pleasant pace, this session was suitable for an audience of developers and analysts looking to explore the basic principles of TDD. While the TDD portion was well-executed, there was a lack of a clear connection with DDD. 
Consequently, the session felt more like a TDD demo, with only limited attention to DDD principles. Additionally, the viewing experience was negatively affected by the low quality of the projected code, combined with the presence of ample sunlight in the room. For those seeking to establish a solid foundation in Test-Driven Development, this session was certainly recommended. However, if you're more interested in Domain-Driven Design, there are better sessions available that offer more depth. 🎥 Link to the video Score card (1-5): Inspiration: 4 Relevance to DDD: 2 Analysis techniques: 1 Technical usability: 4 The Escher School of Fish: Modelling with Functions During "The Escher School of Fish: Modeling with Functions", Einar Høst demonstrated the concept of function modeling through live coding in the functional programming language Elm. The example showcased numerous vector transformations to achieve image manipulation. The presentation certainly appealed to developers. It was fascinating to witness the impressive results that can be achieved through relatively simple vector transformations. However, for analysts, the session likely offered less immediate value. Since the presentation was highly technical, we missed a clear link to Domain-Driven Design. Despite the visually stunning results, the session seemed to stray from the context of a DDD conference. 🎥 Link to the video Score card (1-5): Inspiration: 2 Relevance to DDD: 1 Analysis techniques: 1 Technical usability: 3 TDD: Beyond the Intro Part One Two The two-part talk "TDD: Beyond the Intro" by Romeu Moura suggested a deeper dive into Test-Driven Development, and that's exactly what we got. Moura started with the basics of TDD but quickly went beyond the "red-green-refactor" cycle. He viewed TDD as a Socratic dialogue, a method of reasoning using tests as "field notes." His narrative was profound and clear, although the pace was a bit slow at times, making it challenging to stay fully engaged. Despite the DDD context of the conference, Moura paid little to no attention to Domain-Driven Design. The focus was more on the general principles of TDD and how they can be applied in practice. The talk also included a live coding segment, where Moura built a simple fizzbuzz application using TDD. While this was a useful illustration of the discussed principles, it wasn't necessarily essential to understanding the talk. Overall, this was a valuable talk for individuals with a technical background seeking to learn more about TDD: in-depth, clear, and practically applicable. Despite the limited references to DDD, it didn't diminish the quality of the presentation. 🎥 Link to the video (Part One) 🎥 Link to the video (Part Two) Score card (1-5): Inspiration: 4 Relevance to DDD: 1 Analysis techniques: 1 Technical usability: 4 Refactoring to a really small but useful model island The speaker of this session, Yves Lorphelin, is an experienced software engineer with a passion for Domain-Driven Design. He shared his enthusiasm and expertise in an inspiring manner. The talk was clear and well-structured, with concrete examples and illustrations. Lorphelin addressed audience questions with knowledge and humor. The talk was clearly rooted in the principles of Domain-Driven Design. Lorphelin demonstrated how refactoring towards a small model island can help reduce the complexity of a software project and better organize the code around core domain models. Lorphelin discussed several analysis techniques that can be used to identify core domain models. 
These techniques include: Bounded context analysis: Identifying areas of the application with their own domain logic. Ubiquitous language: Developing a common language shared by all stakeholders. Event storming: Modeling domain logic through events. The talk was aimed at software developers and architects with a basic knowledge of Domain-Driven Design. Lorphelin delved into the technical details of refactoring towards a small model island, but he also maintained an abstract level to keep the talk accessible to a broad audience. Score card (1-5): Inspiration: 3 Relevance to DDD: 3 Analysis techniques: 4 Technical usability: 3 Living in your own bubble - Jacob Duijzer: From legacy to Domain Driven Design In this session, agile coach Jacob Duijzer explained how he tackled Legacy code for a project within the agriculture domain. He provided deeper insights into the domain using the Domain Storytelling technique. He outlined the new guidelines applicable to the existing system, which not only was complex and outdated but also lacked documentation, unit tests, and experienced domain experts. Jacob Duijzer demonstrated how he had domain experts explain the domain using examples (known as 'specification by example'). Following this, he explained 'the bubble context' and how it can be utilized to implement new business rules without impacting the existing system. Finally, he outlined the pros and cons of a bubble context. 🎥 Link to the video Score card (1-5): Inspiration: 3 Relevance to DDD: 5 Analysis techniques: 4 Technical usability: 3 Functional Domain Modelling - Marco Emrich and Ferdinand Ade This presentation brought a touch of theater to DDD 2023, starring Marco (as the developer) and Ferdi (as the customer/Product Owner). The story unfolded as follows… Ferdi, active in the wine sector, wanted to build an application to recommend the best wine to his customers, tailored to their tastes. It was delightful to see Ferdi thinking aloud about his expectations. Marco immediately tried to translate this into code using 'types'. As Ferdi described step by step how a 'tasting' unfolds, Marco immediately coded it in Ferdi's language. For example, Marco initially chose the term 'wine selection', only to later change it to 'wine cellar'. After all, Ferdi kept referring to their wine cellar, not their selection. Ferdi continuously looked over Marco's shoulder. Together, they added, removed, and renamed elements. Anything unclear was postponed. Gradually, a shared language emerged between the developer and the customer. From this entertaining session, we learned that it's beneficial to model together with the customer using event storming or domain storytelling, although the outcome may be abstract. 'Functional Domain Modeling' can be the icing on the cake. The result is explicit, specific, and provides a solid starting point for implementation. 🎥 Link to the video Score card (1-5): Inspiration: 3 Relevance to DDD: 3 Analysis techniques: 4 Technical usability: 3 Conclusion After attending the Foundations Day of DDD Europe 2023, our four colleagues returned home with a wealth of new insights that they will apply in practice. The theme of live coding provided developers with an excellent opportunity to gain deeper insights into DDD principles. Additionally, many related techniques were covered, such as Test-Driven Development. Not every session had a clear link to DDD, which was a bit of a drawback for analysts. Nevertheless, the atmosphere throughout the day was exceptionally positive and inspiring. 
The Meervaart theater proved to be the perfect setting for attendees to network and exchange experiences before and after the sessions. We're already looking forward to the next edition of DDD Europe!

Read more
Reading time 7 min
6 MAY 2025

An anthology of painful truths (and how to cope) Rookie mistakes As with many Product Managers I know, the job was handed to me like a hot potato on a burning silver platter. For lack of other volunteers. By accident. And with little context. Moreover, I had just rolled into the tech world at the time. So obviously, I had no leads on what to think, feel and do. And then I stumbled upon Horowitz’ Good Product Manager/Bad Product Manager article. The infamous CEO phrase, "a good product manager is the CEO of the product", immediately got to me. I vividly remember a sudden feeling of pride and empowerment rushing through my veins when I read it… I was going to change the world! If I could meet “past” me now, I’d slap her in the face. Because it wasn’t long before I started to struggle with empowerment, responsibility, authority and decision making (or the lack thereof). And thus came the inevitable black day when I found out that… well, Product Managers are not the CEO of anything. I felt awfully forlorn when this hit me. Lied to. At a loss. How was I supposed to change the world when I didn’t have the mandate to actually do so? This just in: it’s not easy and there’s not one answer. At all. I learned there will just always be days when you’ll seem to be shovelling a big pile of p00p from left to right. But there will always be a silver lining too. I’ve put together a list of painful realities about empowerment in Product Management that unraveled before my weary eyes. To soften the blow, I’ve injected some silver rays of hope. Just to make sure you’re not commuting home in tears with your resignation letter in your hands after you read this. The house always wins As a Product Manager you spend all of your time working on Very Important Stuff. Talking to customers, exploring and validating brilliant (or not so brilliant) new ideas, crunching numbers from analysis reports, trying to translate what the development guys are telling you, etc… All this knowledge you gather on a daily basis puts you in the perfect position to advise the decision-makers in the building. And luckily, people acknowledge this. Therefore they invite you to the decision-making table to present your esteemed recommendations. So when the invite comes in, you go and pour your heart and soul into crafting an inspiring and convincing exposé. You work for days and nights and then some. Until finally the day comes when you get to showcase this masterpiece. You seize the moment. You speak with fiery passion. You ooze pride, confidence and vision and you paint compelling pictures. And when you’re done everyone looks at you like you’re see-through. Uncomfortable silence fills the room. After a few awkward questions and ditto answers the meeting goes on. At the end of it, the C*O, VP Ninja and Something Director all agree. We’re going to do exactly the opposite of what you just presented. High fives are given, fists are bumped, great meeting everyone! Let’s go get ‘em. What on earth just happened? When I first experienced this kind of folly, I thought it was me. I’m a rather petite woman. I also don’t wear pencil skirts or two-piece suits. Heck, a customer once even asked me to serve him coffee at an event where I was going to be presenting the product’s strategy because he thought I was the waitress. So obviously the first thing I did was question myself. Guess what? It wasn’t me. And it isn’t you. It’s the system, stupid. 
No matter how hard we work on getting the facts straight; no matter how well we empathise with our stakeholders and show instead of tell… at the end of the day, it’s out of our hands. The house always wins. Some days we’ll land exactly where we wanted to and others we’ll have to suck it up and get in line. Don’t worry though. We can (and should ceaselessly strive to) change the system from within. Just keep reading. “He knows nothing; and he thinks he knows everything. That points clearly to a political career.” George B. Shaw really nailed it with this quote. It took a while for me to admit, but it actually relates quite well to Product Managers. One of the definitions of Product Management, as spelled out by Martin Eriksson, is “the intersection between the functions business, technology and user experience”. A.k.a. the Venn diagram we’ve all once used to explain what we do. This diagram is not telling you we are experts in all those functions. It’s telling you we deeply care about each of them. That we surround ourselves with experts in the fields we are less experienced in. And that we don’t know about all the details, but we do see the overall picture. So… Do we know everything? If we take it literally, we don’t. But speaking for myself: I sure like to think that I do, because I get the bigger picture… You got me there, G. And then the politics. Oh, the politics. I personally loathe company politics. It drives me nuts. It’s bonkers. Overhead and just plain noise if you ask me. But love it or hate it, it’s the way the world goes round. While I’ve always tried to tell myself I steer away from it, I just have to admit it would not help my career if I actually did. As we found out earlier, us Product Managers have very little authority. Consequently we have to rely on others to make the decisions we want them to make. So we really have no choice but to dive knee-deep in the office politics. We can’t ever overlook or bypass people. Not even if we think they’re incompetent wankers. We have to play the game and work the folks in both the formal and informal organisation charts. This all implies that — whether we like it or not — Product Managers are politicians in many ways. Diplomats. Tacticians. Mediators. For me this was a bitter pill to swallow. Because let’s be real: no one likes politicians. Politics (especially the office kind) are exhausting. There’s nothing sexy about it. So to unburden myself from the woes of being a politician, I try to stick to some ground rules:
Put your money where your mouth is.
Don’t wear a suit (it would get dirty in the trenches when you’re spending valuable time with the civilians).
Don’t call your co-workers, users, or other valuable people civilians. We’re all in this mess together.
Hold on to your beliefs for dear life.
Call yourself an influencer. It has a millennial ring to it, so why not?
Cloudy with a 99.9% chance of having too many cooks in the kitchen Product Managers come in different colours, flavours, role descriptions and even job titles. Regardless of labels, there’s one thing we all have in common: we have to ‘manage our stakeholders’. There’s a lot of great content out there describing what that means and how to approach it. Truth of the matter is: if you’re in any type of Product role, chances are high you’re working on something very visible. Something everyone cares about. Stakeholders are the ones that can make or break your work. They are the ones we should focus on, internally. 
So let’s definitely not stop learning how to work with and alongside your stakeholders. Otherwise we might not succeed as Product Managers. However. However. (It’s a BIG however so I just had to put a second one here.) What I rarely read about is all those other people. The ones who are not stakeholders but who all feel they should have a say. I’m talking e-mail threads about the shade of blue of a button in which people are added and added until the “to:” field contains more text than the body. I’m talking people spreading vague product ideas (of their own) as facts and thus causing confusion, disarray and misalignment. I’m talking meetings with 10 people in the room while there should have been 4… At one point I felt like even the office dog was meddling with my stuff. And that her smelly input was more relevant than some of the e-mails I was deleting. I mean receiving. This stuff happens on a nearly-daily basis and it sucks. Big time. But it’s something we have to deal with. If you’re like me, you’ll have days where you just want to take your coat and leave, not wanting to deal with this cr@p. Just remember:
These people, however annoying, (mostly) mean well.
Step up once in a while and call an end to the never-ending e-mail thread or meeting attendee madness. We’re perfectly positioned to notice when chaos is hurting the day-to-day business. Use this position to kill the chaos. Everyone, especially you, will feel better.
If people keep interfering with your work instead of theirs, something else might be wrong. Take a pillow, place it over your nose and mouth and shout some profanities. Then start a few conversations to see what’s going on and how you can solve it together.
If all of the above fails: take your coat and leave. Tomorrow is another day.
Soooo… who is calling the shots, you ask? It depends. One thing’s for sure: some days being a Product Manager is all about sporting a pair of big balls. Other days it’s about putting your ego in the fridge and going with the flow. To quote one of my favourite TV shows of all time: “Welcome to Product Management. It sucks. You’re gonna love it!” This blog post was originally written by Samia over at Medium. You can check out the post here and Samia’s profile here.

Read more
What the heck is OAuth 2?
Reading time 9 min
6 MAY 2025

In this blog post, I would like to give you a high level overview of the OAuth 2 specification. When I started to learn about this, I got lost very quickly in all the different aspects that are involved. To make sure you don’t have to go through the same thing, I’ll explain OAuth 2 as if you don’t even have a technical background. Since there is a lot to cover, let’s jump right in! The core concepts of security When it comes to securing an application, there are 2 core concepts to keep in mind: authentication and authorization. Authentication With authentication, you’re trying to answer the question “Who is somebody?” or “Who is this user?” You have to look at it from the perspective of your application or your server. They basically have stranger danger. They don’t know who you are and there is no way for them to know that unless you prove your identity to them. So, authentication is the process of proving to the application that you are who you claim to be. In a real world example, this would be providing your ID or passport to the police when they pull you over to identify yourself. Authentication is not part of the standard OAuth 2 specification. However, there is an extension to the specification called OpenID Connect that handles this topic. Authorization Authorization is the flip side of authentication. Once a user has proven who they are, the application needs to figure out what a user is allowed to do. That’s essentially what the authorization process does. An easy way to think about this is the following example. If you are a teacher at a school, you can access information about the students in your class. However, if you are the principal of the school you probably have access to the records of all the students in the school. You have broader access because of your job title. OAuth 2 Roles To fully understand OAuth 2, you have to be aware of the following 4 actors that make up the specification: Resource Owner Resource Server Authorization Server Client / Application As before, let’s explain it with a very basic example to see how it actually works. Let’s say you have a jacket. Since you own that jacket, you are the Resource Owner and the jacket is the Resource you want to protect. You want to store the jacket in a locker to keep it safe. The locker will act as the Resource Server. You don’t own the Resource Server but it’s holding on to your things for you. Since you want to keep the jacket safe from being stolen by someone else, you have to put a lock on the locker. That lock will be the Authorization Server. It handles the security aspects and makes sure that only you, or potentially someone else you give permission, are able to access the jacket. If you want your friend to retrieve your jacket out of the locker, that friend can be seen as the Client or Application actor in the OAuth flow. The Client is always acting on the user’s behalf. Tokens The next concept that you’re going to hear about a lot is tokens. There are various types of tokens, but all of them are very straightforward to understand. The 2 types of tokens that you encounter the most are access tokens and refresh tokens. When it comes to access tokens you might have heard about JWT tokens, bearer tokens or opaque tokens. Those are really just implementation details that I’m not going to cover in this article. In essence, an access token is something you provide to the resource server in order to get access to the items it is holding for you. 
For example, you can see access tokens as paper tickets you buy at the carnival. When you want to get on a ride, you present your ticket to the person in the booth and they’ll let you on. You enjoy your ride and afterwards your ticket expires. Important to note is that whoever has the token, owns the token. So be very careful with them. If someone else gets a hold of your token, he or she can access your items on your behalf! Refresh tokens are very similar to access tokens. Essentially, you use them to get more access tokens. While access tokens are typically short-lived, refresh tokens tend to have a longer expiry date. To go back to our carnival example, a refresh token could be your parents’ credit card that can be used to buy more carnival tickets for you to spend on rides. Scopes The next concept to cover is scopes. A scope is basically a description of things that a person can do in an application. You can see it as a job role in real life (e.g. a principal or teacher in a high school). Certain scopes can grant you more permissions than others. I know I said I wasn’t going to get into technical details, but if you’re familiar with Spring Security, then you can compare scopes with what Spring Security calls roles. A scope matches one-to-one with the concept of a role. The OAuth specification does not specify what a scope should look like, but often they are dot-separated strings like blog.write. Google on the other hand uses URLs as a scope. As an example: to allow read only access to someone’s calendar, they will provide the scope https://www.googleapis.com/auth/calendar.readonly . Grant types Grant types are typically where things start to get confusing for people. Let’s first start with showing the most commonly used grant types: Client Credentials Authorization Code Device Code Refresh Password Implicit Client Credentials is a grant type used very frequently when 2 back-end services need to communicate with each other in a secure way. The next one is the Authorization Code grant type, which is probably the most difficult grant type to fully grasp. You use this grant type whenever you want users to log in via a browser-based login form. If you have ever used the ‘Log in with Facebook’ or ‘Log in with Google’ button on a website, then you’ve already experienced an Authorization Code flow without even knowing it! Next up is the Device Code grant type, which is fairly new in the OAuth 2 scene. It’s typically used on devices that have limited input capabilities, like a TV. For example, if you want to log in to Netflix, instead of providing your username and password, it will pop up a link that displays a code, which you have to fill in using the mobile app. The Refresh grant type most often goes hand in hand with the Authorization Code flow. Since access tokens are short-lived, you don’t want your users to be bothered with logging in each time the access token expires. So there’s this refresh flow that utilizes refresh tokens to acquire new access tokens whenever they’re about to expire. The last 2 grant types are Password and Implicit. These grant types are less secure options that are not recommended when building new applications. We’ll touch on them briefly in the next section, which explains the above grant types in more detail. Authorization flows An authorization flow contains one or more steps that have to be executed in order for a user to get authorized by the system. 
There are 4 authorization flows we’ll discuss: Client Credentials flow Password flow Authorization Code flow Implicit flow Client Credentials flow The Client Credentials flow is the simplest flow to implement. It works very similarly to how a traditional username/password login works. Use this flow only if you can trust the client/application, as the client credentials are stored within the application. Don’t use this for single page apps (SPAs) or mobile apps, as malicious users can deconstruct the app to get ahold of the credentials and use them to get access to secured resources. In most use cases, this flow is used to communicate securely between 2 back-end systems. So how does the Client Credentials flow work? Each application has a client ID and secret that are registered on the authorization server. It presents those to the authorization server to get an access token and uses it to get the secure resource from the resource server. If at some point the access token expires, the same process repeats itself to get a new token. Password flow The Password flow is very similar to the Client Credentials flow, but is very insecure because there’s a 3rd actor involved: an actual end user. Instead of a secure client that we trust presenting an ID and secret to the authorization provider, we now have a user ‘talking’ to a client. In a Password flow, the user provides their personal credentials to the client. The client then uses these credentials to get access tokens from the authorization server. This is the reason why a Password flow is not secure, as we must absolutely be sure that we can trust the client to not abuse the credentials for malicious reasons. Exceptions where this flow could still be used are command line applications or corporate websites where the end user has to trust the client apps that they use on a daily basis. But apart from this, it’s not recommended to implement this flow. Authorization Code Flow This is the flow that you definitely want to understand, as it’s the flow that’s used the most when securing applications with OAuth 2. This flow is a bit more complicated than the previously discussed flows. It’s important to understand that this flow is confidential, secure and browser-based. The flow works by making a lot of HTTP redirects, which is why a browser is an important actor in this flow. There’s also a back-channel request (so called because the user is not involved in this part of the flow) in which the client or application talks directly to the authorization server. In this flow, the user typically has to approve the scopes or permissions that will be granted to the application. An example could be a 3rd party application that asks if it’s allowed to have access to your Facebook profile picture after logging in with the ‘Log in with Facebook’ button. Let’s apply the Authorization Code flow to our ‘jacket in the locker’ example to get a better understanding. Our jacket is in the locker and we want to lend it to a friend. Our friend goes to the (high-tech) locker. The locker calls us, as we are the Resource Owner. This call is one of those redirects we talked about earlier. At this point, we establish a secure connection to the locker, which acts as an authorization server. We can now safely provide our credentials to give permission to unlock the lock. The authorization server then provides a temporary code, called the OAuth code, to our friend. The friend then uses that OAuth code to obtain an access token to open the locker and get my jacket. 
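To make the Authorization Code flow a bit more concrete, here is a minimal sketch in Python using the requests library. The endpoints, client ID, secret and scope are hypothetical placeholders; the parameter names follow the standard OAuth 2 specification:

```python
import secrets
from urllib.parse import urlencode

import requests

# Hypothetical endpoints and client registration -- replace with your provider's values.
AUTHORIZE_URL = "https://auth.example.com/oauth/authorize"
TOKEN_URL = "https://auth.example.com/oauth/token"
CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"
REDIRECT_URI = "https://app.example.com/callback"

# Step 1: redirect the user's browser to the authorization server.
state = secrets.token_urlsafe(16)  # protects against CSRF
login_url = AUTHORIZE_URL + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "profile.read",
    "state": state,
})
print("Send the user to:", login_url)

# Step 2: after the user logs in and approves the scopes, the browser is redirected
# back to REDIRECT_URI with a temporary authorization (OAuth) code.
code = "code-received-on-the-callback"  # placeholder

# Step 3 (back channel): exchange the code for tokens, away from the browser.
response = requests.post(TOKEN_URL, data={
    "grant_type": "authorization_code",
    "code": code,
    "redirect_uri": REDIRECT_URI,
    "client_id": CLIENT_ID,
    "client_secret": CLIENT_SECRET,
}, timeout=10)
tokens = response.json()  # typically contains access_token, refresh_token, expires_in
print(tokens.get("access_token"))
```

In a real application, the redirect and the callback are handled by the browser and your web framework; the important part is that the code-for-token exchange happens over the back channel, away from the user’s browser.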
Implicit flow The Implicit flow is basically the same as the Authorization Code flow, but without the temporary OAuth code. So after logging in, the authorization server will immediately send back an access token without requiring a back-channel request. This is less secure, as the token could be intercepted via a man-in-the-middle attack. Conclusion OAuth 2 may look daunting at first because of all the different actors involved. Hopefully, you now have a better understanding of how they interact with each other. With this knowledge in mind, it will be much easier to grasp the technical details once you start delving into them.
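And to close off with the simplest flow of all: a Client Credentials token request boils down to a single back-channel call. Again, the endpoints, credentials and scope below are hypothetical placeholders:

```python
import requests

# Hypothetical values -- replace with your authorization server and client registration.
TOKEN_URL = "https://auth.example.com/oauth/token"
CLIENT_ID = "backend-service-a"
CLIENT_SECRET = "super-secret"

# One back-channel request: present the client ID and secret, get an access token back.
response = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials", "scope": "orders.read"},
    auth=(CLIENT_ID, CLIENT_SECRET),  # HTTP Basic authentication
    timeout=10,
)
access_token = response.json()["access_token"]

# Use the token to call the resource server until it expires, then repeat the process.
api = requests.get(
    "https://api.example.com/orders",
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
print(api.status_code)
```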

Read more
Reading time 5 min
6 MAY 2025

Today’s web applications and websites must be available 24/7 from anywhere in the world and have to be usable and pleasant to use from any device or screen size. In addition, they need to be secure, flexible and scalable to meet spikes in demand. In this blog, we introduce you to the modern web application’s architecture and we briefly touch on different back-end and front-end frameworks and how they work together. When people compare solutions used for building web applications and websites, one is usually pitted against the other. Here, we will go against this flow and try to frame the differences, so that you can decide whether one, the other, or both fit the use case you have in mind. An essential concept that must be noted is that back-end frameworks, such as Flask or FastAPI, and front-end frameworks, such as React or Vue JS, are two fundamentally different technologies that solve different, although related, problems. Setting them against one another is therefore not a good approach. These days, when you are looking to build a slightly more complex web application or website solution, you often need solid frameworks that address bits of both front-end and back-end sides to achieve what you’re looking for. The specifics of your application will determine what those bits are and whether it’s worth investing in using only one of the two technologies, or both in tandem. Purpose of a back-end framework A back-end framework is the “brains” of your web application. It should take care of most, if not all, computation, data management and model manipulation tasks. Let’s take the example of FastAPI. While this back-end web framework is primarily used for developing RESTful APIs, it can also be applied for developing complete web applications if coupled with a templating engine such as Jinja2. Using only FastAPI and some templating would be ideal if you want a standalone API for other developers to interact with. Another good purpose would be a website or web app that offers dashboards and insights on data inputs (charts based on files that you upload, etc.) without functionalities that depend on quick user interactions. Below, you’ll find an example of an application built entirely with a Python back end and Jinja2 as a templating engine. Click here to get some more information about the project, source code, etc. The issue you might find when creating a complete web app or website with FastAPI is that the entire logic of the program is pushed to the back-end, and the only job for the browser and the device on the client’s side is to render the HTML/CSS/JS response sent to it. The time between when the request from the browser is made for displaying something and when the user sees it could then vary wildly based on a lot of factors. Think of server load, the speed of the user’s internet, the server’s memory usage or CPU efficiency, the complexity of the requested task, ... Purpose of a front-end framework So far, the back-end can take care of all the operations that we might want our web app to have, but there is no way for it to really interact with the user. A front-end framework takes care of the user experience - UI elements like buttons, a landing page, an interactive tutorial, uploading a file - basically any interaction with the user will go through the front-end framework. Taking a look at React or Vue JS — these are front-end frameworks for developing dynamic websites and single page applications. 
However, they need some back-end technology (like FastAPI, Flask or NodeJS) to provide a RESTful API so that what they show can be dynamic and interactive. Using only React would happen in situations where there are already existing data sources that you can interact with (public APIs, external data providers, cloud services, etc.) and all you want to create is the user interaction with those services. But we can already see here that, in theory, combining the strengths of a solid back-end framework – such as Flask, FastAPI, or NodeJS – with a good front-end framework is an option, and a very good one on top of that. Examples of that combination are the BBC World Service News websites rendered using a React-based Single Page Application with a NodeJS back-end (Express). Click here for a detailed breakdown of the project’s GitHub page. In these cases, front-end frameworks attempt to delegate some (or a lot) of the tasks of the back end to the client-side. Only the computationally heavy parts remain on the server, while everything that is left and fast to execute is done in the browser on the client’s device. This ensures a good user experience, “snappiness” and is basically a sort of decentralization of parts of the web application’s execution, lowering the load and responsibilities of the server. Combining the two 🤝 Today, the architecture of well-built and scalable web applications consists of a client-side framework that maintains a state, comprising a state of the user interface and a state of the data model. Those states represent respectively UI elements that form the visual backbone of an application, and data elements linked to what kind of data or models (for example a user) are used throughout the application. Any change in the data model state triggers a change in the UI state of the application. Changes in the data models are caused by either an event coming directly from the user (like a mouse click) or a server-side event (like the server saying there is a new notification for the user). Combining all these factors makes for a great user experience that gets closer to a desktop application rather than an old-school, sluggish website. Ready for more? In our next blog , we explain the strengths of Python and NodeJS, and how you should choose between them.
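To illustrate the combination described above, here is a minimal sketch of a FastAPI back end that serves both a server-rendered Jinja2 page and a JSON endpoint that a React or Vue front end could call. The routes, data and template name are invented for the example:

```python
# A minimal sketch: FastAPI serves a server-rendered page (Jinja2) and a JSON API
# that a client-side framework could fetch. Routes, data and template are invented.
from fastapi import FastAPI, Request
from fastapi.templating import Jinja2Templates

app = FastAPI()
templates = Jinja2Templates(directory="templates")  # expects templates/index.html

ITEMS = [{"id": 1, "name": "Dashboard"}, {"id": 2, "name": "Reports"}]

@app.get("/")
def home(request: Request):
    # Back-end-rendered HTML: the browser only has to display the response.
    return templates.TemplateResponse("index.html", {"request": request, "items": ITEMS})

@app.get("/api/items")
def list_items():
    # JSON endpoint: a front-end framework can fetch this and update its own state.
    return ITEMS

# Run with: uvicorn main:app --reload  (assuming this file is saved as main.py)
```

The same back end can serve both worlds: fully server-rendered pages where snappiness matters less, and a RESTful API for the parts where a client-side framework maintains the UI and data-model state itself.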

Read more
An introduction to Firebase
Reading time 4 min
6 MAY 2025

Developing applications is becoming easier and easier thanks to the rise of things such as low-code platforms. However, for fully-fledged developers too, it’s easier than ever to start creating all kinds of applications thanks to platforms such as Firebase. In this blog post, we’ll give a brief introduction to Firebase, what features it has and give some real-life examples of an application we created with Firebase. What is Firebase? I’ll start the introduction to Firebase by briefly explaining what it is exactly. Firebase is a platform, developed by Google, for creating mobile and web applications. The platform provides a serverless development experience. Truly, it’s like a Swiss Army knife for app developers, since it provides all necessary things to build iOS, Android or web apps, like back-end infrastructure, monitoring, user engagement and much more. Setting up a project in Firebase requires only minimal effort, since it takes away some unnecessary complexity by providing a project setup to start developing. That leaves more time for building the actual application! Another big advantage of using Firebase is that it’s essentially an app development platform that provides products that work together flawlessly. Here’s an example: together with a colleague, we used Firebase to create a simple web application. We created an Angular app that could read and write data to a database and send email notifications. Normally, we would have had to spend time on setting up servers for hosting the application, creating an API, creating a database, and more. However, because we’d chosen Firebase as a development platform, it was as simple as typing firebase init into a console. In no time we were able to start developing the actual web application without thinking about everything that comes with it. Cloud Firestore Cloud Firestore is a NoSQL, real-time database designed to handle the toughest workloads from the world’s most popular apps. Cloud Firestore is built on the Google Cloud Platform Database infrastructure, which enables features like multi-regional data replication and multi-document transactions. It keeps your data in sync across client apps through realtime listeners and offers offline support for mobile and web, allowing its users to build responsive apps that work regardless of network latency. Moreover, Cloud Firestore seamlessly integrates with other Firebase and Google Cloud Platform products. When creating our simple web application, my colleague and I used Cloud Firestore for storing, reading and writing data. To interact with the Cloud Firestore provided by Firebase, we used AngularFireStore from AngularFire. AngularFireStore is a tool for facilitating the interaction between Firebase and Angular even more. One thing we noticed when using Cloud Firestore is that it is a real-time database. That means that as soon as the database updates, it notifies any and all devices that are interested. In our case, we had a table that updated as soon as the database had a new or updated field. All without needing to resort to rocket science! Another benefit of using Cloud Firestore is that users only need to define the data structure once. Let’s say we want a new field on our data object. It suffices to just add this field to your object’s interface, without having to add this field to a variety of places. Cloud Functions In order to have some back-end functionality, Firebase provides something called Cloud Functions. 
Cloud Functions are JavaScript functions that are executed in response to specific event triggers. For example, we used Cloud Functions to add mail functionality to our simple web app. The Cloud Function was listening for a specific HTTP request. Whenever an event triggered this request, an email was sent using nodemailer. We used an additional Cloud Function for handling captcha requests. As you can see, there are a lot of use cases for Cloud Functions. They can not only listen to HTTP requests, but also to Firestore triggers. It is possible to execute additional logic when the data is created, updated or deleted, e.g. to increment a counter each time a new record is saved in the Firestore. Another benefit is that these Cloud Functions are fully isolated from the client, so they cannot be reverse-engineered and thus provide better security. It’s also possible to build an API with these Cloud Functions should you want to customize things, but keep the quotas in mind. Firebase Hosting Our web app needed to be hosted as well. We chose Firebase Hosting to provide a static web host. What’s cool about Firebase Hosting is that it caches your web app on SSDs around the world in order to provide a low latency experience to everyone regardless of their location. Another benefit is that it provides free SSL certificates without too much configuration. So, what does it cost? Firebase provides 2 plans: a Spark Plan (free) and a Blaze Plan (pay as you go). There are some features that require a Blaze Plan, such as scheduled Cloud Functions. You can find more info and a detailed view of the costs on the pricing plans page here. Conclusion I hope this introduction to Firebase has taught you something new. Firebase provides everything you need to build high-quality mobile or web apps. Firebase is more than just Cloud Functions, Firestore and hosting. It also provides solutions for authentication, file storage, automated tests, machine learning and so much more. All these benefits come with one thing to keep in mind: the infrastructure is based on Google (cloud). That being said, we didn’t find this to be an issue. The time and effort you save by not having to set up all these functionalities by yourself, as well as providing a future-proof solution, makes using Firebase a no-brainer for a lot of people.
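Our example application talked to Cloud Firestore from Angular, but the same ideas apply server-side. Below is a minimal sketch using the Python Admin SDK (firebase-admin) instead of the AngularFire client; the service account path and collection name are placeholders:

```python
# Reading and writing Cloud Firestore from a back end with the firebase-admin SDK.
# The service-account path and the "messages" collection are placeholders.
import firebase_admin
from firebase_admin import credentials, firestore

cred = credentials.Certificate("path/to/serviceAccountKey.json")
firebase_admin.initialize_app(cred)
db = firestore.client()

# Write: add a document to the "messages" collection.
db.collection("messages").add({"from": "aca-demo", "body": "Hello Firestore!"})

# Read: stream the documents back.
for doc in db.collection("messages").stream():
    print(doc.id, doc.to_dict())
```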

Read more
Apache Kafka in a nutshell
Reading time 5 min
6 MAY 2025

Apache Kafka is a highly flexible streaming platform. It focuses on scalable, real-time data pipelines that are persistent and very performant. But how does it work, and what do you use it for? How does Apache Kafka work? For a long time, applications were built using a database where ‘things’ are stored. Those things can be an order, a person, a car … and the database stores them with a certain state. Unlike this approach, Kafka doesn’t think in terms of ‘things’ but in terms of ‘events’. An event also has a state, but it is something that happened at a certain point in time. However, it’s a bit cumbersome to store events in a database. Therefore, Kafka uses a log: an ordered sequence of events that’s also durable. Decoupling If you have a system with different source systems and target systems, you want to integrate them with each other. These integrations can be tedious because they have their own protocols, different data formats, different data structures, etc. So within a system of 5 source and 5 target systems, you’ll likely have to write 25 integrations. It can become very complicated very quickly. And this is where Kafka comes in. With Kafka, the above integration scheme looks like this: So what does that mean? It means that Kafka helps you to decouple your data streams. Source systems only have to publish their events to Kafka, and target systems consume the events from Kafka. On top of the decoupling, Apache Kafka is also very scalable, has a resilient architecture, is fault-tolerant, is distributed and is highly performant. Topics and partitions A topic is a particular stream of data and is identified by a name. Topics consist of partitions. Each message in a partition is ordered and gets an incremental ID called an offset. An offset only has a meaning within a specific partition. Within a partition, the order of the messages is guaranteed. But when you send a message to a topic, it is randomly assigned to a partition. So if you want to keep the order of certain messages, you’ll need to give the messages a key. Messages with the same key are always assigned to the same partition. Messages are also immutable. If you need to change them, you’ll have to send an extra ‘update-message’. Brokers A Kafka cluster is composed of different brokers. Each broker is assigned an ID, and each broker contains certain partitions. When you connect to a broker in the cluster, you’re automatically connected to the whole cluster. As you can see in the illustration above, topic 1/partition 1 is replicated in broker 2. Only one broker can be a leader for a topic/partition. In this example, broker 1 is the leader and broker 2 will automatically sync the replicated topic/partitions. This is what we call an ‘in sync replica’ (ISR). Producers A producer sends the messages to the Kafka cluster to write them to a specific topic. Therefore, the producer must know the topic name and one broker. We already established that you automatically connect to the entire cluster when connecting to a broker. Kafka takes care of the routing to the correct broker. A producer can be configured to get an acknowledgement (ACK) of the data write:
ACK=0: the producer will not wait for an acknowledgement
ACK=1: the producer will wait for the leader broker’s acknowledgement
ACK=ALL: the producer will wait for the leader broker’s and the replica brokers’ acknowledgement
Obviously, a higher ACK is much safer and guarantees no data loss. On the other hand, it’s less performant. Consumers A consumer reads data from a topic. 
Therefore, the consumer must know the topic name and one broker. Like the producers, when connecting to one broker, you’re connected to the whole cluster. Again, Kafka takes care of the routing to the correct broker. Consumers read the messages from a partition in order, taking the offset into account. If consumers read from multiple partitions, they read them in parallel. Consumer groups Consumers are organized into groups, i.e. consumer groups. These groups are useful to enhance parallelism. Within a consumer group, each consumer reads from an exclusive partition. This means that, in consumer group 1, consumer 1 and consumer 2 cannot both read from the same partition. A consumer group shouldn’t have more consumers than partitions either, because the extra consumers won’t have a partition to read from. Consumer offset When a consumer reads a message from the partition, it commits the offset every time. In the case of a consumer dying or network issues, the consumer knows where to continue when it’s back online. Why we don't use a message queue There are some differences between Kafka and a message queue. One main difference is that after a consumer of a message queue receives a message, it’s removed from the queue, while Kafka doesn’t remove the messages/events. This allows you to have multiple consumers on a topic that can read the same messages, but execute different logic on them. Since the messages are persistent, you can also replay them. When you have multiple consumers on a message queue, they generally apply the same logic to the messages and are only useful to handle load. Use cases for Apache Kafka There are many use cases for Kafka. Let’s look at some examples. Parcel delivery telemetry When you order something on a web shop, you’ll probably get a notification from the courier service with a tracking link. In some cases, you can actually follow the driver in real-time on a map. This is where Kafka comes in: the courier’s van has a GPS built in that sends its coordinates regularly to a Kafka cluster. The website you’re looking at listens to those events and shows you the courier’s exact position on a map in real-time. Website activity tracking Kafka can be used for tracking and capturing website activity. Events such as page views, user searches, etc. are captured in Kafka topics. This data is then used for a range of use cases like real-time monitoring, real-time processing or even loading this data into a data lake for further offline processing and reporting. Application health monitoring Servers can be monitored and set to trigger alarms in case of system faults. Information from servers can be combined with the server syslogs and sent to a Kafka cluster. Through Kafka, these topics can be joined and set to trigger alarms based on usage thresholds, containing full information for easier troubleshooting of system problems before they become catastrophic. 
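To tie the concepts above together, here is a minimal producer/consumer sketch using the kafka-python client (one of several available Kafka clients). The broker address, topic and group name are placeholders:

```python
# A minimal sketch of the producer/consumer concepts above, using the kafka-python
# client. The broker address, topic and group name are placeholders.
from kafka import KafkaProducer, KafkaConsumer

# Producer: acks="all" waits for the leader and the in-sync replicas (the safest setting).
producer = KafkaProducer(bootstrap_servers="localhost:9092", acks="all")
# Messages with the same key always land in the same partition, preserving their order.
producer.send("courier-positions", key=b"van-42", value=b'{"lat": 50.88, "lon": 4.70}')
producer.flush()

# Consumer: part of a consumer group; each partition is read by one member of the group.
consumer = KafkaConsumer(
    "courier-positions",
    bootstrap_servers="localhost:9092",
    group_id="tracking-website",
    auto_offset_reset="earliest",   # where to start if the group has no committed offset
    enable_auto_commit=True,        # commit offsets so we can resume after a restart
)
for message in consumer:
    print(message.partition, message.offset, message.key, message.value)
```

Note how the key keeps all positions of the same van in one partition, so their order is preserved, and how the group_id ties the consumer to a consumer group with its own committed offsets.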

Read more