We learn & share

ACA Group Blog

Read more about our thoughts, views, and opinions on various topics, important announcements, useful insights, and advice from our experts.

Featured

8 MAY 2025
Reading time 5 min

In the ever-evolving landscape of data management, investing in platforms and navigating migrations between them is a recurring theme in many data strategies. How can we ensure that these investments remain relevant and can evolve over time, avoiding endless migration projects? The answer lies in embracing ‘composability’, a key principle for designing robust, future-proof data (mesh) platforms.

Is there a silver bullet we can buy off the shelf?

The data-solution market is flooded with vendor tools positioning themselves as the platform for everything, the all-in-one silver bullet. It's important to know that there is no silver bullet. While opting for a single off-the-shelf platform might seem like a quick and easy solution at first, it can lead to problems down the line. These monolithic off-the-shelf platforms often turn out to be too inflexible to support all use cases, not customizable enough, and eventually outdated. This results in big, complicated migration projects to the next silver-bullet platform, with organizations ending up with multiple all-in-one platforms, causing disruptions in day-to-day operations and hindering overall progress.

Flexibility is key to your data mesh platform architecture

A complete data platform must address numerous aspects: data storage, query engines, security, data access, discovery, observability, governance, developer experience, automation, a marketplace, data quality, and so on. Some vendors claim their all-in-one data solution can tackle all of these. Typically, however, such a platform excels in certain aspects but falls short in others. For example, a platform might offer a high-end query engine but lack depth in the data marketplace included in its solution. To future-proof your platform, it must incorporate the best tools for each aspect and evolve as new technologies emerge.
Today's cutting-edge solutions can be outdated tomorrow, so flexibility and evolvability are essential for your data mesh platform architecture.

Embrace composability: Engineer your future

Rather than locking into one single tool, aim to build a platform with composability at its core. Picture a platform where different technologies and tools can be seamlessly integrated, replaced, or evolved, with an integrated and automated self-service experience on top. A platform that is both generic at its core and flexible enough to accommodate the ever-changing landscape of data solutions and requirements. A platform with a long-term return on investment, because it allows you to expand capabilities incrementally and avoid costly, large-scale migrations. Composability enables you to continually adapt your platform capabilities by adding new technologies under the umbrella of one stable core platform layer.

Two key ingredients of composability

- Building blocks: the individual components that make up your platform.
- Interoperability: all building blocks must work together seamlessly to create a cohesive system.

An ecosystem of building blocks

When building composable data platforms, the key lies in sourcing the right building blocks. But where do we get these? Traditional monolithic data platforms aim to solve all problems in one package, but this stifles the flexibility that composability demands. Instead, vendors should focus on decomposing these platforms into specialized, cost-effective components that excel at addressing specific challenges. By offering targeted solutions as building blocks, they empower organizations to assemble a data platform tailored to their unique needs. In addition to vendor solutions, open-source data technologies offer a wealth of building blocks. It should be possible to combine both vendor-specific and open-source tools into a data platform tailored to your needs.
This approach enhances agility, fosters innovation, and allows for continuous evolution by integrating the latest and most relevant technologies.

Standardization as glue between building blocks

To create a truly composable ecosystem, the building blocks must be able to work together: interoperability. This is where standards come into play, enabling seamless integration between data platform building blocks. Standardization ensures that different tools can operate in harmony, offering a flexible, interoperable platform.

Imagine a standard for data access management that allows seamless integration across various components. It would enable an access management building block to list data products and grant access uniformly. Simultaneously, it would allow data storage and serving building blocks to integrate their data and permission models, ensuring that any access management solution can be effortlessly composed with them. This creates a flexible ecosystem where data access is consistently managed across different systems.

The discovery of data products in a catalog or marketplace can be greatly enhanced by adopting a standard specification for data products. With this standard, each data product can be made discoverable in a generic way. When data catalogs or marketplaces adopt this standard, it provides the flexibility to choose and integrate any catalog or marketplace building block into your platform, fostering a more adaptable and interoperable data ecosystem.

A data contract standard allows data products to specify their quality checks, SLOs, and SLAs in a generic format, enabling smooth integration of data quality tools with any data product. It enables you to combine the best solutions for ensuring data reliability across different platforms.

Widely accepted standards are key to ensuring interoperability through agreed-upon APIs, SPIs, contracts, and plugin mechanisms. In essence, standards act as the glue that binds a composable data ecosystem.
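To make the idea of standards as glue concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the `DataProductCatalog` protocol, the descriptor fields, and the `InMemoryCatalog` implementation are invented for illustration and are not taken from any existing specification. The point is that platform code depends only on an agreed-upon interface, so catalog building blocks stay swappable:

```python
from typing import Protocol


def make_descriptor(name: str, owner: str, slo_freshness_minutes: int) -> dict:
    # Hypothetical data product descriptor; a real standard would define
    # many more fields (quality checks, SLAs, schemas, ...).
    return {
        "name": name,
        "owner": owner,
        "slo": {"freshness_minutes": slo_freshness_minutes},
    }


class DataProductCatalog(Protocol):
    """The agreed-upon interface (the 'standard') every catalog block implements."""

    def register(self, descriptor: dict) -> None: ...
    def list_products(self) -> list[str]: ...


class InMemoryCatalog:
    """One interchangeable building block: a trivial in-memory catalog."""

    def __init__(self) -> None:
        self._products: dict[str, dict] = {}

    def register(self, descriptor: dict) -> None:
        self._products[descriptor["name"]] = descriptor

    def list_products(self) -> list[str]:
        return sorted(self._products)


def publish_products(catalog: DataProductCatalog, descriptors: list[dict]) -> list[str]:
    # Platform code depends only on the standard interface, so any catalog
    # or marketplace implementation can be composed in without changes here.
    for d in descriptors:
        catalog.register(d)
    return catalog.list_products()


catalog = InMemoryCatalog()
products = publish_products(catalog, [
    make_descriptor("sales", "team-commerce", 60),
    make_descriptor("customers", "team-crm", 1440),
])
print(products)  # ['customers', 'sales']
```

Swapping `InMemoryCatalog` for a vendor or open-source catalog that honors the same interface would leave `publish_products` untouched, which is exactly the composability the standard buys you.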
A strong belief in evolutionary architectures

At ACA Group, we firmly believe in evolutionary architectures and platform engineering, principles that seamlessly extend to data mesh platforms. It's not about locking yourself into a rigid structure but about creating an ecosystem that can evolve, staying at the forefront of innovation. That’s where composability comes in. Do you want a data platform that not only meets your current needs but also paves the way for the challenges and opportunities of tomorrow? Let’s engineer it together. Ready to learn more about composability in data mesh solutions?

Contact us now!

Read more

All blog posts

Let's talk!

We'd love to talk to you!

Contact us and we'll get you connected with the expert you deserve!


Going Beyond Features: Maximize Outcomes, Minimize Outputs
Reading time 8 min
6 MAY 2025

When building products, there is a growing recognition that success isn’t just about delivering features or hitting deadlines. Instead, it’s about delivering real value to customers and achieving business impact. This requires a shift in mindset from output-driven to outcome-driven thinking. In this post, we'll explore why prioritizing outcomes over output is essential for building successful products, and how you can adopt this approach in your own work.

What does “outcomes over output” mean?

In the world of business, the terms outcome and output are often used interchangeably, causing a bit of confusion. However, it is important to have a clear understanding of the distinction between these two terms. Although they may seem straightforward, let's define them to ensure we are all on the same page.

Let’s imagine you’ve been feeling exhausted lately, so you start working out in the gym to feel more energized. Some people might say that the outcome of your gym routine is the hours you’ve spent working out and the amount of weight you’ve lifted. But the real outcome of your routine is much more significant than that. The outcome is that you feel stronger, more confident, and healthier. The outcome is the way in which your hard work (the output) has translated into a better quality of life and a more positive self-image. The outcome is the way in which your problem was solved by the output.

In a business context, an outcome refers to the impact your product has on the organization and its customers and stakeholders, while an output refers to the tangible things your (development) team produces, like documents, software, and tests. Focusing on outcome over output means defining success based on achieving a specific outcome and measuring progress based on how close you are to reaching that outcome. The goal of your team is not to produce outputs; it’s to reach a specific outcome.
A successful team strives to maximize the desired outcome while minimizing the amount of work produced.

The benefits of an outcome-driven approach

1. It helps you escape from the build trap

The first Agile Principle states that your top priority is to make your customers happy by delivering valuable software as early and consistently as possible. As agile practices are adopted in various fields, people have rephrased this principle to emphasize the importance of delivering value to customers quickly and consistently. When you measure success based on an outcome-driven metric, like “increasing newsletter click-through rates by 15% within six months”, you immediately connect your team's efforts to the value for your organization and customers. This helps you understand the impact you're making and when you're truly making a difference.

In contrast, when you measure success by looking only at the things you produce, such as “the number of features delivered” or “the number of completed points in a scrum sprint”, you risk running into what Melissa Perri (product management expert, speaker and author) refers to as “the build trap”. This trap involves focusing solely on creating features without considering the desired outcomes. When organizations prioritize output over outcomes, they risk getting caught in a cycle of building more and more features without truly understanding if they are solving customer problems or driving business value. By fixating on feature delivery as a measure of success, you may lose sight of the bigger picture. It doesn't tell you if you're building the right things. So, it is essential to shift your focus to the outcomes that matter. This requires a mindset shift that places the customer's needs and desired results at the forefront. By defining success based on outcomes, your team can escape from the build trap.

2. It helps you focus on learning and iterating

When you start thinking critically about value delivery instead of feature delivery, you quickly run into the problem I’ve addressed previously: how can you be sure that the features you’re building are actually going to deliver value? An outcome-driven approach recognizes that you may not have all the answers from the start and that learning is an important part of the process. This is why, when working with outcomes, you need a companion tool: the experiment. When you combine outcome-driven thinking with a process that’s based on running experiments, you really start to unlock the true potential of agile approaches. This is especially valuable in situations where there is a lot of uncertainty. For example, when creating a new software product, you may not be sure if it will have the desired impact on your business or if all the fancy features you came up with are necessary.

By focusing on outcomes, you can set goals that allow your team to experiment and try different solutions until they find what works best. In an agile context, we treat each step as a hypothesis and an experiment aimed at achieving a specific outcome. This is where the concept of an MVP, or Minimum Viable Product, comes in. Think of an MVP as the smallest thing you can do or build to learn if your hypothesis is correct. This iterative process of testing, learning, and adapting allows teams to experiment and try different solutions until they hit on the one that works.

3. It helps your team reach more autonomy

Employees often find it challenging to feel a profound sense of purpose and motivation solely from the output they produce. What truly drives individuals to show up at work each day is not the specific tasks they engage in day by day, but rather the meaningful outcomes their work will ultimately contribute to. An emphasis on outcomes helps align your team around a common purpose and shared goals.
By providing clarity on what needs to be achieved, you can motivate and empower your team to work together towards clear goals that the product should achieve. This allows your team to prioritize their work and build features that contribute to achieving those goals. Allowing them to make decisions about the features they build gives them a greater sense of ownership over their work.

Defining the outcomes for your product and implementing them

By now, you might agree that focusing on outcomes sounds like a good idea, but actually implementing them in our business practices is not as straightforward. Every methodology has its drawbacks. One challenge is that outcomes are less easily measured and quantified compared to outputs. Secondly, many companies face pressure to quickly move on to the next project once one is completed. Unfortunately, the iterative process of testing, learning, and adapting is still not commonly practiced. Finally, we often set goals that are too high-level. For example, asking the team to make the business more profitable or to reduce risk is too complex, because those challenges consist of many variables to influence. These impact-level targets are too complex for teams. Instead, you should focus on smaller and more manageable targets. To do this, you need to ask your team to concentrate on changing customer behavior in ways that drive positive business outcomes.

In his book “Outcomes Over Output: Why Customer Behavior Is The Key Metric For Business Success”, Joshua Seiden presents three magic questions that can help you identify suitable outcomes:

1. What are the user and customer behaviors that drive business results? (This is the outcome that you’re trying to create.)
2. How can we get people to do more of those behaviors? (These are the features, policy changes, etc. that you’ll do to try to create the outcomes.)
3. How do we know that we’re right? (This uncovers the experiments and metrics you’ll use to measure progress.)

Let me provide you with an example of how this works. Imagine that you run an e-commerce clothing store, and you’re facing tough competition from a rival company. Your objective is to improve customer loyalty, so you set a broad goal for the team of increasing the frequency of customer visits from once a month to twice a month. To achieve this impact, you need to identify specific customer behaviors that correlate with visiting your site. For instance, you observe that customers tend to visit the site after opening the monthly newsletter showcasing new items. Therefore, one possible outcome could be to increase the newsletter click-through rates. Additionally, you notice that customers also visit the site after a friend shares an image of one of the items on social media. Hence, another outcome to consider is encouraging customers to share images of items more frequently.

By focusing on these customer behaviors that drive the desired outcome of site visits, you ensure that your goals are both observable and measurable. This is crucial as it allows you to effectively manage and track progress. I hope this example highlights how outcomes can be specific and easily broken down. Remember, an outcome is a behavior exhibited by customers that directly influences business results. By understanding these behaviors, you can align your efforts with the outcomes that truly matter to your business.

Takeaways

- An outcome refers to the impact your product has on the organization and its customers and stakeholders, while an output refers to the tangible things your team produces, like documents, software, and tests.
- The goal of your team is not to produce outputs; it’s to reach a specific outcome. A successful team strives to maximize the desired outcome while minimizing the amount of work produced.
- By fixating on feature delivery as a measure of success, you may lose sight of the bigger picture. It doesn't tell you if you're building the right things. So, it is essential to shift your focus to the outcomes.
- An outcome-driven approach recognizes that you may not have all the answers from the start and that learning is an important part of the process. This is why, when working with outcomes, you need a companion tool: the experiment.
- When you’re planning work, be clear about your assumptions. Be prepared to test your assumptions by expressing work as hypotheses. Test your hypotheses continuously by working in small iterations, experimenting, and responding to the data and feedback you collect.
- Don’t mistake impact (high-level aspirational goals) for outcomes. Impact is important, but these targets are too complex for teams, as they consist of many variables to influence.
- Use these questions to define outcomes: What are the human behaviors that drive business results? How can we get people to do more of these things? How will we know we’re right?

👀 Want to know more about our services? Click here to find out!
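The newsletter outcome from the e-commerce example can be made concrete with a small calculation. This is a sketch with invented numbers and a hypothetical helper function; it simply tracks measured click-through rates against the 15% target mentioned earlier:

```python
def click_through_rate(clicks: int, recipients: int) -> float:
    """Fraction of newsletter recipients who clicked through to the site."""
    if recipients == 0:
        return 0.0
    return clicks / recipients


# Invented monthly figures: (clicks, recipients) per newsletter send.
sends = [(900, 12000), (1150, 12500), (2000, 12800)]
target = 0.15  # the outcome target: a 15% click-through rate

for month, (clicks, recipients) in enumerate(sends, start=1):
    rate = click_through_rate(clicks, recipients)
    status = "target reached" if rate >= target else "keep experimenting"
    print(f"month {month}: CTR {rate:.1%} ({status})")
```

Because the metric is cheap to recompute after every send, each newsletter variation becomes an experiment: ship a change, measure the rate, and keep or discard the change based on whether it moves the outcome.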

Read more
Reading time 5 min
6 MAY 2025

As developers, we understand that GPS accuracy is the backbone of many mobile applications, from navigation to location-based services. The accuracy of your app's GPS functionality can make or break the user experience. In this article, we’ll give you five practical ways to improve the GPS accuracy of your mobile application and ensure that your users never feel lost again.

How poor GPS location accuracy kills mobile application success: a real-life example

Let’s start with a real-life example of how poor GPS accuracy can cause your mobile application to fail big time.

Elise downloaded your new mobile application, Commuter. The app promises to enhance her commuting experience by delivering timely notifications about her bus stops and estimated arrival times. However, to her dismay, the performance of your app has been inconsistent. While on some days it offers accurate real-time updates, on others she receives the notifications too late or too early. Understandably, Elise is frustrated and shares her dissatisfaction with your mobile application through a negative review.

What goes wrong with the GPS accuracy?

You, as the developer, are left perplexed. After all, you've integrated the platform's standard GPS algorithms, so why the inconsistency? The app calculates her average velocity based on the difference between GPS locations and the time between these updates. It's programmed to notify her of her bus stop once her GPS coordinates fall within a 100-meter radius of the station. While this sounds logical, the real-world results don’t align with expectations.

What causes poor GPS location accuracy?

The core issue stems from the inherent inaccuracies in GPS location data. GPS locations include a margin of error, typically expressed in meters with a 68% confidence interval, but this margin doesn't account for the influence of GPS signal reflections, also known as multipath errors.
Multipath errors occur when GPS signals bounce off objects or surfaces before reaching the GPS receiver's antenna. Urban areas with tall buildings and dense infrastructure are particularly prone to GPS signal reflections. The reflection of signals off skyscrapers, vehicles, and other structures can create a complex signal environment, leading to unpredictable location inaccuracies. GPS signal reflections can divert the signal by kilometers, potentially causing the app to incorrectly indicate that Elise has either already reached her destination or is still kilometers away. Challenges of GPS signal reflections for mobile app developers GPS signal reflections pose several challenges to mobile app developers: Inaccurate positioning : GPS signal reflections can cause the GPS receiver to calculate an incorrect position. When the reflected signal arrives slightly later than the direct signal, the receiver may interpret it as coming from a different angle, leading to inaccurate position estimates. Inconsistent readings : GPS signal reflections are often inconsistent, making it difficult for developers to predict when and where they will occur. This inconsistency can result in varying levels of inaccuracy, posing a challenge when designing location-dependent services. How to improve GPS location accuracy? To counter the challenges of GPS signal reflections and enhance the user experience, a renewed strategy is necessary. Here are some innovative strategies to improve the GPS location accuracy of the Commuter mobile app in the example above: Filtering GPS locations : It's crucial to discard any location updates with inaccuracies exceeding 100 meters. This ensures that only the most reliable data is used for computations. Leveraging additional sensor data : Incorporate accelerometer data to enhance GPS accuracy. Use a velocity Verlet algorithm to predict locations based on the accelerometer data. 
Combine these predictions using a Kalman filter, factoring in the uncertainty of each data source, stabilizing the location signal, and providing a more accurate prediction.

Projection algorithms for bus routes: Since Elise commutes by bus, projection algorithms can be employed to align her location with the bus’s route. This can be achieved by approximating the route using data from the different bus stops.

Crowdsourced Wi-Fi SSIDs: Another innovative approach involves crowdsourcing Wi-Fi SSIDs (Service Set Identifiers). These SSIDs can act as location markers, providing additional data points to refine location accuracy.

Bluetooth beacons for enhanced accuracy: Detecting crowdsourced Bluetooth beacons can also serve as a source of location updates. By tapping into these BLE beacons, you can further enhance the app’s accuracy.

By implementing these strategies, the Commuter app significantly enhances its accuracy, ensuring a consistent and reliable user experience. As a result, Elise and many users like her can enjoy timely and accurate updates, leading to positive reviews and overall customer satisfaction.

📱 Conclusion

While the challenges faced by the Commuter app might seem unique, they reflect real-world hurdles many mobile app developers encounter. At ACA, we’ve successfully navigated these challenges using the strategies outlined above. While GPS is a valuable tool, understanding its limitations and augmenting its data with other technologies is key to ensuring reliable location-based services.

Looking for an experienced mobile application development partner?
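To make the filtering and fusion strategies concrete, here is a minimal, illustrative sketch: discard fixes whose reported accuracy is too poor, then blend a sensor-based prediction with a GPS measurement via a one-dimensional Kalman-style update. The function names, the 100-meter threshold and the variance figures are ours, chosen for illustration; they are not the Commuter app’s actual code.

```python
# Sketch of two of the strategies above: accuracy filtering and a
# one-dimensional Kalman-style fusion of a predicted position with a GPS fix.
# All names and thresholds here are illustrative, not production code.

MAX_ACCURACY_M = 100.0  # discard fixes with worse reported accuracy

def accept_fix(accuracy_m: float) -> bool:
    """Filtering GPS locations: keep only sufficiently accurate updates."""
    return accuracy_m <= MAX_ACCURACY_M

def kalman_update(predicted_pos: float, predicted_var: float,
                  measured_pos: float, measured_var: float) -> tuple[float, float]:
    """Blend a sensor-based prediction with a GPS measurement.

    The Kalman gain weights each source by its uncertainty (variance):
    a noisy GPS fix moves the estimate less than an accurate one.
    """
    gain = predicted_var / (predicted_var + measured_var)
    fused_pos = predicted_pos + gain * (measured_pos - predicted_pos)
    fused_var = (1.0 - gain) * predicted_var
    return fused_pos, fused_var

# Example: the accelerometer-based prediction says 120 m along the route
# (variance 25 m²); GPS reports 150 m but with a large variance of 100 m².
pos, var = kalman_update(120.0, 25.0, 150.0, 100.0)
# → pos == 126.0, var == 20.0: the fused estimate stays closer to the more
# certain prediction, and its variance is smaller than either input's.
```

In a real app, the prediction step would come from integrating accelerometer data (e.g. with velocity Verlet), and the position would be a latitude/longitude pair rather than a single coordinate, but the weighting logic is the same.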

Read more
developers aca group
Reading time 5 min
6 MAY 2025

Today’s web applications and websites must be available 24/7 from anywhere in the world, and have to be usable and pleasant to use from any device or screen size. In addition, they need to be secure, flexible and scalable to meet spikes in demand. In this blog, we introduce you to the modern web application’s architecture, brush a bit on different back-end and front-end frameworks, and show how they work together.

When people compare solutions used for building web applications and websites, the discussion usually turns into pitting one against the other. Here, we will go against this flow and try to frame the differences, so that you can decide whether one, the other, or both fit the use case you have in mind. An essential concept to note is that back-end frameworks, such as Flask or FastAPI, and front-end frameworks, such as React or Vue JS, are two fundamentally different technologies that solve different, although related, problems. Setting them against one another is therefore not a good approach. These days, when you are looking to build a slightly more complex web application or website, you often need solid frameworks that address bits of both the front-end and back-end sides to achieve what you’re looking for. The specifics of your application will determine what those bits are and whether it’s worth investing in using only one of the two technologies, or both in tandem.

Purpose of a back-end framework

A back-end framework is the “brains” of your web application. It should take care of most, if not all, computation, data management and model manipulation tasks. Let’s take the example of FastAPI. While this back-end web framework is primarily used for developing RESTful APIs, it can also be used for developing complete web applications when coupled with a templating engine such as Jinja2. Using only FastAPI and some templating would be ideal if you want a standalone API for other developers to interact with.
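The back-end-renders-everything pattern described above can be sketched with nothing but the standard library. `string.Template` stands in for Jinja2 here purely to keep the snippet dependency-free; a real FastAPI app would wire a route to `fastapi.templating.Jinja2Templates` instead, but the flow is the same: the server receives a request, does the computation, fills a template, and ships finished HTML to the browser.

```python
# Server-side rendering in a nutshell: the back end does all the work and
# the client only receives finished HTML. string.Template stands in for
# Jinja2 so this sketch runs with the standard library only.
from string import Template

PAGE = Template("<h1>Dashboard for $user</h1><p>$rows rows loaded</p>")

def handle_request(user: str, data: list[int]) -> str:
    """What a FastAPI route plus a template would do: compute, then render."""
    rows = len(data)          # all computation happens server-side...
    return PAGE.substitute(user=user, rows=rows)  # ...then render to HTML

html = handle_request("elise", [1, 2, 3])
# → '<h1>Dashboard for elise</h1><p>3 rows loaded</p>'
```

Everything the user sees is produced on the server, which is exactly why response times can vary with server load, as discussed next.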
Another good purpose would be a website or web app that offers dashboards and insights on data inputs (charts based on files that you upload, etc.) without functionalities that depend on quick user interactions. Below you find an example of an application built entirely with a Python back end and Jinja2 as a templating engine. Click here to get some more information about the project, source code, etc.

The issue you might encounter when creating a complete web app or website with FastAPI is that the entire logic of the program is pushed to the back end, and the only job for the browser and the device on the client’s side is to render the HTML/CSS/JS response sent to it. The time between when the browser requests something and when the user sees it can then vary wildly based on a lot of factors. Think of server load, the speed of the user’s internet connection, the server’s memory usage or CPU efficiency, the complexity of the requested task, ...

Purpose of a front-end framework

So far, the back end can take care of all the operations that we might want our web app to have, but there is no way for it to really interact with the user. A front-end framework takes care of the user experience. UI elements like buttons, a landing page, an interactive tutorial, uploading a file: basically any interaction with the user will go through the front-end framework. Taking a look at React or Vue JS: these are front-end frameworks for developing dynamic websites and single-page applications. However, they need some back-end technology (like FastAPI, Flask or NodeJS) to provide a RESTful API so that what they show can be dynamic and interactive. Using only React makes sense in situations where there are already existing data sources that you can interact with (public APIs, external data providers, cloud services, etc.) and all you want to create is the user interaction with those services.
But we can already see here that, in theory, combining the strengths of a solid back-end framework (such as Flask, FastAPI, or NodeJS) with a good front-end framework is an option, and a very good one on top of that. An example of that combination is the BBC World Service News websites, rendered as a React-based single-page application with a NodeJS (Express) back end. Click here for a detailed breakdown on the project’s GitHub page. In these cases, front-end frameworks delegate some (or a lot) of the back end’s tasks to the client side. Only the computationally heavy parts remain on the server, while everything that is left and fast to execute is done in the browser on the client’s device. This ensures a good user experience and “snappiness”, and is essentially a decentralization of parts of the web application’s execution, lowering the load and responsibilities of the server.

Combining the two 🤝

Today, the architecture of well-built and scalable web applications consists of a client-side framework that maintains a state, comprising a state of the user interface and a state of the data model. Those states represent, respectively, UI elements that form the visual backbone of an application, and data elements linked to the kind of data or models (for example a user) used throughout the application. Any change in the data model state triggers a change in the UI state of the application. Changes in the data models are caused by either an event coming directly from the user (like a mouse click) or a server-side event (like the server saying there is a new notification for the user). Combining all these factors makes for a great user experience that gets closer to a desktop application than to an old-school, sluggish website.

Ready for more?

In our next blog, we explain the strengths of Python and NodeJS, and how you should choose between them.
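The state-driven architecture described above (data model changes triggering UI changes) is a reactivity pattern that frameworks like React and Vue JS implement for you. To show the mechanism in isolation, here is a framework-free, illustrative sketch; the class and function names are ours, and the `rendered` list merely stands in for the screen.

```python
# Language-agnostic sketch of the state pattern described above: the data
# model notifies subscribers on every change, and the UI layer re-renders
# in response. Front-end frameworks implement this reactivity for you.

class DataModel:
    def __init__(self):
        self._state = {}
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def set(self, key, value):
        self._state[key] = value
        # Any change in the data model state triggers the UI subscribers.
        for notify in self._subscribers:
            notify(dict(self._state))

rendered = []  # stands in for the DOM / screen

def render_ui(state):
    rendered.append(f"notifications: {state.get('notifications', 0)}")

model = DataModel()
model.subscribe(render_ui)
model.set("notifications", 1)   # user event, e.g. a mouse click
model.set("notifications", 2)   # server-side event, e.g. a push message
# rendered == ["notifications: 1", "notifications: 2"]
```

The key property is one-way data flow: the UI never mutates itself directly; it only reacts to model updates, which keeps interface and data consistent.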

Read more
Data strategy
Reading time 5 min
6 MAY 2025

You may well be familiar with the term ‘data mesh’. It is one of those data buzzwords that have been doing the rounds for some time now. Even though data mesh has the potential to bring a lot of value to an organization in quite a few situations, we should not let ourselves be blinded by all the fancy terminology. If you are looking to develop a proper data strategy, you would do well to start off by asking yourself the following questions: what is the challenge we are seeking to tackle with data? And how can a solution contribute to achieving our business goals?

There is certainly nothing new about organizations using data, but we have come a long way. Initially, companies gathered data from various systems in a data warehouse. The drawback was that data management was handled by a central team, so the turnaround time of reports could seriously run up. Moreover, these data engineers needed to have a solid understanding of the entire business. Over the years that followed, the rise of social media meant the sheer amount of data positively mushroomed, which in turn led to the term Big Data. As a result, tools were developed to analyse huge data volumes, with the focus increasingly shifting towards self-service. The latter trend now means that the business itself is increasingly able to handle data under its own steam. Which in turn brings yet another new challenge: as is often the case, we are unable to dissociate technology from the processes at the company or from the people that use these data. Are these people ready to start using data? Do they have the right skills, and have you thought about the kind of skills you will be needing tomorrow? What are the company’s goals and how can employees contribute towards achieving them? The human aspect is a crucial component of any potent data strategy.

How to make the difference with data?
In practice, the truth is that, when it comes to their data strategies, a lot of companies have not progressed from where they were a few years ago. Needless to say, this is hardly a robust foundation to move on to the next step. So let’s home in on some of the key elements of any data strategy:

Data need to incite action: it is not enough to just compare a few numbers; a high-quality report leads to a decision, or should at the very least make it clear which kind of action is required.

Sharing is caring: if you have data anyway, why not share them? Not just with your own in-house departments, but also with the outside world. If you manage to make data available to the customer again, there is a genuine competitive advantage to be had.

Visualise: data are often collected in poorly organised tables without proper layout. Studies show the human brain struggles to read these kinds of tables. Visualising data (using GeoMapping, for instance) may see you arrive at insights you had not previously thought of.

Connect data sets: with data sets, 1+1 should always equal 3. If you are measuring the efficacy of a marketing campaign, for example, do not just look at the number of clicks. The real added value resides in correlating the data you have with data about the business, such as (increased) sales figures.

Make data transparent: be clear about your business goals and KPIs, so everybody in the organization is able to use the data and, in doing so, contribute to meeting a benchmark.

Train people: make sure your people understand how to use technology, but also how data can simplify their duties and contribute to achieving the company goals.

Which problem are you seeking to resolve with data?

Once you have got the foundations right, we can work up a roadmap. No solution should ever set out from the data themselves, but at all times needs to be linked to a challenge or a goal.
This is why ACA Group always organises a workshop first in order to establish what the customer’s goals are. Based on the outcome of this workshop, we come up with a concrete problem definition, which sets us on the right track to find a solution for each situation. The integration of data sets will gain even greater importance in the near future, among other things as part of sustainability reporting. In order to prepare and guide companies as best as possible, over the course of this year we will be digging deeper into some important terminologies, methods and challenges around data with a series of blogs. Are you keen, in the meantime, to find out exactly what ‘Data Mesh’ entails, and why it could be rewarding for your organization?

Read more
Apache Kafka in a nutshell
Reading time 5 min
6 MAY 2025

Apache Kafka is a highly flexible streaming platform. It focuses on scalable, real-time data pipelines that are persistent and very performant. But how does it work, and what do you use it for?

How does Apache Kafka work?

For a long time, applications were built around a database where ‘things’ are stored. Those things can be an order, a person, a car … and the database stores them with a certain state. Unlike this approach, Kafka doesn’t think in terms of ‘things’ but in terms of ‘events’. An event also has a state, but it is something that happened at a point in time. However, it’s a bit cumbersome to store events in a database. Therefore, Kafka uses a log: an ordered, durable sequence of events.

Decoupling

If you have a system with different source systems and target systems, you want to integrate them with each other. These integrations can be tedious because the systems have their own protocols, different data formats, different data structures, etc. So within a system of 5 source and 5 target systems, you’ll likely have to write 25 integrations. It can become very complicated very quickly. And this is where Kafka comes in. With Kafka, the above integration scheme becomes much simpler. So what does that mean? It means that Kafka helps you decouple your data streams. Source systems only have to publish their events to Kafka, and target systems consume the events from Kafka. On top of the decoupling, Apache Kafka is also very scalable, has a resilient architecture, is fault-tolerant, is distributed and is highly performant.

Topics and partitions

A topic is a particular stream of data and is identified by a name. Topics consist of partitions. Each message in a partition is ordered and gets an incremental ID called an offset. An offset only has meaning within a specific partition. Within a partition, the order of the messages is guaranteed. But when you send a message to a topic, it is randomly assigned to a partition.
So if you want to keep the order of certain messages, you’ll need to give those messages a key. A message with a key is always assigned to the same partition. Messages are also immutable: if you need to change them, you’ll have to send an extra ‘update’ message.

Brokers

A Kafka cluster is composed of different brokers. Each broker is assigned an ID, and each broker holds certain partitions. When you connect to one broker in the cluster, you’re automatically connected to the whole cluster. As you can see in the illustration above, topic 1/partition 1 is replicated on broker 2. Only one broker can be the leader for a given topic/partition. In this example, broker 1 is the leader and broker 2 automatically syncs the replicated topic/partitions. This is what we call an ‘in-sync replica’ (ISR).

Producers

A producer sends messages to the Kafka cluster to write them to a specific topic. To do so, the producer only needs to know the topic name and one broker. We already established that you automatically connect to the entire cluster when connecting to one broker; Kafka takes care of the routing to the correct broker. A producer can be configured to get an acknowledgement (ACK) of the data write:

ACK=0: the producer will not wait for acknowledgement
ACK=1: the producer will wait for the leader broker’s acknowledgement
ACK=ALL: the producer will wait for the leader broker’s and the replica brokers’ acknowledgement

Obviously, a higher ACK is much safer and guarantees no data loss. On the other hand, it’s less performant.

Consumers

A consumer reads data from a topic. Like the producer, the consumer only needs to know the topic name and one broker; when connecting to one broker, you’re connected to the whole cluster, and again Kafka takes care of the routing. Consumers read the messages from a partition in order, taking the offset into account. If consumers read from multiple partitions, they read from them in parallel.

Consumer groups

Consumers are organized into groups, i.e.
consumer groups. These groups are useful to enhance parallelism. Within a consumer group, each consumer reads from an exclusive partition. This means that, in consumer group 1, consumer 1 and consumer 2 cannot read from the same partition. A consumer group should also not have more consumers than partitions, because the extra consumers will have no partition to read from.

Consumer offset

Every time a consumer reads a message from a partition, it commits the offset. In case the consumer dies or there are network issues, the consumer knows where to continue when it’s back online.

Why we don’t use a message queue

There are some differences between Kafka and a message queue. One main difference is that after a consumer of a message queue receives a message, that message is removed from the queue, while Kafka doesn’t remove its messages/events. This allows you to have multiple consumers on a topic that read the same messages but execute different logic on them. Since the messages are persistent, you can also replay them. When you have multiple consumers on a message queue, they generally apply the same logic to the messages and are only useful to handle load.

Use cases for Apache Kafka

There are many use cases for Kafka. Let’s look at some examples.

Parcel delivery telemetry

When you order something from a web shop, you’ll probably get a notification from the courier service with a tracking link. In some cases, you can actually follow the driver in real time on a map. This is where Kafka comes in: the courier’s van has a built-in GPS that regularly sends its coordinates to a Kafka cluster. The website you’re looking at listens to those events and shows you the courier’s exact position on a map in real time.

Website activity tracking

Kafka can be used for tracking and capturing website activity. Events such as page views, user searches, etc. are captured in Kafka topics.
This data is then used for a range of use cases like real-time monitoring, real-time processing, or even loading the data into a data lake for further offline processing and reporting.

Application health monitoring

Servers can be monitored and set to trigger alarms in case of system faults. Information from servers can be combined with the server syslogs and sent to a Kafka cluster. Through Kafka, these topics can be joined and set to trigger alarms based on usage thresholds, containing full information for easier troubleshooting of system problems before they become catastrophic.

Conclusion

In this blog post, we’ve broadly explained how Apache Kafka works and what this incredible platform can be used for. We hope you learned something new! If you have any questions, please let us know. Thanks for reading!
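The key-based partition assignment described earlier (a message with a key always lands on the same partition, preserving per-key order) can be sketched without a running cluster. This is an illustration of the idea only: Kafka’s actual default partitioner hashes the key bytes with murmur2, whereas this sketch uses `hashlib.md5` simply to stay deterministic with the standard library, and the `bus-42` key is a made-up example tied to the parcel/vehicle telemetry use case above.

```python
# Illustration of Kafka's key-based partition assignment: a message with a
# key is always routed to the same partition, which preserves per-key order.
# Kafka's default partitioner uses murmur2 on the key bytes; md5 is used
# here only to keep the sketch deterministic and dependency-free.
import hashlib

NUM_PARTITIONS = 3

def partition_for(key: str) -> int:
    """Map a message key to a partition deterministically."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

# Every message keyed by the same vehicle ID lands on the same partition,
# so updates for one vehicle are consumed in the order they were produced.
p1 = partition_for("bus-42")
p2 = partition_for("bus-42")
assert p1 == p2
assert 0 <= p1 < NUM_PARTITIONS
```

Messages sent without a key skip this step and are spread across partitions, which maximises parallelism but gives up ordering across the topic.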

Read more
What is Low-Code and why should you care?
Reading time 5 min
6 MAY 2025

In this blog, we dive deeper into ‘Low-Code’ as a concept and how to choose the best Low-Code platform for your business.

Low-Code as a concept

Low-Code, visual development, application modelling, citizen development, … If you are a developer, these terms will definitely ring a bell. But what exactly is Low-Code, why should it matter to you, and who benefits from it? The answer is simple: “it depends”. Taking a step back, Low-Code is a concept revolving around visual development or modeling, widely supported by automation. This definition makes it applicable to many different areas in IT and beyond.

How to choose the best Low-Code platform?

Just a quick glimpse at the Forrester and Gartner reports will turn up more than 300 different Low-Code platforms and products. Are these all equivalent? Obviously not. Are they all meant for application development? Again, no. So how can you determine which Low-Code platform, product or solution might be relevant to you and your customers? Focusing on the leaders and well-known names, we can easily recognise some patterns to categorise them (see the examples below).

Low-Code solutions by category

- Robotic Process Automation (RPA): using Low-Code and visual development/modeling for process creation, integrations and automation.
- Custom software delivery automation: empowering traditional development frameworks with Low-Code accelerators to generate parts of applications or starting points.
- Package customisation: most of the leading package vendors are investing in Low-Code solutions within their own platform to facilitate customisations.
- Test automation: everyone knows Selenium, a very powerful tool to create and maintain test scripts. Some platforms even apply a no-code approach to model the test cases and have the Selenium scripts generated, executed and maintained without any need for coding.
- Citizen development for team and departmental apps: tired of working with Excel or Google Sheets?
Looking for an easy-to-use, easy-to-learn solution to build small and simple applications for your team or department? Then citizen development platforms are the solution for you. However, they are not suitable for enterprise applications, so use them for what they’re intended. After all, you don’t want your gardener assembling your new electric car, do you?
- Enterprise full-stack development platforms: a Low-Code enterprise development platform provides a graphical user interface for programming and generates the underlying code automatically, reducing hand-coding efforts for developers. These tools not only help with quick front-end development, but also with logic, back-end, integrations and even lifecycle management.

In with Low-Code development, out with traditional development?

Can Low-Code fully replace traditional development in .NET or Java? Of course not! But Low-Code development platforms can definitely help deliver more projects in less time with the same amount of people. They enable organisations to react faster to opportunities, with a shorter time to market. Moreover, they can help make a valid business case for projects that have been waiting in the backlog due to “other priorities” and teams mainly focusing on the business-critical systems in your company.

Why we use Low-Code at ACA Group

As an IT consulting company and integrator, we are focused on delivering custom software to cover the specific needs of our customers. Low-Code offers us a powerful tool to do so, and not only for simple apps! With the right Low-Code platform you can even modernise your legacy and build customer-facing web and mobile apps integrated with your ERP, CRM, IAM and broader existing landscape. It can also help you rationalise and simplify your IT landscape and bring your company’s tools back under IT governance, without having to tell the business “sorry, we have other priorities”.
Our strategic choice: OutSystems

ACA chose to enter a strategic partnership with OutSystems, a leading and, by far, the most productive, versatile and stable Low-Code platform on the market for enterprise full-stack development. Our team of experts has been working with OutSystems since 2016 and has also delivered projects with other leading platforms. We keep up to date with the latest announcements, new functionalities and major differences between the 3 leading Low-Code platforms for general application development. That way, we can advise our customers on the best fit for their needs.

OutSystems Developer Cloud: best fit for full-stack development

OutSystems Developer Cloud is the leading PaaS, cloud-native, high-performance Low-Code platform for full-stack development and integrations. It covers the widest set of use cases: from internal apps, to B2B and B2C customer-facing web and mobile applications, processes and even core systems. Using it for legacy modernisation and innovation projects is a big accelerator for your digital transformation.

The benefits of OutSystems Developer Cloud

What does OutSystems Developer Cloud bring to companies and to the developers and software engineers using it to deliver projects?

3-4x faster delivery
One unified way of developing web, mobile, front-end, back-end and integrations
Easy to learn for software engineers and web developers
AI-assisted development and built-in generative AI
No need for additional development tools (GitHub, Eclipse, Visual Studio, …)
Development teams focused on delivering value
Secure out of the box
A lot less maintenance
No infrastructure to manage
Built-in lifecycle management
Auto-scaling

All this while still keeping the architecture and best practices in your hands.

Want to get started with Low-Code?

Read more
aca award
Reading time 7 min
6 MAY 2025

Global Accessibility Awareness Day takes place every year on the third Thursday of May, with the aim of putting accessibility in the spotlight. For ACA Group, the accessibility, user-friendliness and inclusiveness of technology have long been an important focus. In this blog, you will discover some of our projects in which accessibility was high on the list of priorities. The intention of Global Accessibility Awareness Day (GAAD) is to get as many people as possible to think and talk about how technology can be made accessible to people with a disability. In this way, the initiative wants to contribute to a more inclusive digital world.

What is accessibility?

Digital accessibility means that digital technologies, such as online tools, applications and electronic documents, are designed in such a way that they are accessible to everyone, including people with disabilities. This allows them, like everyone else, to continue to participate in the digital economy and society. One of the most important aspects of accessibility is that people with visual, auditory, cognitive or physical disabilities can effectively perceive, understand, navigate and interact with digital content.

ACA Group’s vision on accessibility

“Our sustainability policy is much more than our sponsorship of charities,” says Dorien Jorissen, Chief Digital Officer and Sustainability Manager at ACA Group. “We strive to analyze and integrate all aspects of sustainability into our operations. Accessibility is also an integral part of our sustainability policy.” The SDGs (Sustainable Development Goals) of the United Nations form the basis of ACA Group’s sustainability framework. “We want to propagate this not only in our offices, in our team and with our stakeholders, but also in our digital services and our project methodology,” says Dorien.
“In a rapidly evolving world, in which technology is becoming more and more intertwined with our daily lives, as a leading IT company we are obliged to keep digital accessibility high on the agenda.” Below is a picture of ACA Group winning the DataNews Award 2022 for Most Environmentally Responsible ICT Company of the Year 👇🏻

Accessibility in practice

Below you will find three projects from ACA Group for which accessibility was an important design requirement.

⭐️ Mobile app for De Lijn with a focus on accessibility

Accessibility is very important to De Lijn. Not only in terms of easy access to their vehicles, but also in terms of their digital applications, such as the mobile app.

The challenge

The transport company wants their app to be accessible and user-friendly for everyone, including people with a visual impairment. They often rely on public transport and must therefore be able to use the app easily. “In the past, people with a visual impairment could use a separate app that could better read out routes and real-time information,” says Joren Vos, Mobile Solution Engineer at ACA Group. “However, this app was outdated. In addition, De Lijn's general app also needed an update.”

The solution

So there was a need for an update of both the regular De Lijn app and the BLS app. That is why it was decided to integrate the BLS app and the general De Lijn app into one user-friendly app for everyone. “In the new design of the app, we focused on easy and user-friendly navigation,” explains Joren. “We replaced the old complex navigation structure with an easy-to-use navigation bar at the bottom of the screen. We also ensured clear context when reading from the screen, support for larger text sizes and a voice-over.” “We also improved the real-time information and added a congestion barometer.
This allows a traveler to see how crowded it is on a particular bus or tram.”

The result

Thanks to the new menu structure, the updated De Lijn app makes it much easier for everyone to buy tickets, map out public transport routes and search for stops and destinations. Thanks to new functionality such as voice-over, exit warning notifications and support for larger font sizes, people with a visual impairment can also easily use the app. After an accessibility assessment by Eleven Ways and obtaining the required label, the De Lijn app can now officially call itself 'accessible'.

⭐️ ACA website according to the Web Content Accessibility Guidelines

In 2020 we wanted to give the ACA website a redesign. Stijn Schutyser, today UI/UX designer at ACA Group, was involved in the project as a copywriter and SEO specialist at the time. He says: “We think it is important to involve our colleagues in every phase of such a project. That is why we sent an initial proposal internally during the preparation phase. One of the ACA colleagues suggested that we should pay extra attention to accessibility for people with a disability from the start. Since inclusion is an important focus of our sustainability policy, we immediately started working on this fantastic idea.”

Web development according to an international standard

“We decided to develop the website according to the Web Content Accessibility Guidelines,” explains Stijn. “It was the first time we would develop a website according to this international standard. That made it quite a challenge for our technical team: studying the guidelines, checking how we could best implement them, the coding, …” “One of the most important targets was to make the website user-friendly for people who use a screen reader that reads out the text on a website.
For example, we have ensured that a screen reader jumps directly to the main content of a page at the touch of a button, without reading out the unnecessary content in the menu bar.”

Audit by Eleven Ways and AnySurfer

“After the development and launch of the new website, we had it tested by Eleven Ways,” says Stijn. “They gave us some work points that we had to tackle in order to comply with the guidelines. After these adjustments, we had the site audited by AnySurfer with the aim of receiving the AnySurfer label at level AA. That label proves that your website has been tested by AnySurfer and that it meets the WCAG standard for an accessible website.” By the way, did you know that the ACA website has a Lighthouse accessibility score of 98? An almost perfect score. Accessibility will continue to be an important design parameter for our website in the future.

⭐️ How we improve the accessibility of PDF files

Accessibility is not only important for websites and apps. “Every piece of content should be accessible to everyone, including PDF files,” says Ibn Renders, Lead Branding at ACA Group. “That is why at ACA Group we ensure that our PDF files are adapted for people with a visual impairment who use a screen reader.” Below, Ibn gives three tips to make PDF files accessible to everyone:

Accessibility check: To improve the accessibility of our PDF documents, we use the 'accessibility check' feature of Acrobat Pro. This tool checks your document and indicates which things you should adjust.

Reading order: It is important to structure your PDF file with the correct headings and paragraphs. If you don't, your document will become one big chaos for people with a screen reader. With Acrobat Pro, the accessibility options make it easy to determine the desired reading order.

Alt text: Screen readers don't know what's on an image, audio, or video element.
Fortunately, you can help them by adding an alt text with a short description of the relevant audiovisual element. Want to know more about accessibility for PDF files? Read the blog article “3 easy tips to make your PDF files accessible to everyone”.

Conclusion

In an increasingly digital world, we need to ensure that everyone, including people with disabilities, continues to have access to online and offline digital solutions and content. As a leading IT company, we at ACA Group want to take responsibility for integrating accessibility into our services, our methodology and our solutions. We are already making a lot of efforts to achieve this, but it remains a continuous effort to do even better. Looking for an IT partner who really understands you?

application laptop
Why does my application not always work the same way?
Reading time 4 min
6 MAY 2025

Some differences: the labels "Shop" and "Shop/Upgrade" are not consistent, the blurred labels such as "Support" and "Account" appear in different places, "TV" and "Sign in" are sometimes labels and sometimes icons, the search function is missing in the top header, and only the first header has a hamburger menu.

You may recognize this situation: as your application grows, the diversity of elements grows with it. Buttons on different pages are slightly different or not exactly in the same place, icons don't all belong to the same set, newer forms don't follow the same structure as previous ones, there are different fonts or sizes for the same purpose, and so on. That's annoying and downright messy. It is worse when this inconsistency causes your application to no longer work the way your users expect, because there is also too little consistency in the interaction patterns. This can lead to users using your application, or part of it, less and less, or even stopping altogether.

The importance of consistency

“Consistency” is an important metric that most companies underestimate. Consistency is a crucial part of any company with a digital platform or service. It not only ensures a user-friendly product, but also numerous other benefits, including: a unified experience across different devices, correct implementation of branding, brand awareness and much more… We all recognize the importance of that consistency, but how can you guarantee it within your organization?

What is a 'design system'?

A design system is a central place where all components of a digital product or set of digital products are described. You can think of it as a kind of library in which different visual components are stored for use in your website, app or social media content. Color and typography are primary components in a design system, just like buttons, forms, footers, and other components.
Design system 'Atomus', a free design system available within Figma

The advantages of a design system

The use of a design system has 3 big advantages: it creates more cohesion and consistency, it ensures a high degree of reusability, and it is very easy to use.

More cohesion and consistency

A design system helps to create a consistent brand image. Once you create a design system, it becomes the "single source of truth" for your visual identity. Everyone will be able to create designs that look and feel the same and work according to the same interaction patterns.

High degree of reusability

Your team can quickly design new components based on existing smaller elements called atoms. So you can always reuse your current atoms to create new things that immediately fit within the design and the look and feel of your design system.

Quick and easy to use

Existing or new colleagues who have less experience with UX or UI design can help create modern, user-friendly and beautiful interfaces. This speeds up your developers' work and increases your efficiency! In addition, this efficiency also offers another advantage: changes in your product or service can be implemented very quickly. This means that you can realize a much faster time-to-market.

Do you recognize one or more of these challenges? Do your applications sometimes suffer from inconsistent operation or visual display, and are you curious about how you can remedy this with a design system? Or do you have questions about exactly how you can set up a design system to ensure that you do not run into problems in terms of consistency? Then book a free and non-binding slot in our agenda for a Q&A session below. During this meeting we are happy to listen to your questions and give you specific advice.
{% module_block module "widget_4ef2ded0-7241-4df2-939c-0070891b3837" %}{% module_attribute "buttons" is_json="true" %}{% raw %}[{"appearance":{"link_color":"light","primary_color":"primary","secondary_color":"primary","tertiary_color":"light","tertiary_icon_accent_color":"dark","tertiary_text_color":"dark","variant":"primary"},"content":{"arrow":"right","icon":{"alt":null,"height":null,"loading":"disabled","size_type":null,"src":"","width":null},"tertiary_icon":{"alt":null,"height":null,"loading":"disabled","size_type":null,"src":"","width":null},"text":"Book Q A session with an expert"},"target":{"link":{"no_follow":false,"open_in_new_tab":true,"rel":"noopener","sponsored":false,"url":{"content_id":null,"href":"https://calendly.com/q-and-a-session/boek-een-q-a-sessie-met-onze-expert-clone?month=2022-11","href_with_scheme":"https://calendly.com/q-and-a-session/boek-een-q-a-sessie-met-onze-expert-clone?month=2022-11","type":"EXTERNAL"},"user_generated_content":false}},"type":"normal"}]{% endraw %}{% end_module_attribute %}{% module_attribute "child_css" is_json="true" %}{% raw %}{}{% endraw %}{% end_module_attribute %}{% module_attribute "css" is_json="true" %}{% raw %}{}{% endraw %}{% end_module_attribute %}{% module_attribute "definition_id" is_json="true" %}{% raw %}null{% endraw %}{% end_module_attribute %}{% module_attribute "field_types" is_json="true" %}{% raw %}{"buttons":"group","styles":"group"}{% endraw %}{% end_module_attribute %}{% module_attribute "isJsModule" is_json="true" %}{% raw %}true{% endraw %}{% end_module_attribute %}{% module_attribute "label" is_json="true" %}{% raw %}null{% endraw %}{% end_module_attribute %}{% module_attribute "module_id" is_json="true" %}{% raw %}201493994716{% endraw %}{% end_module_attribute %}{% module_attribute "path" is_json="true" %}{% raw %}"@projects/aca-group-project/aca-group-app/components/modules/ButtonGroup"{% endraw %}{% end_module_attribute %}{% module_attribute "schema_version" is_json="true" %}{% raw 
%}2{% endraw %}{% end_module_attribute %}{% module_attribute "smart_objects" is_json="true" %}{% raw %}null{% endraw %}{% end_module_attribute %}{% module_attribute "smart_type" is_json="true" %}{% raw %}"NOT_SMART"{% endraw %}{% end_module_attribute %}{% module_attribute "tag" is_json="true" %}{% raw %}"module"{% endraw %}{% end_module_attribute %}{% module_attribute "type" is_json="true" %}{% raw %}"module"{% endraw %}{% end_module_attribute %}{% module_attribute "wrap_field_tag" is_json="true" %}{% raw %}"div"{% endraw %}{% end_module_attribute %}{% end_module_block %}

How to set up simple and flexible ETL based anonymization - part 1
Reading time 7 min
8 APR 2021

In this technical blog post, I want to talk about how to set up simple and flexible ETL based anonymization. Why? Well, I recently had the opportunity to do a small proof of concept for a customer. The customer wanted to know the options that were available that would enable them to take internal data, remove or anonymize any personally identifiable information (PII) and make it available in a simple way and form for external parties. After further requirements gathering, the context for this proof of concept was defined as:

Whatever the solution, it needs to be able to extract data from an on-premises Oracle database.

The end result should be a set of CSV files in an Amazon S3 bucket.

In between ingesting the Oracle data and dumping it in CSV form on S3, there should be something that removes/anonymizes PII data.

If possible, the chosen solution should be cloud native.

In this 3-part blog series I'll explain how to set up simple and flexible ETL based anonymization with the following subjects:

The research into products that might be used to solve the problem, and a check of how suitable they are for what the proof of concept needs to achieve.

How the chosen product can be used to create an ETL pipeline that fits the requirements. Additionally, how to set up a local Oracle database in Docker that can be used as a data source for the data ingestion part of the proof of concept (just because this was such a PITA to do).

And whether this can be done in a cloud native way.

Research

The research part of the proof of concept consists of 2 parts:

How to extract data from an Oracle database, anonymize it somehow and store it as a bunch of CSV files in an S3 bucket, aka the ETL part.

Figuring out the best way to accomplish the anonymization.

Extracting, transforming and storing the data

Straight off the bat, the customer's problem sounded remarkably like something that you might solve with an ETL product: Extract, Transform, Load.
So the research for this part of the proof of concept would be concentrated on this type of product. I also got some input from someone in my team to have a look at singer.io, as that was something they had used successfully in the past for this kind of problem. When looking at the Singer homepage, there are a number of things that immediately catch your eye:

Singer powers data extraction and consolidation for all of the tools of your organization.

The open-source standard for writing scripts that move data.

Unix-inspired: Singer taps and targets are simple applications composed with pipes.

JSON-based: Singer applications communicate with JSON, making them easy to work with and implement in any programming language.

So when getting down to the basics, Singer is just a specification, albeit not an official one. It's a simple JSON based data format and you can either produce something in this format (a tap in Singer terminology) or consume the format (a target). You're able to chain these taps and targets together to extract data from one location and store it in another. Out of the box Singer already comes with a bunch of taps (100+) and targets (10). These taps and targets are written in Python. Because the central point of the system is just a data format, it's pretty easy to write one yourself or adapt an existing one. When checking out the taps, the default Oracle tap should cover the Extract part of our proof of concept. The same however doesn't seem to be the case for the Load part when looking at the default targets. There is a CSV target, but it stores its results locally, not in an S3 bucket. There is the option of just using this target and doing the S3 upload ourselves after the ETL pipeline has finished. Another option would be to adapt the existing CSV target and change the file storage to S3. Some quick Googling turns up a community made S3 CSV Singer target. According to its documentation, this target should do exactly what we want.
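Because Singer is just a JSON-lines data format, the tap/target hand-off described above is easy to picture in a few lines of Python. The sketch below is my own toy illustration of the idea, not a real tap or target: the 'users' stream, its columns, and both functions are invented for this example.

```python
import json

def emit_messages(rows):
    """A pretend tap: yield Singer-style JSON lines for a 'users' stream."""
    # A SCHEMA message describes the stream before any records flow.
    yield json.dumps({
        "type": "SCHEMA",
        "stream": "users",
        "schema": {"properties": {"id": {"type": "integer"},
                                  "email": {"type": "string"}}},
        "key_properties": ["id"],
    })
    # Each row becomes a RECORD message.
    for row in rows:
        yield json.dumps({"type": "RECORD", "stream": "users", "record": row})

def consume_messages(lines):
    """A pretend target: collect RECORD payloads, ignore other message types."""
    records = []
    for line in lines:
        message = json.loads(line)
        if message["type"] == "RECORD":
            records.append(message["record"])
    return records

lines = list(emit_messages([{"id": 1, "email": "a@example.com"}]))
print(consume_messages(lines))  # → [{'id': 1, 'email': 'a@example.com'}]
```

Chaining a real tap and target on the command line (`some-tap | some-target`) is exactly this hand-off, with the JSON lines flowing over a Unix pipe.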
Whoops, Singer doesn't transform

With the Extract and Load parts covered, this leaves us with just the Transform part of the ETL pipeline to figure out… and this is where it gets a bit weird. Even though Singer is classified as an ETL tool, it doesn't seem to have support for the transformation part? Looking further into this, I came across this ominously titled post: Why our ETL tool doesn't do transformations. Reading this, it seems they consider their JSON specification/data format as the transformation part. So they support transformation to raw data and storing it, but don't support other kinds of transformations. That part is up to you after the data has been stored somewhere by a Singer target. So it turns out that Singer is more like the EL part of an ELT product than an “old school” ETL product. At this point, Singer should at least be sufficient to extract the data from an Oracle database and to put it in an S3 bucket in CSV format. And because Singer is pretty simple, open and extendable, I'm going to leave it at that for now. Let's continue by looking into the anonymization options that might fit in this Singer context.

Data anonymization

Similarly to the ETL part, I also received some input for this part, pointing me to Microsoft Presidio. When looking at the homepage we can read the following:

It provides fast identification and anonymization modules for private entities in text and images such as credit card numbers, names and more.

It facilitates both fully automated and semi-automated PII de-identification flows on multiple platforms.

Customizability in PII identification and anonymization.

So there's a lot of promising stuff in there that could help me solve my anonymization needs. Upon further investigation, it looks like I'm evaluating this product during a major transformation (get it? 😉) from V1 to V2.
V1 incorporated some ETL-like features, like retrieving data from sources (even though Oracle support in the roadmap never seems to have materialized) and storing anonymized results in a number of forms/locations. However, V2 has completely dropped this approach to concentrate purely on the detection and replacement of PII data. At its core, Presidio V2 is a Python based system built on top of an AI model. This enables it to automatically discover PII data in text and images and to replace it according to the rules you define. I did some testing using their online testing tool and it kind of works, but for our specific context it definitely needs tweaking. Also, when looking at the provided test data, it seems that it is mostly simple and short data, with no large text blobs or images. This then begs the question: even if we're able to configure Presidio to do what we want it to do, might we be hitting small nails with a big hammer?

Is Presidio too much?

So let's rethink this. If we can easily know and define which simple columns in which tables need to be anonymized, and when just nulling or hashing the column values is sufficient, we don't need the auto detection part of Presidio. We also wouldn't need Presidio's full text or image support, and we wouldn't need fancy substitution support either. Presidio could be a powerful library to create an automatic anonymization transformation step for our Singer based pipeline. It also helps that Presidio is Python based. However, my gut feeling says I should maybe first try to find a slightly simpler solution. I started searching for something that can do a simple PII replace and that works in a Singer tap/target context. I found this GitHub repository: pipelinewise-transform-field. The documentation reads “Transformation component between Singer taps and targets”. Sounds suspiciously like the “T” part that Singer as an ETL was missing!
Further down in the configuration section we even read: “You need to define which columns have to be transformed by which method and in which condition the transformation needs to be applied.” The possible transformation types are:

SET-NULL: Transforms any input to NULL

HASH: Transforms string input to a hash

HASH-SKIP-FIRST-n: Transforms string input to a hash, skipping the first n characters, e.g. HASH-SKIP-FIRST-2

MASK-DATE: Replaces the month and day parts of date columns to always be the 1st of January

MASK-NUMBER: Transforms any numeric value to zero

MASK-HIDDEN: Transforms any string to 'hidden'

This seems to cover our simple anonymization requirements completely! We can even see how we need to use it in the context of Singer:

some-singer-tap | transform-field --config [config.json] | some-singer-target

Conclusion

We now have all the pieces of the puzzle on how to set up simple and flexible ETL based anonymization. In the next blog post we'll show how they fit together and whether they produce the results the customer is looking for.
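To make the transformation types listed above concrete, here is a toy Python sketch of how such field-level rules could be applied to one record. This is my own illustration of the idea, not pipelinewise-transform-field's actual implementation, and the 'email' and 'salary' columns are invented stand-ins for real PII columns.

```python
import hashlib
import json
from datetime import date

def transform_value(value, transform_type):
    """Apply one of the transformation types described in the documentation."""
    if transform_type == "SET-NULL":
        return None                      # any input becomes NULL
    if transform_type == "HASH":
        return hashlib.sha256(str(value).encode()).hexdigest()
    if transform_type.startswith("HASH-SKIP-FIRST-"):
        n = int(transform_type.rsplit("-", 1)[1])
        # keep the first n characters in the clear, hash the rest
        return value[:n] + hashlib.sha256(value[n:].encode()).hexdigest()
    if transform_type == "MASK-DATE":
        return value.replace(month=1, day=1)   # always the 1st of January
    if transform_type == "MASK-NUMBER":
        return 0
    if transform_type == "MASK-HIDDEN":
        return "hidden"
    raise ValueError(f"unknown transformation: {transform_type}")

def transform_record(record, rules):
    """Apply {column: transformation type} rules to one record dict."""
    return {col: transform_value(val, rules[col]) if col in rules else val
            for col, val in record.items()}

record = {"id": 42, "email": "jane@example.com", "salary": 3000}
rules = {"email": "HASH", "salary": "MASK-NUMBER"}
print(json.dumps(transform_record(record, rules)))
```

A config.json for the real component expresses the same mapping declaratively; the point here is simply that nulling and hashing named columns is all the “T” this pipeline needs.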

How to use the BroadcastChannel API with Angular
Reading time 6 min
18 JUN 2020

Have you ever heard of the BroadcastChannel API? We hadn't either just a couple of weeks ago. We happened to stumble upon it while looking for a solution that would allow us to communicate between different browser windows of the same origin. In this blog post, we'll discuss the API itself and teach you how to use the BroadcastChannel API within an Angular application.

The BroadcastChannel API

Imagine you opened a web page in multiple tabs, and you want to communicate between these tabs to keep them up-to-date. How would you even start to do that? After some digging around the internet, we came across the BroadcastChannel API that is implemented directly in web browsers. It turns out that this API has already been available since 2015. Mozilla Firefox 38 was the first browser to adopt the specification. Over the course of the next few years, other browsers followed Mozilla's example. Check out the demo below to see an example of what can be done by utilizing this technology. Albeit a very simple demo, it nevertheless shows the true power of the BroadcastChannel API. In this example, the counter is kept in sync between the two windows. It may not be your typical real world example, but you could use the API to:

log out a user of an application that is running in multiple browser tabs,

keep a shopping cart in sync in another browser tab, and

refresh data in other tabs.

The BroadcastChannel is basically an event bus where you have a producer and one or more consumers of the event.

How to set up a BroadcastChannel

Creating a BroadcastChannel

Creating a BroadcastChannel is very simple. It doesn't require any libraries to be imported in your code. You just need to invoke the constructor with a String that contains the name of the channel to be created.

const broadcastChannel = new BroadcastChannel('demo-broadcast-channel');

Sending a message

Now that we've set up a channel, we can use it to post messages.
Posting a message can be done by calling postMessage on the BroadcastChannel that you created earlier.

this.counter++;
broadcastChannel.postMessage({ type: 'counter', counter: this.counter });

postMessage can take all kinds of objects as a message. You can basically send anything that you want, as long as the consumer knows how to handle the received objects. However, it's good practice to have a field on your messages that describes the type of message it is. This makes it easier to subscribe to messages of a specific type instead of having a BroadcastChannel per type of message.

Receiving a message

On the consumer side, you'll need to create a BroadcastChannel with the same name as on the producer side. If the names do not match, you (obviously) won't receive any messages. Next, you need to implement the onmessage callback.

const broadcastChannel = new BroadcastChannel('demo-broadcast-channel');
broadcastChannel.onmessage = (message) => {
  console.log('Received message', message);
};

The BroadcastChannel that posts a message won't receive the message itself, even if it has a listener registered. However, if you create a separate BroadcastChannel instance for posting and consuming messages, the browser window that posted the message will receive the message. Most likely, that's not something you want. To avoid this, it is best practice to create a singleton instance per BroadcastChannel.

Creating a reusable BroadcastService

You don't want to reference the BroadcastChannel API everywhere in your code where you need to produce/consume messages. Instead, let's create a reusable service that encapsulates the logic. That way, if you ever want to replace the BroadcastChannel with another API, you only have to update one service.
import { Observable, Subject } from 'rxjs';
import { filter } from 'rxjs/operators';

interface BroadcastMessage {
  type: string;
  payload: any;
}

export class BroadcastService {
  private broadcastChannel: BroadcastChannel;
  private onMessage = new Subject<any>();

  constructor(broadcastChannelName: string) {
    this.broadcastChannel = new BroadcastChannel(broadcastChannelName);
    this.broadcastChannel.onmessage = (message) => this.onMessage.next(message.data);
  }

  publish(message: BroadcastMessage): void {
    this.broadcastChannel.postMessage(message);
  }

  messagesOfType(type: string): Observable<BroadcastMessage> {
    return this.onMessage.pipe(
      filter(message => message.type === type)
    );
  }
}

In this particular service, we made good use of RxJS Observables. Pay close attention to the messagesOfType function: in this case, we used the standard RxJS filter operator to only return the messages that match the provided type. Nice and simple! The service is almost ready for use in your Angular application. There is only one more challenge that you'll have to face.

Running inside the Angular Zone

If you've been using Angular for some time, you'll probably know about the Angular Zone. Code that runs inside the Angular Zone will automatically trigger change detection. The service above doesn't run in Angular's zone, since it uses an API that does not hook into Angular. If it receives a message and updates the internal state of a component, Angular is not immediately aware of this. That means that you don't immediately see any changes reflected in the browser. Only after the next change detection is triggered will the results be visible inside the browser. To work around this issue, you can create a custom RxJS OperatorFunction. The sole purpose of the OperatorFunction is to make sure that every life cycle hook of an Observable is running inside Angular's Zone.
import { Observable, OperatorFunction } from 'rxjs';
import { NgZone } from '@angular/core';

/**
 * Custom OperatorFunction that makes sure that all lifecycle hooks of an Observable
 * are running in the NgZone.
 */
export function runInZone<T>(zone: NgZone): OperatorFunction<T, T> {
  return (source) => {
    return new Observable(observer => {
      const onNext = (value: T) => zone.run(() => observer.next(value));
      const onError = (e: any) => zone.run(() => observer.error(e));
      const onComplete = () => zone.run(() => observer.complete());
      return source.subscribe(onNext, onError, onComplete);
    });
  };
}

NgZone is an object provided by Angular that you can use to programmatically run code inside Angular's zone. The only thing that remains is to use the above OperatorFunction in our BroadcastService.

...
import { runInZone } from './run-in-zone';

export class BroadcastService {
  ...
  constructor(broadcastChannelName: string, private ngZone: NgZone) {
    this.broadcastChannel = new BroadcastChannel(broadcastChannelName);
    this.broadcastChannel.onmessage = (message) => this.onMessage.next(message.data);
  }
  ...
  messagesOfType(type: string): Observable<BroadcastMessage> {
    return this.onMessage.pipe(
      // It is important that we are running in the NgZone. This will make sure that
      // Angular component changes are immediately visible in the browser when they
      // are updated after receiving messages.
      runInZone(this.ngZone),
      filter(message => message.type === type)
    );
  }
}

After updating the service, changes will be visible immediately upon receiving messages.

Injecting the service

You can use Angular's InjectionToken to create a singleton instance of the service.
Declare the InjectionToken:

export const DEMO_BROADCAST_SERVICE_TOKEN = new InjectionToken<BroadcastService>('demoBroadcastService', {
  factory: () => {
    return new BroadcastService('demo-broadcast-channel', inject(NgZone));
  },
});

Inject the service via the InjectionToken:

constructor(@Inject(DEMO_BROADCAST_SERVICE_TOKEN) private broadcastService: BroadcastService) { }

Is the BroadcastChannel API supported everywhere?

You have to keep the following in mind when using the BroadcastService. It'll only work when:

all browser windows are running on the same host and port,

all browser windows are using the same scheme (it will not work if one app is opened with https and the other with http),

the browser windows aren't opened in incognito mode, and

your browser windows are opened in the same browser (there is no cross-browser compatibility).

All modern browsers support the BroadcastChannel API, except for Safari and Internet Explorer 11 (and below). For a full list of compatible browsers, check out Caniuse. If you need to implement a similar solution in non-supported browsers, you can use the browser's LocalStorage instead.

Takeaway

In this blog post, we briefly described how to make use of the browser's BroadcastChannel API inside an Angular application. We also looked at a solution on how the API can be hooked into Angular's Zone. You can find the full code of the demo on Stackblitz. Moreover, you can consult the BroadcastChannel API documentation on MDN Web Docs.

Reading time 10 min
10 AUG 2018

Nowadays, loads of functionality that was once taken care of in the backend is shifting towards the frontend. As a frontend designer, I'll show you how to create a simple SPA (Single Page Application) without the hassle of maintaining a backend.

What are we creating?

We will make a simple CRUD (Create, Read, Update, Delete) application in the form of a notes app. For this demo we are using: VueJS CLI, VueRouter, ElementUI and Google Firebase. Please note that this is part one of a blog post series. In this part we will set up Firebase, install VueJS locally via the command line interface, create the necessary views and set up the routing. Our database connection will be limited to 'read' only. Creating, updating and deleting notes via our app will be available soon in part two of the blog post series.

Why VueJS?

There are many frameworks and technologies that may be suitable for this project, so choosing VueJS is a personal preference. I have chosen VueJS over other frameworks because:

- VueJS uses HTML templates, which makes it feel much more readable than, for example, JSX in React.
- Its learning curve is much more acceptable for frontend developers who lean more towards design than backend development, which is a great match for our creative team at ACA.
- Single file components (HTML/CSS/JS) and their great reusability.
- It comes with an integrated state manager (vuex) and router (vue-router) instead of relying on external libraries like, for example, redux.
- It's lightweight and easily integratable in existing projects. Angular on its own is huge and has a pretty steep learning curve. React's approach just feels way too messy to me.

Then again, it also comes with some tradeoffs:

- It's less adopted in the West. VueJS was created by Evan You, a former Google engineer. Evan has Chinese roots, and VueJS has a lot of market share in China. You may come across some Chinese documentation if you are looking for third party addons.
- There aren't as many VueJS jobs/developers available in Europe, although knowledge of the framework is getting more out there.
- It's not backed/taken over by a multinational (yet), if you find that important. On the other hand, it has a lot of GitHub contributions from people who simply love VueJS. It was the #1 most starred project on GitHub last year.

Why Firebase?

Firebase is a great service that lets you focus on what matters most: crafting fantastic user experiences. Using Firebase means you don't need to manage servers or write APIs. Firebase can be your hosting, authentication tool, API and datastore. It can be modified to suit a lot of your needs and, besides that, it can perfectly scale along with your project as it grows over time. The 'Spark' plan, which is completely free, offers you up to 100 simultaneous connections, 1GB storage and a 10GB download limit. That's enough for a small project, startups or indie developers that want their idea(s) validated.

Creating the application

Depending on your skill level, the following steps will take you approximately 30 to 60 minutes to complete.

Step 1: Setting up our Firebase environment

Head over to https://firebase.google.com/ and sign up with your Google account. Once that's done, head over to the Firebase console and create a new project. I named mine 'FireNotes'. Since we will be using the Firestore database, head over to the 'Database' tab in the sidebar under the 'Develop' section and enable database usage in test mode*, as seen below.

*Important notice: this will give everybody access to your database. If you plan on releasing a project with Firebase, please dig further into the documentation of database permissions and set it up properly.

Next up, let's add some notes in our database. Add a new collection and name it 'notes'. This will be the parent collection where all our notes live.
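To make the test-mode warning above concrete: Firestore permissions are expressed in a security rules file. The following fragment is an illustration and not part of the original post (the notes match path is an assumption based on the collection we just created); it allows the reads this demo needs while blocking writes until authentication is in place:

```
service cloud.firestore {
  match /databases/{database}/documents {
    // Illustrative only: anyone may read notes, nobody may write.
    match /notes/{note} {
      allow read: if true;
      allow write: if false;
    }
  }
}
```

Test mode, by contrast, effectively allows both read and write for everyone, which is why it should never be shipped.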
Populate it with some dummy notes like the screenshot below, containing an automatic ID, post date, title and content:

Step 2: Setting up our VueJS Project

Fairly easy so far, right? Let's shift the focus to our application now. Open up your console and navigate to your projects folder. Assuming you already have NPM installed, just install Vue CLI via the following command:

```shell
npm install -g @vue/cli
```

Next, let's kickstart your project using the following command:

```shell
vue create firenotes
```

This will prompt you with a choice for a default or manual preset. For now, let's start with the default option with NPM. Once installed, head over to your firenotes folder and start your webserver with the following command:

```shell
npm run serve
```

If you navigate to the provided IP address (normally localhost:8080), you will see the standard 'hello world' view Vue provides:

Step 3: Creating the main layout

Next, let's visualize what our application should look like. I'm not going through styling details in this blog post, so I'll use the ElementUI kit to speed things up.

```shell
# In your ~/projects/appname folder
npm install element-ui -S
```

Head over to /src/main.js, add the ElementUI imports and register it in our VueJS app, right below the 'import Vue' as shown below:

```javascript
import Vue from 'vue'
import ElementUI from 'element-ui'
import 'element-ui/lib/theme-chalk/index.css'

Vue.use(ElementUI)
```

Now we can start using ElementUI components in our templates.
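Vue.use(ElementUI) works because ElementUI exposes an install function that registers its components globally. The following is a minimal sketch of that plugin pattern; MiniApp and ElementUILike are illustrative stand-ins, not Vue's actual implementation:

```typescript
// A plugin is an object with an install() hook; use() calls it once
// and ignores repeated registrations of the same plugin.
interface Plugin {
  install(app: MiniApp): void;
}

class MiniApp {
  components = new Set<string>();
  private installed = new Set<Plugin>();

  use(plugin: Plugin): this {
    if (!this.installed.has(plugin)) {
      this.installed.add(plugin);
      plugin.install(this);
    }
    return this;
  }

  component(name: string): void {
    this.components.add(name);
  }
}

// Stand-in for ElementUI: registers a few of the components used in this post.
const ElementUILike: Plugin = {
  install(app) {
    ['el-container', 'el-table', 'el-button'].forEach((name) => app.component(name));
  },
};
```

This once-only install behavior is why calling Vue.use(ElementUI) twice by accident is harmless.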
I've replaced the Vue logo with my own (download here), and changed the App.vue template as follows:

```html
<template>
  <div id="app">
    <el-container>
      <el-header>
        <img class="logo" src="./assets/logo.png" />
      </el-header>
      <el-container>
        <el-aside width="300px">
          <el-menu default-active="home" class="el-menu-vertical-demo">
            <el-menu-item index="home">
              <i class="el-icon-menu"></i>
              <span>Home</span>
            </el-menu-item>
            <el-menu-item index="notes">
              <i class="el-icon-document"></i>
              <span>Notes</span>
            </el-menu-item>
          </el-menu>
        </el-aside>
        <el-main>
          <h1>Welcome to FireNotes</h1>
          <p>A simple CRUD (Create, Read, Update, Delete) application in the form of a notes app.</p>
          <p>For this application we are using: VueJS CLI, VueRouter, ElementUI and Google Firebase.</p>
        </el-main>
      </el-container>
    </el-container>
  </div>
</template>

<script>
export default {
  name: 'App'
}
</script>

<style>
html, body {
  margin: 0;
}
#app {
  font-family: 'Avenir', Helvetica, Arial, sans-serif;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
  color: #2c3e50;
}
.el-header {
  border-bottom: 1px solid #e6e6e6;
  display: flex;
  align-items: center;
  width: 100%;
}
.el-header button {
  float: right;
}
.el-menu-item {
  border-bottom: 1px solid #e6e6e6;
}
.logo {
  max-width: 50%;
  max-height: 50%;
  margin-right: auto;
}
</style>
```

Which makes it look like:

Step 4: Setting up our routes

So far so good! Next we can install vue-router, which is the official VueJS router. It deeply integrates with your VueJS application's core, making single page applications a breeze. With vue-router we will be able to navigate to our notes page, which we will set up later.

```shell
# In your ~/projects/appname folder
npm install vue-router
```

Just like ElementUI, the vue-router needs to be imported and registered within our VueJS app.
Head over to main.js and replace its contents with the following code:

```javascript
import Vue from 'vue'
import ElementUI from 'element-ui'
import 'element-ui/lib/theme-chalk/index.css'

Vue.use(ElementUI)

import VueRouter from 'vue-router'

Vue.use(VueRouter)

import App from '@/App'
import HelloWorld from '@/components/HelloWorld'
import Notes from '@/components/Notes'

const routes = [
  { path: '*', redirect: '/home' },
  { path: '/', redirect: '/home' },
  { path: '/home', name: 'HelloWorld', component: HelloWorld },
  { path: '/notes', name: 'Notes', component: Notes },
]

const router = new VueRouter({ routes })

new Vue({
  el: '#app',
  router: router,
  render: h => h(App),
  components: { App }
})
```

What happened here? Basically, we have specified the URLs of our two navigation items and assigned them to two different components. We've also included rules to automatically go to the homepage on load, or whenever a URL triggers a non-registered route. Last but not least, we've registered the routes via 'new VueRouter', which is now embedded into our application core as it gets passed on to our 'new Vue' instance.

Now, saving this will probably cause some errors, since we didn't do any work on the Notes component yet. Let's create a simple Notes.vue file inside the components folder.

```html
<template>
  <div class="notes">
    Notes will go here
  </div>
</template>

<script>
export default {
  name: 'Notes'
}
</script>
```

Next, we need our app to display the correct components based on these specified routes. This will require some changes to our App.vue file on both the navigation and the content panel (el-main).
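The redirect rules in the routes array can be illustrated with a tiny resolver. This is a sketch of the matching behavior we rely on, not vue-router's implementation; resolveRoute is a hypothetical helper:

```typescript
// Mimics the routes table above: exact path matches win, '*' is the
// catch-all fallback, and redirects are followed until a named route.
interface RouteEntry {
  path: string;
  redirect?: string;
  name?: string;
}

const routeTable: RouteEntry[] = [
  { path: '*', redirect: '/home' },
  { path: '/', redirect: '/home' },
  { path: '/home', name: 'HelloWorld' },
  { path: '/notes', name: 'Notes' },
];

function resolveRoute(path: string): string | undefined {
  const match =
    routeTable.find((r) => r.path === path) ??
    routeTable.find((r) => r.path === '*');
  if (match && match.redirect) {
    return resolveRoute(match.redirect);
  }
  return match ? match.name : undefined;
}
```

So '/notes' resolves to the Notes component, while '/' and any unknown URL both end up at HelloWorld via the redirects.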
```html
<template>
  <div id="app">
    <el-container>
      <el-header>
        <img class="logo" src="./assets/logo.png" />
      </el-header>
      <el-container>
        <el-aside width="300px">
          <el-menu default-active="home" :router="true" class="el-menu-vertical-demo">
            <el-menu-item index="home">
              <i class="el-icon-menu"></i>
              <span>Home</span>
            </el-menu-item>
            <el-menu-item index="notes">
              <i class="el-icon-document"></i>
              <span>Notes</span>
            </el-menu-item>
          </el-menu>
        </el-aside>
        <el-main>
          <router-view/>
        </el-main>
      </el-container>
    </el-container>
  </div>
</template>

<script>
export default {
  name: 'App'
}
</script>

<style>
html, body {
  margin: 0;
}
#app {
  font-family: 'Avenir', Helvetica, Arial, sans-serif;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
  color: #2c3e50;
}
.el-header {
  border-bottom: 1px solid #e6e6e6;
  display: flex;
  align-items: center;
  width: 100%;
}
.el-header button {
  float: right;
}
.el-menu-item {
  border-bottom: 1px solid #e6e6e6;
}
.logo {
  max-width: 50%;
  max-height: 50%;
  margin-right: auto;
}
</style>
```

Here we added :router="true" to the el-menu component, which enables each el-menu-item's index to be used as the 'path' to activate the route action. We also moved the introduction heading and paragraphs to HelloWorld.vue (you should too ;-)) and replaced them with router-view. Now the router is fully responsible for which component is shown inside that panel. Awesome!

Next up we'll tweak Notes.vue to actually display notes. For now, all content is static. Later, we will populate the notes table with data from our Firestore database.
```html
<template>
  <div class="notes">
    <h1>
      Notes
      <el-button type="primary" size="medium">
        <i class="el-icon-circle-plus"></i> Add note
      </el-button>
    </h1>
    <el-table :data="tableData" border>
      <el-table-column type="expand">
        <template slot-scope="props">
          <p>{{ props.row.details }}</p>
        </template>
      </el-table-column>
      <el-table-column label="Note title">
        <template slot-scope="props">
          {{ props.row.name }}
        </template>
      </el-table-column>
      <el-table-column label="Date added / modified" prop="date"></el-table-column>
      <el-table-column fixed="right" label="" width="90">
        <template slot-scope="scope">
          <el-button type="info" size="small" icon="el-icon-edit" circle></el-button>
          <el-button type="danger" size="small" icon="el-icon-delete" circle style="margin-left: 5px;"></el-button>
        </template>
      </el-table-column>
    </el-table>
  </div>
</template>

<script>
export default {
  name: 'Notes',
  data() {
    return {
      tableData: [
        {
          date: '2018-07-03',
          name: 'Lorem ipsum dolor sit amet',
          details: 'Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industrys standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book.'
        },
        {
          date: '2018-07-02',
          name: 'Consectetur adipiscing',
          details: 'Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industrys standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book.'
        }
      ]
    };
  }
}
</script>

<style>
h1 {
  margin-top: 0;
  display: flex;
  align-items: center;
  width: 100%;
}
.el-button {
  margin-left: auto;
}
.el-collapse-item__header {
  font-size: 16px;
}
.el-table {
  font-size: 14px;
  font-weight: 700;
}
.el-table__expanded-cell p {
  font-size: 16px;
  font-weight: 400;
}
.el-table th {
  background: #fafafa;
}
.el-table th .cell {
  color: #333;
}
</style>
```

Which makes it look like:

Step 5: Coupling Firebase with our project

Great!
Now everything is in place for us to transform the static application into a Firestore data-driven application. Let's install the dependencies in order to connect with Firebase:

```shell
# In your ~/projects/appname folder
npm install firebase
```

Next, initialize Firebase in main.js and export the database handle:

```javascript
import firebase from 'firebase'
import 'firebase/firestore'

firebase.initializeApp({
  apiKey: '',
  projectId: ''
})

export const db = firebase.firestore()
const settings = { timestampsInSnapshots: true }
db.settings(settings)
```

Here you still need to populate 'apiKey' and 'projectId', which can be found under the Project settings link via the cogwheel:

Next, head over to the Notes.vue component, where we will empty the static array of notes and fill it with the ones we created in our Firestore database earlier on. We also defined an empty-text on the table, which will be shown while data is being loaded into our table.

```html
<template>
  <div class="notes">
    <h1>
      Notes
      <el-button type="primary" size="medium">
        <i class="el-icon-circle-plus"></i> Add note
      </el-button>
    </h1>
    <el-table :data="tableData" empty-text="Loading, or no records to be shown." border>
      <el-table-column type="expand">
        <template slot-scope="props">
          <p>{{ props.row.content }}</p>
        </template>
      </el-table-column>
      <el-table-column label="Note title">
        <template slot-scope="props">
          {{ props.row.title }}
        </template>
      </el-table-column>
      <el-table-column label="Date added / modified" prop="date"></el-table-column>
      <el-table-column fixed="right" label="" width="90">
        <template slot-scope="scope">
          <el-button type="info" size="small" icon="el-icon-edit" circle></el-button>
          <el-button type="danger" size="small" icon="el-icon-delete" circle style="margin-left: 5px;"></el-button>
        </template>
      </el-table-column>
    </el-table>
  </div>
</template>

<script>
import { db } from '@/main'

export default {
  name: 'Notes',
  data() {
    return {
      tableData: []
    }
  },
  created() {
    db.collection('notes').get().then(querySnapshot => {
      querySnapshot.forEach(doc => {
        const data = {
          'id': doc.id,
          'date': doc.data().date,
          'title': doc.data().title,
          'content': doc.data().content
        }
        this.tableData.push(data)
      })
    })
  }
}
</script>

<style>
h1 {
  margin-top: 0;
  display: flex;
  align-items: center;
  width: 100%;
}
.el-button {
  margin-left: auto;
}
.el-collapse-item__header {
  font-size: 16px;
}
.el-table {
  font-size: 14px;
  font-weight: 700;
}
.el-table__expanded-cell p {
  font-size: 16px;
  font-weight: 400;
}
.el-table th {
  background: #fafafa;
}
.el-table th .cell {
  color: #333;
}
</style>
```

Now the notes we manually added to our Firestore database earlier show up in our application, making it a little less static. Great success!

What's next?

In our follow-up blog post on this subject, we dig further into the remaining functionalities: creating, updating and deleting notes in our Firebase database, completely controlled by our VueJS app. Since we had a question in the comments on login functionality, we also added that as a bonus!
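The created() hook above boils down to a small mapping from Firestore document snapshots to flat row objects for the table. As an illustrative sketch, NoteDoc below is a simplified stand-in for Firestore's real DocumentSnapshot (only the id property and the data() method used by the component are modeled):

```typescript
// Simplified stand-in for a Firestore DocumentSnapshot.
interface NoteDoc {
  id: string;
  data(): { date: string; title: string; content: string };
}

// Flattens one snapshot into the row shape the el-table binds to.
function toTableRow(doc: NoteDoc): { id: string; date: string; title: string; content: string } {
  const d = doc.data();
  return { id: doc.id, date: d.date, title: d.title, content: d.content };
}
```

Keeping the document id on each row pays off in part two, where update and delete operations need it to address the right Firestore document.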
