Written by
Peter Hardeel
Reading time 3 min
6 MAY 2025

Nowadays, there is constant talk about the importance of a 'real-time enterprise' that can immediately notice and respond to any event or request. So, what does it mean to be 'real-time'?

Real-time technology is crucial for organizations because real-time decision-making is a competitive differentiator in today's fast-paced world. A real-time application requires the ability to ingest, structure, analyze and act on data in real-time. The emphasis lies on providing insights and decision-making whenever an event occurs, rather than days or even weeks afterwards.

Today's business systems are already capable of delivering the first part of what a real-time application promises: collecting data in real-time. The other criterion, analyzing this data and gaining valuable insights in real-time, is a whole other challenge. It is also often confused with the former, diverting attention from what should be the main considerations when planning a real-time application:

- What are the decisions your business needs to make when receiving data?
- What slows down your business in making those decisions?
- How will your business benefit from this ability to make decisions?

Enterprises must first be able to answer these questions and make the answers clear to the rest of the business before a real-time application can be implemented successfully.

Goals of real-time

The sole purpose of a real-time application is to make decisions in real-time. As these applications will control a much larger part of an enterprise, close collaboration with humans will offer significant advantages and become a requirement in the future. Software will automate deterministic functions and standardized activities. At the same time, humans will add experience, intuition, and values to:

- assure the most appropriate actions are taken,
- intervene when they are not, and
- take charge when it is not clear enough what to do.

By collaboration, we mean communication that goes far beyond text, email or chat systems.
We are talking about truly sophisticated collaborative relationships in which a software application and a human being communicate and are each aware of the context of what is happening, how a situation changes over time, and what choices or recommendations are likely to produce the best results.

The 3 steps to become a real-time enterprise

Now that we've established what a real-time enterprise is: how do you become one? There are 3 key steps to take into account:

1. Put business needs first: adopt the mindset to create and change both business and operational processes with a real-time-first attitude. For example: allowing certain automatic decisions depending on which data streams are feeding your applications.
2. Speaking of data... get it right! Moving to real-time also requires robust data management that supports both emerging streaming data and traditional data sources for real-time data integration.
3. Look to the edge: as we've already established, going real-time also requires implementing real-time analytics where the data originates. This requires autonomous support to perform analytics closer to the data source without connecting to the cloud, creating more flexible and powerful deployments. With edge computing, organizations can ingest, enrich and analyze data locally, run machine learning models on cleansed datasets and deliver enhanced predictive capabilities.

The velocity and volume of data arriving in real-time require in-memory stream analytics and complex event processing. This calls for a shift from a traditional 3-tier database-centric architecture (with presentation, application and data tiers) to a modern event-driven architecture for application development.

Conclusion

Although we've only scratched the surface, we hope this article has shown you how exciting and valuable real-time applications can be. If you want to learn more or explore ways to implement these technologies in your business, get in touch.
We would be happy to help you transform into a real-time enterprise!
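As an illustrative aside: the event-driven style this article advocates, where a decision fires the moment data arrives instead of waiting for a later batch query against a database tier, can be sketched in a few lines of Python. Everything here (the `EventBus` class, the `sensor.reading` topic, the 800 ppm threshold) is hypothetical, not a reference to any specific product:

```python
from collections import defaultdict


class EventBus:
    """Tiny in-memory event bus: handlers react the moment an event arrives."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        # In an event-driven architecture, publishing an event triggers every
        # subscribed handler immediately, rather than leaving the data to be
        # picked up by a periodic batch job.
        for handler in self._handlers[event_type]:
            handler(payload)


alerts = []
bus = EventBus()
# Hypothetical business rule: flag any reading above an 800 ppm threshold.
bus.subscribe("sensor.reading", lambda e: alerts.append(e) if e["co2"] > 800 else None)
bus.publish("sensor.reading", {"co2": 950})  # triggers the alert handler
bus.publish("sensor.reading", {"co2": 500})  # below threshold, no alert
```

Real deployments would of course use a message broker and stream-processing engine rather than an in-process list, but the decision-at-event-time pattern is the same.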

Read more
Reading time 9 min
26 JAN 2022

The Internet of Things (IoT) is the idea that everyday objects are embedded with sensors, software, and wireless connectivity to collect data about themselves and their environment. In this blog post, we describe how we used IoT technology to enable a digital transformation for one of our customers in order to improve the health of people in workplaces.

Modern canary in the coal mine

Our customer IDEWE is an external service for prevention and protection at work. The organization is fully committed to improving the working climate for its clients. To help IDEWE in its goal, we've built them an application that helps with external exposure assessment. This application, based on IoT, gathers data from sensors at different locations to visualize long-term measurements and the effects of actions taken. A solution was needed that allowed non-intrusive placement of sensors over a period of time and could display measurements in a dashboard without much configuration. With this application, IDEWE is able to go from discrete measurements to continuous monitoring and can formulate recommendations to improve people's health in real-time.

How exactly? IDEWE uses a device called Little Lilly, shaped like a yellow bird. Using its sensors, the Little Lilly measures CO₂, temperature, relative humidity and Total Volatile Organic Compounds (TVOC). The little bird also features an indicator light to signal low (green) or high (red) CO₂ levels. The device's form factor is a nod to the canaries used in coal mines to warn miners of decreasing air quality.

Our application collects the data from Little Lillies and visualizes it in a dashboard filtered by location, period or sensor. If any actions are taken, e.g. opening a window or turning on the A/C when a Little Lilly reports a high CO₂ concentration, they are displayed in the dashboard as well. That way, users can immediately see what effect those actions had on the measurements.
The remainder of this blog describes how to securely ingest the telemetry data from the IoT devices (Little Lillies) into a data platform with the necessary analytics dashboards for IDEWE to use.

MQTT as a lightweight communication protocol

In some situations where (IoT) devices are used, the communication channels or networks are unreliable, while reliable communication is still required. For example, a car with sensors could drive through a tunnel and temporarily lose connection. Typically, IoT devices are small and resource-constrained, which implies the need for a very lightweight communication protocol. For these reasons, MQTT is used as the communication protocol. It is an OASIS standard messaging protocol for IoT, designed as a lightweight publish/subscribe messaging transport to enable small devices with low network bandwidth and resources. MQTT scales to millions of devices. With persistent sessions and quality-of-service levels, MQTT supports reliable message delivery over unreliable networks. It is a protocol on top of TCP and is independent of the type of network used (Wi-Fi, 4G/5G, LoRaWAN, ...).

MQTT makes use of a broker to facilitate publish/subscribe communication. Note that this MQTT broker should not become a single point of failure and needs to be highly available and scalable. For our project at IDEWE, we used the Google IoT Core solution, which provides a fully managed MQTT broker out-of-the-box to address these requirements. IoT Core runs on Google's serverless infrastructure, which scales automatically in response to real-time changes.

Security is crucial

With more and more data being collected every day and more devices present in our daily lives, the topic of security is more important than ever when designing an IoT solution. IoT devices are spread across all sorts of uncontrolled environments in the field, which makes them vulnerable to attacks.
IoT devices sense a wide range of telemetry in people's cars, homes, working environments or even the public space. Some of this data is not meant for public eyes and should be protected as sensitive data. Data leaks risk serious damage to the reputation of the affected companies. Security is always a matter of introducing measures at different levels to make it as difficult as possible for an attack to succeed. There are three levels on which an IoT solution can be secured:

- Network level: one way to provide a secure and trustworthy connection is to use a physically secure network or VPN for all communication between clients and brokers. This solution is suitable for gateway applications where the gateway is connected to devices on one side and to the MQTT broker on the other side. However, in a more public setting, a physically secure network or VPN is not always an option. In that case, the other levels of security are crucial.
- Transport level: when confidentiality is the primary goal, TLS/SSL is commonly used for transport encryption. This is a secure and proven way to make sure data can't be read or tampered with during transmission. Moreover, it provides client-certificate authentication to verify the identity of both sides.
- Application level: on the transport level, communication is encrypted and identities are authenticated. The MQTT protocol additionally provides a client identifier and username/password credentials to authenticate devices on the application level. These properties are provided by the protocol itself. Authorization, or control of what each device is allowed to do, is defined by the specific broker implementation.

With the MQTT protocol, both transport-level encryption and application-level authentication using protocols such as OAuth can be applied.
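Application-level authentication is often token-based. The following sketch builds a compact JWT (header.payload.signature) with Python's standard library only. To stay dependency-free it signs with symmetric HMAC-SHA256; note that real IoT brokers such as Google IoT Core require asymmetric algorithms (RS256/ES256) signed with the device's private key, so treat the key handling here purely as an illustration of the token structure:

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64 for each segment.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_jwt(audience: str, secret: bytes, lifetime_s: int = 3600) -> str:
    """Build a compact JWT signed with HMAC-SHA256 (illustrative only)."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    # Standard claims: intended audience, issued-at, expiry.
    claims = {"aud": audience, "iat": now, "exp": now + lifetime_s}
    signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(claims).encode())}"
    signature = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(signature)}"


# Hypothetical project id and secret, for illustration only.
token = make_jwt("my-iot-project", b"shared-device-secret")
```

The broker verifies the signature and rejects tokens whose `exp` claim has passed, which is what forces devices to periodically re-authenticate.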
More specifically, when using the Google IoT Core MQTT broker, it is required to encrypt all communication using TLS, authenticate clients using mutual TLS certificates, and authenticate every communication using a valid JWT token signed with the correct certificate. Furthermore, only devices known to IoT Core's device manager and bound to a gateway (if used) are allowed to publish their telemetry data. This results in a highly secure IoT environment to get telemetry data from the edge into the cloud.

Standardize protocols and message formats to enjoy flexibility

Being able to try out multiple types of devices and communication network technologies is key when designing and evolving an IoT solution. Devices evolve constantly. Not being stuck with a specific kind allows you to keep up with new possibilities or integrate existing legacy devices when needed. Some types of devices and/or network technologies might work perfectly in specific environments, but not in others. For example: in office environments, there might be a Wi-Fi connection or 4G/5G, but in remote areas this might not be the case. Network technologies geared towards longer ranges, such as LoRaWAN, are then necessary. LoRaWAN allows a single device to communicate across 10-20 km (the world record even stands at over 700 km). In other cases, some devices might not have enough power to use Wi-Fi or 4G/5G. Here too, LoRaWAN networks can help, as they require very low power consumption when communicating. For legacy devices using very old or proprietary communication technology, a combination of devices and a gateway that supports this technology might be needed. The gateway is then responsible for adapting to a more standard protocol like MQTT. For the Little Lilly project at IDEWE, we only standardized the communication protocol and message format being ingested.
We kept full flexibility regarding the device type and network technology, as long as MQTT and an agreed-upon message structure are used. An example message format is given below:

```json
{
  "version": "2.0.0",
  "deviceId": "Li074726",
  "timestampEpoch": "1643100924.268599987",
  "timestampUtc": "2022-01-25T08:55:24.268600Z",
  "metrics": [
    { "name": "co2", "value": 805, "unit": "ppm" },
    { "name": "temperature", "value": 78, "unit": "celsius" },
    { "name": "humidity", "value": 29, "unit": "percent" },
    { "name": "tvoc", "value": 78, "unit": "ppb" }
  ]
}
```

Note that this is a simple example. Depending on the use cases you want to support, a more advanced format might be advised. This example telemetry message contains the reported CO₂ value, the timestamp when it was measured, and an identifier to know where or on which Little Lilly device it was measured.

In addition to the actual message format, MQTT also requires a topic to publish messages to. The way topics are structured is called a topic namespace. This topic namespace also needs to be part of the agreed-upon structure. For example, when using Google IoT Core, a device can publish its telemetry data to a topic with structure `/devices/Li074715/events` and its state data to a topic with structure `/devices/Li074715/state`. Google does not impose any message format structure itself.

Transform your business insights and services by combining IoT telemetry with other enterprise data

A pure IoT solution allows you to put IoT devices everywhere and sense, for example, the current CO₂ levels by subscribing to the MQTT broker with a mobile device. If we stop there, we are not realizing the full potential of IoT. A next step is to get insights into the evolution of these measurements over time. For this, we need to persist and access the history of CO₂ measurements. MQTT is lightweight and designed for high scalability.
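A brief aside on topic namespaces: subscribers select messages with topic filters, where `+` matches exactly one topic level and `#` matches any number of remaining levels (and must come last in the filter). Any real broker implements this for you; the sketch below reimplements the standard matching rules in plain Python purely to make the semantics concrete:

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Return True if an MQTT topic matches a subscription filter.

    '+' matches exactly one topic level; '#' matches any number of
    remaining levels and must be the last level of the filter.
    """
    filter_levels = filter_.split("/")
    topic_levels = topic.split("/")
    for i, level in enumerate(filter_levels):
        if level == "#":
            return True  # swallows the rest of the topic, however deep
        if i >= len(topic_levels):
            return False  # filter is more specific than the topic
        if level not in ("+", topic_levels[i]):
            return False  # literal level mismatch
    # Without a trailing '#', the level counts must line up exactly.
    return len(filter_levels) == len(topic_levels)
```

With the namespace above, a dashboard could subscribe to `/devices/+/events` to receive telemetry from every Little Lilly at once, while `/devices/#` would also pick up the state topics.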
Because of this lightweight design, MQTT does not offer durable persistence of messages, but directly pushes them to subscribed consumers of the data. Only a limited buffer is supported for reliable communication. Google IoT Core solves this with an MQTT bridge, which automatically publishes all MQTT messages onto a more durable pub/sub solution, i.e. Google Pub/Sub. This is also a more general-purpose pub/sub component, which makes it easier to connect to other software systems you may use.

For IDEWE, the MQTT bridge allowed us to ingest the telemetry data into a data platform. That data is then visualized in a dashboard for analyzing the evolution of CO₂ levels in schools, offices, restaurants and other workplaces. We used a combination of Google Dataflow (based on the Apache Beam programming model), Google BigQuery and Google Data Studio to construct this dashboard. By using these fully managed services from Google, setting up a future-proof IoT and data architecture is fairly easy to do. It's also not hard to set up extensions with additional measurements and sensors (temperature, humidity, light, sound, ...), or even actuator IoT devices.

Having this IoT data as part of an enterprise-grade data platform allows you to go to the next level and combine the IoT data with other datasets you may have available in your enterprise. Designing this data as a product or a group of products in a Data Mesh then becomes more important, but that is another topic for another blog series. ;)

Conclusion

With the above solution, ACA Group has enabled IDEWE to augment their existing services or even introduce completely new services to their customers. The end result is a digital transformation in which the well-being and health of employees in the workplace (and children in schools) are improved using new technology and possibilities.
Ultimately, the use of IoT devices for health monitoring is still in its early stages, and many more devices will become available in the future. However, these devices provide an easy way for people to improve their health without constant manual measurement. So far, the data collected from these devices has shown that people can make changes to their daily routine based on that data and improve their health. The Internet of Things is changing how we live, work and play. If you want to learn more about this fascinating concept, contact us today. We'll gladly help you make the most informed decision possible for your IoT project needs.

Sources:
- Exposome and Exposomics
- (Dutch) Het exposoom, een zoektocht naar de oorzaken van ziekte in onze werkomgeving

Reading time 7 min
19 OCT 2021

Theoretically, we should be focusing on the unread emails that have come in since the last time we checked our inbox. But as human beings, our eyes are sometimes drawn to the read emails, the ones that we have already dealt with and should forget about, so we lose our focus and get distracted. A recent study has shown that it takes on average 64 seconds for us to get back on track. In other words, one of the main benefits of Inbox Zero is that as each new email arrives, you're not distracted by seeing a swirl of earlier messages. Not only does that distraction make you less productive, it also causes you to make more careless mistakes.

The entire Inbox Zero method centres on the idea that all emails received fall broadly into three categories:

1. Emails that you need to take action on.
2. Emails that require another party to take action, but where you're still responsible for the outcome. For example, someone owes you information for a report that you're writing.
3. Emails that you might want to read through again because they contain helpful information.

Although I'll be using Gmail for this blog, Inbox Zero works with most other email service providers like Outlook. Let's get started!

Keyboard shortcuts

First, go to your Gmail settings in the top right corner and click 'See all settings'. Then go to the Advanced tab, enable 'auto-advance' and click 'Save changes'. You are redirected to the main screen. Let's dive right back into 'See all settings'. This time, go to the General tab, scroll down to 'auto-advance', and make sure the 'next newer conversation' option is selected. Also scroll down to keyboard shortcuts and ensure that the 'Keyboard shortcuts on' radio button is selected. Click 'Save changes'.

Defining labels

Now, go back into the settings by clicking 'See all settings'. This time, go to the Labels tab and create four different labels.
The first one is "Follow-up". The second is just "Waiting". For our third label, we'll make one called "Read-Later". And finally, we'll create a label "Calendar". For the next step to work, make sure you follow along exactly as described above. Don't worry if you want to customize more; you can always adjust labels later on.

Using multiple inboxes

Now, go to the Inbox tab and change the inbox type from the default to 'Multiple inboxes'. For the section one search query, type in "l:follow-up" and name the section "Action Items". Section two is "l:waiting", named "Awaiting Reply". Section three is "l:read-later", named "Read Through Later". As you've probably noticed, we're saving the Calendar label for something later on. We'll get back to that. For now, though: set the maximum page size to 10, put the multiple inbox position to the right of the inbox, deselect importance markers, don't override filters, and click 'Save changes'. Once you've made those adjustments and saved all your changes, you should be brought back to your main inbox screen. Nothing crazy has happened so far: three empty sections have popped up on the right-hand side.

Applying labels

But here is where it gets exciting. Let's say at the beginning of a typical workday, the first thing I'll do is go through my emails and sort them by their types, starting at the very bottom. The first email seems like an interesting one which I might want to refer back to later on. So I type "V" on my keyboard for label and start typing "read later". Once the option comes up, I press "Enter" to apply the "Read-Later" label. Now, because we enabled the auto-advance feature, once we move the current email, we go to the next newer email down the list, which saves us a lot of time. The next email seems to be one I don't need to take action on, so I can archive it by pressing "E" straight away.
Another example is an email that has been sent to a vast email group and doesn't directly affect me. I could archive this directly, but here's a tip: you can press "M" to mute. This means that if other people reply-all to this email chain, you will not receive a notification unless it's sent to you directly. So I press "M" instead of "E" here, and it mutes and archives that email. The next email requires action from my side. So I press "V" for label, start typing "follow-up", and press "Enter" once the Follow-up label pops up. Boom, I've applied the Follow-up label to this email.

So this is how my inbox looks right now: the main inbox has zero emails, and all the action items and read-through emails have shifted to the right because I applied the labels and archived those emails.

Color coding for more oversight

We can make this a little more visually appealing by applying some color coding. I like to add red to the "Follow-up" label, green to "Read-Later", and yellow to "Waiting". Of course, you're free to choose your own colors.

Another small tip: if you go into any of these emails (let's say I just clicked on a big project one) and want to go back to the main inbox screen without clicking the Inbox tab, you can type "G" then "I", which stands for 'go to inbox'. You'll be brought back to the main inbox screen with a refreshed inbox.

So the idea here is that your main inbox is now clean. You can focus your attention on the follow-up action items you have already labelled and spend any free time going through the read-through emails. Some of you might be wondering why the Awaiting Reply inbox is still empty. In this inbox, we list the emails where someone else needs to take action, but you are still responsible for the outcome. For example, you're putting together a report and waiting for your colleague to send you some data.
In that case, let's say you write an email asking for the data. You press "C" to compose the email and "Command + Enter" to send it out. The manual way to apply the "Waiting" label is to go to your Sent folder, click the email you just sent, press "L" for label, start typing "waiting", and press "Enter" to apply it. Then press "G" and "I", and you're brought back to your main screen. Wait a bit, and you'll see your sent email has popped up in the Awaiting Reply inbox section. This reminds you to chase the other person for that information if they haven't gotten back to you in a few days.

Cleaning up calendar invites

Another small hack that can make your life a lot easier involves calendar invites. By default, if you send someone a calendar invite using Google Calendar and they accept, you will always receive a notification saying the other person has accepted your invite. If you think about it, there's no need to know if someone has accepted; you only need to know if someone has rejected your calendar invite so that you can reschedule. There's a filter for that. Create a filter with the following properties:

- In the 'Includes the words' field, type in "filename:invite.ics AND accepted OR Geaccepteerd OR Accepté OR Accepted"
- Create the filter
- Select 'Skip the inbox' and 'Mark as read', and apply the label Calendar
- Click 'Create filter'

Now, whenever you send an invitation out, the accept notification will be archived by default and skip your main inbox, so you don't have to see it.

Getting started with Inbox Zero

How can you apply all this to your inbox? You might have hundreds, if not thousands, of unarchived emails. My advice is to go back three to four weeks and label your emails in the way I just described above. Then comes the scary part: select all the conversations in your primary inbox and click archive.
Don't worry, because archiving does not mean deleting. It simply means the email is gone from your primary inbox; you can always find it again in your All Mail folder. Now that you have your inbox set up more efficiently, feel free to reach out if you have any questions. If you want help setting up and organizing a productive email system, I'm happy to assist! For now, though, enjoy having a clean slate by using these techniques, so next time someone asks, "How do I get to Inbox Zero?" you can just say, "I got it handled."

Reading time 5 min
24 MAR 2021

What is Event Storming?

Before getting into details, let's discuss Event Storming's role in an agile context. Event Storming has become a very popular methodology during the past years and has found its place in the software development lifecycle as a requirements gathering technique. Created by Alberto Brandolini in 2012 as an alternative to precise UML diagramming, Event Storming is a workshop-style technique that brings project stakeholders together (both developers and non-technical users) to explore complex business domains in domain-driven design. One of the strengths of Event Storming is its focus on the business stakeholders and the high level of interaction. The technique is straightforward and requires no technical training at all.

Using Event Storming, there are different goals you can pursue:

- identify improvement areas of an existing business flow;
- explore whether a new business model is viable;
- gain a shared understanding of how a business operates;
- design clean and maintainable event-driven software.

There are three primary levels of abstraction for Event Storming:

- Big picture: used for exploring the current understanding of the system by gathering key people with different backgrounds and creating a shared understanding.
- Process modelling: at this level, we model a single business process from start to finish, clarifying all the business rules and making sure everyone is aligned.
- Software design: in this last step, we start designing the software based on building blocks from Domain-Driven Design and a reactive programming paradigm. Each sticky note potentially turns into a software artefact during the implementation phase.

When applying Event Storming, you first need to identify the Domain Events in the problem domain on a timeline. The source of a Domain Event could be the following:

- a user interaction;
- an event coming from an external system;
- the result of time passing by;
- the consequence of another Domain Event.
We then write these domain events down on orange sticky notes. When all domain events are defined, the second step is to find the commands that caused them. Commands are written on blue notes and placed directly before the corresponding domain event. Finally, you identify the aggregates within which commands are executed and where events happen. These aggregates are written down on yellow sticky notes.

Using the System Modeler

In the past years, we have embraced Event Storming as a requirements gathering technique within ACA-IT Solutions, so much so that it's now an integrated part of our portfolio and of how we develop software for our customers. If you'd like more details about that or want to know more about Event Storming, you can contact us.

The System Modeler uses Event Storming as an inspiration for documenting (modelling) the events that represent business processes, configuring high-level properties associated with those events, and then allowing automatic generation of Apps and Collaboration Types from the model. A System Modeler session involves the use of five virtual sticky notes to represent:

- Event: something that happens in the business
- Reaction: a response to an event
- Command: a user-driven action that produces events
- External System: a system that is external to the business
- Issue: documents potential problems or unknowns about events

The System Modeler also makes use of one container:

- Bounded Context: contains notes that share a common vocabulary

Below, you can see the result of an Event Storming session in the System Modeler, representing a city's pothole reporting and tracking system.
The model represents:

- a mobile app that allows city residents to report a pothole;
- creation of new database records to document reported potholes;
- real-time notification of city services of new pothole reports;
- the ability for workers to update the status of a pothole;
- real-time notification to the reporting resident when the status has changed.

System Modeler is a great way to bridge the gap between the requirements gathering of an event-driven application and the actual implementation. In this case, we're doing that electronically on the canvas. Moreover, this is a collaborative environment, allowing several people to work on the model at the same time. Using the System Modeler, users can collaborate not only in a given room, but across any number of geographic locations. This is an awesome way to genuinely do a distributed requirements gathering session, even more so with the pandemic still preventing lots of people from going into the office!

From requirements to superfast POCs

Based on this requirements gathering session, we can now take the requirements model and create an application. All users need to do is switch from 'model mode' to 'generate mode' and group the various elements. After having defined topics and collaboration tasks, you just click the Generate button. This simple action alone generates about 70% of this particular application's code! This makes System Modeler probably the easiest way to move very quickly from application requirements design into application development itself.

Conclusion

Modern applications need to operate in real-time, as they will be driven by what's happening at that time in the real world. They'll need to easily incorporate artificial intelligence and IoT technology, and the applications themselves will need to be distributed at the source of the events. The software logic will need to be able to run everywhere (cloud, edge, on-premises).
These applications will also require integrating human beings in the process for when a higher level of intuition and reasoning is needed. With System Modeler, it's easy to quickly generate a very large portion of such an application. After all, the System Modeler has the ability to gather requirements from business users, domain experts and developers, and very quickly turn those requirements into a running event-driven application. Creating these superfast POCs is a breeze! If you want to learn more about how ACA Group and event-driven technology can help accelerate your digital transformation, please contact us!
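As a technical footnote for the developers in the room: the sticky-note building blocks described above (a command executed inside an aggregate produces a domain event) map almost one-to-one onto code. Here is a hypothetical Python sketch using the pothole example; the class names and structure are illustrative, not actual System Modeler output:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ReportPothole:
    """Command (blue note): a user-driven intent."""
    location: str


@dataclass(frozen=True)
class PotholeReported:
    """Domain event (orange note): a fact that happened in the business."""
    location: str


@dataclass
class PotholeRegister:
    """Aggregate (yellow note): where commands execute and events happen."""
    events: list = field(default_factory=list)

    def handle(self, command: ReportPothole) -> PotholeReported:
        # Executing the command inside the aggregate produces a domain event,
        # which downstream reactions (notifications, dashboards) can consume.
        event = PotholeReported(location=command.location)
        self.events.append(event)
        return event


register = PotholeRegister()
event = register.handle(ReportPothole(location="Main St & 5th"))
```

During implementation, each sticky note from the workshop becomes one of these artefacts, which is exactly why the Software design level of Event Storming translates so directly into event-driven code.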
