

What images does the city of Vancouver evoke for you? Natural beauty and impressive skyline views? Maybe even a seaplane or two. Yet behind this stunning backdrop lies one of the most innovative regions in North America. Our AI expert, Alexander Frimout, set out on a discovery tour together with VOKA. A journey through cleantech, AI and warm crypto waters. This is his report.

Alexander Frimout of ACA Group on an AI technology mission with VOKA
Cleantech and hydrogen: Vancouver breathes sustainability
Canada takes its sustainability ambitions seriously. While political winds elsewhere can still shift unexpectedly, Canadian innovators steadfastly continue to invest in a low-carbon future.
During visits to, among others, Foresight and HTEC, it became clear how deeply hydrogen and cleantech are embedded in the local industry. Think of whisper-quiet hydrogen trucks, infrastructure built to be future-proof, and start-ups developing technology to address the long travel times across vast Canada.
Canadians do all this with a mindset they themselves call “elbows out.” Assertive, protective, and particularly proud of their own innovations.
Visiting HTEC. This company innovates at the intersection of heavy transport and hydrogen.
Quantum and particle physics: the future is literally accelerating here
What will the IT world look like in ten years? If the Quantum Matter Institute at the University of British Columbia has its way: completely different.
From disruptive cryptography to complex simulations for industry and science. The quantum revolution is in the air, and in Vancouver that revolution is becoming increasingly tangible.
Visit to the Stewart Blusson Quantum Matter Institute. Their field of expertise: quantum materials.
We also visited TRIUMF, the lesser-known but equally impressive Canadian sibling of CERN (Conseil Européen pour la Recherche Nucléaire) in Geneva.

Here, elementary particles are accelerated to 75% of the speed of light. Not bad for a particle accelerator with a diameter of “only” 18 metres.
The research ranges from medical applications to astrophysics, showing just how broad and deep technological innovation runs here.
Behind the scenes at TRIUMF. TRIUMF stands for TRI-University Meson Facility, a reference to the number of founding universities.
Start-ups aiming for impact, not buzz
From drone AI to biotech to cleantech. Vancouver has a vibrant entrepreneurial climate strongly focused on solving real problems and on scalability.
During a start-up pitch event at The University of British Columbia (UBC), it became clear how many market opportunities are linked to the enormous distances, and thus long travel times, in Canada. Whether it concerns medical services in remote communities or autonomous transport in airports, the challenges here are different from those in Europe. That calls for different solutions.
A packed lecture hall at UBC.
Next stop: Venture Lab, a tech-focused incubator. There we got a look at:
- autonomous mobility for airports
- smart waste solutions to help Canadians sort better
- AI applications based on vision and lidar technology that strongly resemble our own Rematics stack.
Tour at tech incubator Venture Lab.
It is always enlightening to see how other regions tackle technological challenges similar to ours, often with surprisingly creative solutions.

Cinema, but twelve times better
An absolute highlight was the introduction to MTT. This company was acquired about ten years ago by Barco, the Belgian technology company that designs and builds visualisation solutions.
In their private cinema, they showcased the latest HDR by Barco technology. This laser system delivers up to twelve times more optical power than traditional projections.
A look at the cinema of the future at Barco’s MTT.
Deeper colours, brighter whites, darker blacks. Even those who are colour-blind can see the difference. In addition, they are heavily experimenting with automation and 3D projections. Cinema truly gets a technological upgrade here.

Energy innovation: between hype and groundbreaking science
Canada’s energy world is buzzing with ideas. Sometimes visionary, sometimes enthusiastically optimistic.
At MintGreen, we saw how residual heat from crypto mining is used to heat water. An idea with potential, although serious questions remain.
At the other end of the spectrum is General Fusion, one of fifty start-ups worldwide working on commercial nuclear fusion. With investors such as Jeff Bezos and technology that combines liquid metal with perfectly synchronised pistons, they are working here on an energy source that could change the world.
Maybe. If it works. And that is precisely what makes innovation so exciting.
An installation that compresses plasma. For those who want to dive into the technical depth: Wikipedia does its very best to explain it clearly.
Electric seaplanes
Where else but in British Columbia would you find an airline working on fully electric seaplanes?

Harbour Air has an aircraft that flies entirely on electric power, perfect for the short routes between Vancouver and Victoria. The only obstacle is the government, which has yet to grant commercial certification.
Innovation sometimes moves faster than regulation, and rarely has that been as clear as here.
Vanadium flow batteries: the underdog of energy storage
Lithium-ion is king, but not unchallenged.
At Invinity, we discovered Vanadium Flow Batteries (VFBs). A liquid energy storage system that retains its capacity and can cycle quickly. Already deployed worldwide, including an installation in Aalst. A fine example of how niche technology can have global impact.
A look at the build-up of a Vanadium Flow Battery.
Vancouver as a tech ecosystem
Between company visits, there was time for Stanley Park, Grouse Mountain and the iconic skyline. But Vancouver turned out above all to be a city where:
- nature and high tech seamlessly blend
- quantum researchers, cleantech pioneers, AI start-ups and traditional industry find each other
- innovation is not a vague ambition, but a mindset
The skyline of Vancouver.
What do we take home?
- AI never stands alone and only works when it is woven into real societal challenges.
- Quantum, cleantech and energy innovation are strategic sectors here.
- Regulation can make or break innovation, also in Europe.
- Diversity and openness form a strong breeding ground for creativity and entrepreneurship.
In short: Vancouver is a place that inspires!

What others have also read


At ACA, Ship-IT Days are no-nonsense innovation days.

Whether we unlock our phones with facial recognition, shout voice commands to our smart devices from across the room or get served a list of movies we might like… machine learning has in many cases changed our lives for the better. However, as with many great technologies, it has its dark side as well. A major one is the massive, often unregulated, collection and processing of personal data. Sometimes it seems that for every positive story, there's a negative one about our privacy being at risk. It's clear that we are forced to give privacy the attention it deserves. Today I'd like to talk about how we can use machine learning applications without privacy concerns and without worrying that private information might become public.

Machine learning with edge devices

By placing the intelligence on edge devices on premise, we can ensure that certain information never leaves the sensor that captures it. An edge device is a piece of hardware used to process data close to its source. Instead of sending video or sound to a centralized processor, the data is handled on the machine itself. In other words, you avoid transferring all this data to an external application or a cloud-based service.

Edge devices are often used to reduce latency: instead of waiting for the data to travel across a network, you get an immediate result. Another reason to employ an edge device is to reduce bandwidth costs, for instance for devices on a mobile network that might not operate well in rural areas. Self-driving cars, for example, take full advantage of both these reasons: sending each video capture to a central server would be too time-consuming, and the total latency would interfere with the quick reactions we expect from an autonomous vehicle. Even though these are important aspects to consider, the focus of this blog post is privacy.
With the General Data Protection Regulation (GDPR), put into effect by the European Parliament in 2018, people have become more aware of how their personal information is used. Companies have to ask for consent to store and process this information. Moreover, violations of this regulation, for instance by not taking adequate security measures to protect personal data, can result in large fines. This is where edge devices excel. They can immediately process an image or a sound clip without the need for external storage or processing. Since they don't store the raw data, this information is volatile. For instance, an edge device could use camera images to count the number of people in a room. If the camera image is processed on the device itself and only the size of the crowd is forwarded, everybody's privacy remains guaranteed.

Prototyping with Edge TPU

Coral, a sub-brand of Google, is a platform that offers software and hardware tools for machine learning. One of the hardware components they offer is the Coral Dev Board, announced as "Google's answer to Raspberry Pi". The board runs a Linux distribution based on Debian and has everything on board to prototype machine learning products. Central to the board is a Tensor Processing Unit (TPU), created to run TensorFlow (Lite) operations in a power-efficient way. You can read about TensorFlow and how it enables fast machine learning in one of our previous blog posts.

If you look closely at a machine learning process, you can identify two stages. The first stage is training a model from examples so that it can learn certain patterns. The second stage is applying the model's capabilities to new data. With the dev board above, the idea is that you train your model on cloud infrastructure. That makes sense, since this step usually requires a lot of computing power.
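The people-counting idea can be sketched in a few lines of Python. This is a minimal illustration: `detect_people` is a hypothetical stand-in for a real on-device model, and only the aggregate count ever leaves the device.

```python
# Minimal sketch of privacy-friendly edge processing: the raw frame is
# analysed on the device and only an aggregate number is forwarded.
# `detect_people` is a hypothetical stand-in for a real on-device model
# (e.g. a TensorFlow Lite detector running on the Edge TPU).

def detect_people(frame):
    """Stand-in detector: returns one entry per detected person."""
    return [obj for obj in frame if obj["label"] == "person"]

def process_frame(frame):
    """Runs entirely on the edge device; the frame is never sent out."""
    return {"people_count": len(detect_people(frame))}

# A fake 'frame', represented here as a list of detected objects.
frame = [
    {"label": "person"},
    {"label": "chair"},
    {"label": "person"},
]

payload = process_frame(frame)  # only this aggregate leaves the device
print(payload)
```

The raw `frame` is discarded as soon as the function returns; the forwarded payload contains nothing that could identify anyone.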
Once all the elements of your model have been learned, they can be downloaded to the device using a dedicated compiler. The result is a little machine that can run a powerful artificial intelligence algorithm while disconnected from the cloud.

Keeping data local with Federated Learning

The process above might make you wonder which data is used to train the machine learning model. There are a lot of publicly available datasets you can use for this step, and in general these datasets are stored on a central server. To avoid this, you can use a technique called Federated Learning. Instead of having the central server train the entire model, several nodes or edge devices do this individually. Each node sends updates on the parameters it has learned, either to a central server (Single Party) or to each other in a peer-to-peer setup (Multi Party). All of these changes are then combined into one global model. The biggest benefit of this setup is that the recorded (sensitive) data never leaves the local node. This has been used, for example, in Apple's QuickType keyboard to predict emojis from the usage of a large number of users. Earlier this year, Google released TensorFlow Federated to create applications that learn from decentralized data.

Takeaway

At ACA we highly value privacy, and so do our customers. Keeping your personal data and sensitive information private is (y)our priority. With techniques like federated learning, we can help you unleash your AI potential without compromising on data security. Curious how exactly that would work in your organization? Send us an email through our contact form and we'll soon be in touch.
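The federated-averaging step described above can be sketched in plain Python. This is a deliberately simplified single-party setup with a toy local update rule, not TensorFlow Federated itself:

```python
# Simplified federated learning (Single Party): each node trains on its
# own private data and shares only parameter updates; the server then
# averages those updates into one global model. Raw data stays local.

def local_update(global_weights, private_data, lr=0.1):
    """Toy local training step: nudge each weight toward the mean of
    the node's private data (a stand-in for real gradient descent)."""
    target = sum(private_data) / len(private_data)
    return [w + lr * (target - w) for w in global_weights]

def federated_average(updates):
    """Server-side aggregation: element-wise mean of the node updates."""
    return [sum(ws) / len(updates) for ws in zip(*updates)]

# Three nodes, each holding data that never leaves the node.
node_data = [[1.0, 3.0], [2.0, 4.0], [0.0, 2.0]]
global_weights = [0.0, 0.0]

for _ in range(5):  # a few federated rounds
    updates = [local_update(global_weights, data) for data in node_data]
    global_weights = federated_average(updates)

print(global_weights)
```

Only the `updates` lists travel over the network; the `node_data` stays on each node, which is the whole point of the technique.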

The world of chatbots and Large Language Models (LLMs) has recently undergone a spectacular evolution. ChatGPT, developed by OpenAI, is one of the most notable examples: it reached over 1,000,000 users in just five days. This rise underlines the growing interest in conversational AI and the unprecedented possibilities that LLMs offer.

LLMs and ChatGPT: A Short Introduction

Large Language Models (LLMs) and chatbots have become indispensable concepts in the world of artificial intelligence. They represent the future of human-computer interaction: LLMs are powerful AI models that understand and generate natural language, while chatbots are programs that can simulate human conversations and perform tasks based on textual input. ChatGPT, one of the most notable chatbots, has gained immense popularity in a short period of time.

LangChain: the Bridge to LLM-Based Applications

LangChain is a framework that makes it possible to leverage the power of LLMs for developing and supporting applications. This open-source library, initiated by Harrison Chase, offers a generic way to address different LLMs and extend them with new data and functionalities. Currently available in Python and TypeScript/JavaScript, LangChain is designed to easily create connections between different LLMs and data environments.

LangChain Core Concepts

To fully understand LangChain, we need to explore some core concepts.

Chains: LangChain is built on the concept of a chain, a generic sequence of modular components. Chains can be assembled for specific use cases by selecting the right components.

LLMChain: the most common type of chain within LangChain is the LLMChain. It consists of a PromptTemplate, a Model (an LLM or a chat model) and an optional OutputParser. A PromptTemplate is a template used to generate a prompt for the LLM.
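The template mechanism can be illustrated with plain Python string formatting, which is essentially what such a template does. This is a hypothetical sketch, not LangChain's actual implementation:

```python
# Sketch of the PromptTemplate idea using plain string formatting.
# In LangChain this would be roughly:
#   PromptTemplate(input_variables=["topic"], template=TEMPLATE)

TEMPLATE = "Write an enthusiastic blog introduction about {topic}."

def format_prompt(template, **variables):
    """Fill in the template's placeholders, like PromptTemplate.format()."""
    return template.format(**variables)

prompt = format_prompt(TEMPLATE, topic="vanadium flow batteries")
print(prompt)
```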
Such a template allows the user to fill in a topic, after which the completed prompt is sent as input to the model. LangChain also offers ready-made PromptTemplates, such as Zero Shot, One Shot and Few Shot prompts.

Model and OutputParser: a Model is the implementation of an LLM itself. LangChain has several implementations for LLM models, including OpenAI, GPT4All and HuggingFace. It is also possible to add an OutputParser to process the output of the LLM model. For example, a ListOutputParser is available to convert the output of the LLM model into a list in the current programming language.

Data Connectivity in LangChain

To give the LLMChain access to specific data, such as internal data or customer information, LangChain uses several concepts:
- Document Loaders: these allow LangChain to retrieve data from various sources, such as CSV files and URLs.
- Text Splitter: this tool splits documents into smaller pieces to make them easier to process by LLM models, taking into account limitations such as token limits.
- Embeddings: LangChain offers several integrations for converting textual data into numerical data, making it easier to compare and process. The popular OpenAI Embeddings is an example of this.
- VectorStores: this is where the embedded textual data is stored as vectors. FAISS (from Meta) and ChromaDB are popular examples.
- Retrievers: these make the connection between the LLM model and the data in the VectorStores. They retrieve relevant data and expand the prompt with the necessary context, allowing context-aware questions and tasks.

Demo Application

To illustrate the power of LangChain, we can create a demo application that follows these steps:
1. Retrieve data based on a URL.
2. Split the data into manageable blocks.
3. Store the data in a vector database.
4. Grant the LLM access to the vector database.
5. Create a Streamlit application that gives users access to the LLM.

Step by step, that looks as follows.

1. Retrieve Data: fortunately, retrieving data from a website with LangChain does not require any manual work; a document loader fetches the pages for you.
2. Split Data: the result now contains a collection of pages from the website. These pages hold a lot of information, sometimes too much for the LLM to work with, as many LLMs operate with a limited number of tokens. Therefore, we need to split up the documents.
3. Store Data: now that the data has been broken down into smaller contextual fragments, we store it in a vector database to give the LLM efficient access to it. In this example we use Chroma.
4. Grant Access: now that the data is saved, we can build a chain in LangChain. A chain is simply a series of LLM executions that achieves the desired outcome. For this example we use the existing RetrievalQA chain that LangChain offers. This chain retrieves relevant contextual fragments from the newly built database, processes them together with the question in an LLM and delivers the desired answer.
5. Create a Streamlit Application: now that we have given the LLM access to the data, we need to provide a way for the user to consult it. To do this efficiently, we use Streamlit.

Agents and Tools

In addition to the standard chains, LangChain also offers the option to create Agents for more advanced applications. Agents have access to various tools that perform specific functionalities, anything from a "Google Search" tool to Wolfram Alpha, a tool for solving complex mathematical problems. This allows Agents to provide more advanced reasoning applications, deciding which tool to use to answer a question.

Alternatives to LangChain

Although LangChain is a powerful framework for building LLM-driven applications, there are other alternatives available.
For example, a popular tool is LlamaIndex (formerly known as GPT Index), which focuses on connecting LLMs with external data. LangChain, on the other hand, offers a more complete framework for building applications with LLMs, including tools and plugins.

Conclusion

LangChain is an exciting framework that opens the doors to a new world of conversational AI and application development with Large Language Models. With the ability to connect LLMs to various data sources and the flexibility to build complex applications, LangChain promises to become an essential tool for developers and businesses looking to take advantage of the power of LLMs. The future of conversational AI looks bright, and LangChain plays a crucial role in this evolution.
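The five demo steps can also be condensed into one self-contained sketch. Plain Python stands in for the LangChain pieces (a document loader, a text splitter, Chroma and the RetrievalQA chain, with the Streamlit layer reduced to a function), so the flow is visible without any installation:

```python
# Self-contained sketch of the five demo steps. Plain Python replaces
# the real LangChain components; names and logic are illustrative only.

def load_page(url):
    """Step 1 (retrieve): stand-in for a LangChain document loader.
    A real loader would fetch the URL; we return canned text."""
    return ("LangChain connects LLMs to external data. "
            "Vector stores hold embedded text for retrieval.")

def split_text(text, chunk_size=60):
    """Step 2 (split): break the document into token-friendly chunks."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def build_store(chunks):
    """Step 3 (store): a toy 'vector database' keyed by word sets."""
    return [(set(chunk.lower().split()), chunk) for chunk in chunks]

def retrieve(store, question, k=1):
    """Step 4 (grant access): fetch the chunks most relevant to the
    question; word overlap stands in for embedding similarity."""
    words = set(question.lower().split())
    ranked = sorted(store, key=lambda entry: len(entry[0] & words), reverse=True)
    return [chunk for _, chunk in ranked[:k]]

def answer(question, store):
    """Step 5 (app layer): the demo would show this in Streamlit."""
    context = " ".join(retrieve(store, question))
    return f"Context: {context}\nQuestion: {question}"

store = build_store(split_text(load_page("https://example.com")))
print(answer("what do vector stores hold", store))
```

In the real demo, each stand-in is replaced by the corresponding LangChain component and an actual LLM call; the shape of the pipeline stays the same.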
Stay relevant in an ever faster changing world.
Dive into our approach to innovation!



