To help Liantis outsource their infrastructure management, we offered our ACA Atlassian Managed Services as a solution. With this solution, we centralize the licenses, hosting, monitoring, infrastructure and maintenance of the applications for a fixed yearly fee.
To let the applications scale up or down with rising or declining load, we migrated Liantis’ on-premise Jira application to the AWS Cloud.
Two Kubernetes clusters for maximal flexibility
As a solution, we proposed two separate Kubernetes clusters, each with its own AWS account. Splitting the solution into two clusters allows Liantis to keep the management of access rights and costs separate. This approach also simplifies management and maintenance: whenever a particular Kubernetes cluster is no longer needed, it can be terminated immediately without impacting the other. Likewise, if one cluster needs to be taken offline for (scheduled) maintenance, the other can stay online so service is never interrupted.
Reliable infrastructure-as-code
The on-premise infrastructure at Liantis was set up and managed manually. This traditional approach can cause inconsistencies between environments and carries a greater risk of human error.
With our infrastructure-as-code approach, Liantis’ entire infrastructure is defined in readable code. This way, we can use the same code to quickly set up different environments, without the risk that these environments drift apart.

Whenever new infrastructure components or changes are needed, the code can be adjusted quickly and efficiently. The new code is first applied to an acceptance environment; once validated, it’s promoted to the production environment. This automated process drastically reduces the chance of human error.
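As an illustration, an infrastructure definition along these lines could look as follows. This is a minimal, hypothetical sketch assuming Terraform as the infrastructure-as-code tool (the article doesn’t name one); the module path, variable and resource names are invented for illustration.

```hcl
# Hypothetical sketch: the same module is reused for every environment,
# so acceptance and production can only differ in their input variables.
variable "environment" {
  description = "Target environment, e.g. acceptance or production"
  type        = string
}

module "jira_cluster" {
  source     = "./modules/eks-cluster" # invented module path
  name       = "jira-${var.environment}"
  node_count = var.environment == "production" ? 3 : 1
}
```

Because both environments are created from the same code, validating a change in the acceptance environment gives a strong guarantee that it will behave the same way in production.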
Goal of 99.9% availability
The use of containers in a Kubernetes cluster allows us and Liantis to strive towards 99.9% availability. Whenever Jira encounters problems, suffers performance hiccups or even stops working altogether, the current container is terminated and a new one automatically spins up. Users don’t even notice this and can simply keep working without interruption. This ‘self-healing’ behaviour of Kubernetes also lets the application recover by itself outside office hours. Being unable to continue working or losing data because of crashes is a thing of the past!
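This self-healing behaviour is standard Kubernetes functionality: a liveness probe on the Deployment tells Kubernetes to kill and replace the container whenever the application stops responding. A minimal sketch (the names, delays and resource settings are hypothetical, not Liantis’ actual configuration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jira              # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jira
  template:
    metadata:
      labels:
        app: jira
    spec:
      containers:
        - name: jira
          image: atlassian/jira-software:latest  # tag chosen for illustration
          ports:
            - containerPort: 8080
          # If this check fails repeatedly, Kubernetes terminates the
          # container and automatically starts a new one in its place.
          livenessProbe:
            httpGet:
              path: /status      # Jira's built-in status endpoint
              port: 8080
            initialDelaySeconds: 300
            periodSeconds: 30
```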
