It uses EC2 instances as K8s nodes and MariaDB with one read replica for storage, but it also supports using RDS or a MariaDB Galera cluster with multi-master replication. It has an ALB (Application Load Balancer) set up in front of the O3 gateway, with health checks that automatically take down and bring back up instances that stop responding.
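For anyone wanting to replicate the ALB setup on their own cluster, here is a minimal sketch of a Kubernetes Ingress using the AWS Load Balancer Controller. The service name (openmrs-gateway) and the health check path are assumptions, so substitute your own:

```yaml
# Minimal sketch: an ALB in front of the O3 gateway via the AWS Load Balancer
# Controller. The service name and health check path below are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: openmrs
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    # ALB health checks; point this at whatever endpoint your gateway exposes
    alb.ingress.kubernetes.io/healthcheck-path: /
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: openmrs-gateway   # hypothetical service name
                port:
                  number: 80
```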
I also released the OpenMRS Helm chart (v0.1.5), which includes a number of features (a values sketch follows the list):
- Run MariaDB with one read replica
- Run a MariaDB Galera cluster with 3 masters
- Use any other DB backend (e.g. hosted RDS)
- Health checks for all services
- Customize the O3 version or point to your own distribution, as long as it’s published as Docker images
- Use a hosted load balancer like ALB or point to your own load balancer
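To make these options concrete, here is a hypothetical values.yaml sketch. The key names below are illustrative, not the chart’s actual schema, so check the chart’s README for the real ones:

```yaml
# Hypothetical values.yaml -- key names are illustrative, not the actual schema.
db:
  mode: replication              # one primary + one read replica
  # mode: galera                 # or: a 3-master Galera cluster
  # mode: external               # or: bring your own DB, e.g. hosted RDS
  # externalHost: mydb.example.rds.amazonaws.com
image:
  repository: openmrs/o3-distro  # point at your own distribution's Docker image
  tag: "3.x"
ingress:
  enabled: true                  # front the O3 gateway with a load balancer
  className: alb                 # or your own, e.g. nginx or traefik
```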
This release concludes the initial proof-of-concept work on running O3 on Kubernetes.
In the coming days I’ll be working on a demo of how to run Kubernetes with Rancher and deploy O3 with Helm charts on-premise.
I am also hoping to add Prometheus and Grafana for aggregated metrics and logging.
There’s further work needed to streamline upgrades, add alerts on slow-running DB queries, auto-scale the DB cluster, have automated backups, … and significant work on openmrs-core itself to run multiple replicas for high availability and scalability. We are looking for implementers and developers to support us on this path. Please reach out and contribute!
Kubernetes is a container cluster manager that can be set up on any number of machines connected via a network. You can deploy your applications to Kubernetes and have them automatically allocated and distributed across the connected machines for scale and high availability. From the perspective of an OpenMRS implementer, it provides:
- Ease of setup for clustered services such as a MariaDB Galera cluster, which can run the DB engine on any number of available machines in the cluster with replicas for high availability and performance (see the sketch after this list). You are no longer limited to a single beefy machine running your MariaDB; instead, you can easily have 3+ of them replicating data between one another (if one fails, the others continue to run without disruption) and providing extra compute and storage capacity.
- A centralized place to manage all your services running on different machines. You can do upgrades from one place and get insights into metrics such as CPU and RAM usage. You also get centralized logging, where you can inspect logs of all your services in one place, set notifications on certain events, etc.
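As a sketch of the first point, community charts such as bitnami/mariadb-galera already make a multi-master cluster a few lines of configuration. The key names below follow that chart’s conventions but may vary, so treat them as illustrative:

```yaml
# Illustrative values for a community Galera chart (e.g. bitnami/mariadb-galera);
# key names vary by chart, so check the chart's own documentation.
replicaCount: 3               # three Galera masters replicating to one another
podAntiAffinityPreset: hard   # schedule each node on a different machine
persistence:
  size: 50Gi
# install with, for example:
#   helm install db bitnami/mariadb-galera -f galera-values.yaml
```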
Helm charts in particular allow us to package all the needed services together and provide a way to set them all up with a single command, similar to how we use docker-compose. Any service that we add to the setup can be preconfigured and ready to use by anyone deploying to Kubernetes.
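As a sketch of how that bundling works, a Helm chart declares its services as dependencies in Chart.yaml, and a single helm install brings the whole stack up. The dependency list below is illustrative, not the actual contents of the OpenMRS chart:

```yaml
# Illustrative Chart.yaml -- the dependency list is a sketch, not the actual chart.
# A single command then brings the whole stack up, much like docker-compose up:
#   helm install openmrs ./openmrs -f values.yaml
apiVersion: v2
name: openmrs
version: 0.1.5
dependencies:
  - name: mariadb
    repository: https://charts.bitnami.com/bitnami
    version: ">=11.0.0"
    condition: mariadb.enabled     # toggled from values.yaml
  - name: mariadb-galera
    repository: https://charts.bitnami.com/bitnami
    version: ">=7.0.0"
    condition: galera.enabled
```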
We can continue to extract services out of the OpenMRS instance, such as the search index, cache storage, and file storage, to improve performance, distribute load, and increase reliability without greatly increasing the complexity of the setup, since it all comes as a preconfigured bundle.
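If such services were shipped as preconfigured subcharts, enabling one could be a one-line toggle in values.yaml. The keys below are purely hypothetical:

```yaml
# Purely hypothetical values toggles, assuming each service ships as a subchart.
elasticsearch:
  enabled: true    # dedicated search index
redis:
  enabled: true    # shared cache storage
minio:
  enabled: true    # S3-compatible file storage
```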
We can provide common solutions for logging, backups, maintenance pages, etc. as part of the Helm charts, which implementers can use out of the box.
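For example, automated DB backups could ship as a plain Kubernetes CronJob. This is a minimal sketch, and the service, secret, and PVC names in it are assumptions:

```yaml
# Minimal sketch of a nightly DB backup; service, secret, and PVC names are assumptions.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mariadb-backup
spec:
  schedule: "0 2 * * *"        # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: dump
              image: mariadb:10.11
              command: ["/bin/sh", "-c"]
              args:
                - mysqldump -h mariadb -u root -p"$MARIADB_ROOT_PASSWORD"
                  openmrs > /backup/openmrs-$(date +%F).sql
              env:
                - name: MARIADB_ROOT_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: mariadb              # assumed secret name
                      key: mariadb-root-password
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: openmrs-backups       # assumed PVC
```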
Implementers can build on-premise solutions with high availability and scalability that match the architectures offered by big cloud vendors such as AWS, Azure, etc.
Happy to clarify anything and continue the discussion.
Thanks @raff for the milestone reached so far. I know how complicated Kubernetes can be, but the results after implementing it far outweigh the setup struggle.
I’m just curious about this, and I know it should be dictated by implementations: is AWS the de facto standard? What if someone is already using another provider like Azure or GCP, or running on-prem?
Are they on the roadmap?
I’d be glad to write some charts and Terraform scripts for other providers, since you have already laid out the framework. Is there still a need for this?