OpenMRS 3 in the cloud with Kubernetes and helm charts!

I’m happy to announce that we now have a fully functional helm chart for running OpenMRS 3 on Kubernetes!

Here is the POC on AWS: http://k8s-default-gateway-a6c6d96f67-1005435730.us-east-2.elb.amazonaws.com/ (admin:Admin123)

It’s running on EKS, AWS’s managed Kubernetes service. The setup is fully automated so that anyone can run it in their own AWS account with Terraform. Please refer to GitHub - openmrs/openmrs-contrib-cluster: Contains terraform and helm charts to deploy OpenMRS distro in a cluster for details.
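For anyone new to Terraform, the workflow is roughly as follows. This is only a sketch: the directory layout and prerequisites are assumptions, so consult the repository README for the authoritative steps.

```shell
# Clone the repo and provision the EKS cluster with Terraform.
# NOTE: the exact directory and variables are assumptions —
# see the openmrs-contrib-cluster README for actual instructions.
git clone https://github.com/openmrs/openmrs-contrib-cluster.git
cd openmrs-contrib-cluster

# Requires AWS credentials configured locally (e.g. via `aws configure`).
terraform init    # download providers and initialize state
terraform plan    # preview the resources that will be created
terraform apply   # create the EKS cluster and supporting resources
```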

It uses EC2 instances as Kubernetes nodes and MariaDB with one read replica for storage, but it also supports RDS or a MariaDB Galera cluster with multi-master replication. An ALB (Application Load Balancer) sits in front of the O3 gateway, with health checks that automatically take unresponsive instances down and bring them back up.

I also released the OpenMRS helm chart (v0.1.5), which includes a number of features:

  1. Run MariaDB with one read replica
  2. Run a MariaDB Galera cluster with 3 masters
  3. Use any other DB backend (e.g. hosted RDS)
  4. Health checks for all services
  5. Customize the O3 version or point to your own distribution, as long as it’s published as Docker images
  6. Use a hosted load balancer like ALB or point to your own load balancer
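As a sketch, installing the chart looks something like the following. The chart repository URL, release name, and values file contents below are assumptions for illustration; the actual ones are documented in openmrs-contrib-cluster.

```shell
# Hypothetical example — the chart repo URL and value keys are assumptions;
# check the openmrs-contrib-cluster README for the real ones.
helm repo add openmrs https://openmrs.github.io/openmrs-contrib-cluster
helm repo update

# Install chart v0.1.5 into its own namespace, overriding defaults
# with a custom values file (DB backend, O3 image tags, load balancer, etc.).
helm install openmrs openmrs/openmrs \
  --version 0.1.5 \
  --namespace openmrs --create-namespace \
  -f my-values.yaml
```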

The release concludes the initial proof-of-concept work on running O3 on Kubernetes.

In the coming days I’ll be working on a demo of how to run Kubernetes with Rancher and deploy O3 with helm charts on premises.

I am also hoping to add Prometheus and Grafana for aggregated metrics and logging.

There’s further work needed to streamline upgrades, add alerts on slow-running DB queries, auto-scale the DB cluster, have automated backups, … and significant work on openmrs-core itself to run multiple replicas for high availability and scalability. We are looking for implementers and developers to support us on this path. Please reach out and contribute!



Exciting.

I wonder, Rafal, would you mind taking a step back and sharing a little about what this can start to enable for implementers?

Maybe even update the topic title with some of that to give newbies to Kubernetes some context? :slight_smile:


@paul, all right!

Kubernetes is a container orchestration platform that can be set up on any number of machines connected over a network. You deploy your applications to Kubernetes, and it automatically allocates and distributes them across the connected machines for scale and high availability. From the perspective of an OpenMRS implementer it provides:

  1. Ease of setup for clustered services such as a MariaDB Galera cluster, which can run the DB engine on any number of available machines in the cluster with replicas for high availability and performance. You are no longer limited to a single beefy machine running your MariaDB; instead you can easily have 3+ of them replicating data between one another (if one fails, the others continue to run without disruption) and providing extra compute and storage capacity.
  2. A centralized place to manage all your services running on different machines. You can do upgrades from one place and get insights into metrics such as CPU and RAM usage. You also get centralized logging, where you can inspect the logs of all your services in one place, set notifications on certain events, etc.
  3. Helm charts in particular allow us to package all needed services together and have them all set up with a single command, similar to how we use docker-compose. Any service that we add to the setup can be preconfigured and ready to use by anyone deploying to Kubernetes.
  4. We can continue to extract services out of the OpenMRS instance, such as the search index, cache storage, and file storage, to improve performance, distribute load, and increase reliability, without greatly increasing the complexity of the setup, as it all comes as a preconfigured bundle.
  5. We can provide common solutions for logging, backup, maintenance pages, etc. as part of helm charts that implementers can use out of the box.
  6. Implementers can build on-premises solutions that provide high availability and scalability matching the architectures offered by big cloud vendors such as AWS, Azure, etc.
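To make points 1 and 2 above concrete, here is a minimal Kubernetes manifest showing how a service declares multiple replicas and a health check; Kubernetes then keeps that many copies running across the cluster’s machines and routes traffic only to healthy ones. The names, image, and probe path are illustrative assumptions, not the actual OpenMRS chart templates.

```yaml
# Illustrative sketch only — not taken from the OpenMRS helm chart.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: o3-gateway            # hypothetical name
spec:
  replicas: 3                 # keep 3 copies running, spread across nodes
  selector:
    matchLabels:
      app: o3-gateway
  template:
    metadata:
      labels:
        app: o3-gateway
    spec:
      containers:
        - name: gateway
          image: openmrs/openmrs-reference-application-3-gateway  # assumed image
          ports:
            - containerPort: 80
          readinessProbe:     # health check: traffic only reaches ready pods
            httpGet:
              path: /         # probe path is an assumption
              port: 80
            periodSeconds: 10
```

If a node dies or a pod stops responding, Kubernetes reschedules replacement pods on healthy nodes automatically, which is what enables the zero-disruption behavior described above.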

Happy to clarify anything and continue the discussion.


Thanks @raff for the milestone reached so far. I know how complicated Kubernetes can be, but the results after implementing it outweigh the setup struggle.

I’m just curious, and I know it should be dictated by implementations: is AWS the de facto standard? What if someone is already using another provider like Azure or GCP, or is on-prem?

Are they on the roadmap?

I’d be glad to write some charts and terraform scripts for other providers, since you have already laid out the framework. Is there still a need for this?

There is some early development progress for GCP at GitHub - tendomart/openmrs-cloud-cluster: Generic kubernetes configuration for azure, GCP and on-prem openmrs