Playing around with OpenMRS in the cloud

PS- I see that I accidentally clicked Post before I was ready. I will post an update with code links.

Edited the first post to add a link to the code: https://github.com/djazayeri/openmrs-contrib-gcptest/tree/master/kubernetes


Google Cloud sounds impressive.

Would supporting horizontal scaling require a central machine in our pool doing ‘Sync module’-style work, or would we just build support for multiple external database installations, guided by an algorithm that figures out which database to look up?

6 posts were split to a new topic: Can I run multiple copies of openmrs in the same tomcat instance?

@k.joseph, horizontally scaling OpenMRS primarily means:

  1. scaling the MySQL database using replication
  2. scaling the application server by running multiple copies of Tomcat/Jetty behind a load balancer

Horizontally scaling MySQL can’t really be done transparently to the application.

The easiest approach is to create read replicas (e.g. a read-only copy of the MySQL DB that is automatically kept in sync via MySQL master-slave replication) and find read-only workloads that we can offload from the master. Some implementations already do this for reporting, and I’m suggesting we could also do it for things like patient search and concept search. (Though perhaps there’s less value in this now that we’ve started using Lucene.)
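To make that concrete, here’s a minimal sketch of one way to route read-only workloads, assuming Spring-managed DataSources; the class name and lookup keys are hypothetical, not existing OpenMRS code:

```java
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Hypothetical sketch: picks the "replica" DataSource for workloads that
// have been explicitly marked read-only, and the "master" otherwise.
// The key-to-DataSource mapping is configured via setTargetDataSources().
public class ReplicaAwareDataSource extends AbstractRoutingDataSource {

    private static final ThreadLocal<Boolean> READ_ONLY =
            ThreadLocal.withInitial(() -> Boolean.FALSE);

    public static void markReadOnly(boolean readOnly) {
        READ_ONLY.set(readOnly);
    }

    @Override
    protected Object determineCurrentLookupKey() {
        return READ_ONLY.get() ? "replica" : "master";
    }
}
```

A patient-search service would then wrap its query in `markReadOnly(true)` / `finally markReadOnly(false)`, accepting that results may lag the master by the replication delay.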

One can also do multi-master replication, or “sharding” (splitting data across multiple DB servers based on something like patient location), but I’m not sure how well MySQL supports these, and they aren’t “easy”, so I would definitely try other approaches first.

In the read-replica approach you still do have a master server. The sharding approach could mean multiple DBs, with no master, but the application needs to know which one to go to.
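Just to illustrate what “the application needs to know which one to go to” means, a hypothetical sketch (which glosses over the hard parts, like rebalancing and cross-shard queries):

```java
import java.util.List;
import javax.sql.DataSource;

// Hypothetical sharding sketch: every caller must derive the shard from a
// key (e.g. a location id), because no single master sees all the data.
public class ShardRouter {

    private final List<DataSource> shards;

    public ShardRouter(List<DataSource> shards) {
        this.shards = shards;
    }

    public DataSource shardFor(int locationId) {
        // Stable modulo mapping; adding or removing a shard would require
        // re-distributing existing rows, which is part of why this isn't "easy".
        return shards.get(Math.floorMod(locationId, shards.size()));
    }
}
```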

Scaling the application server is something we should be able to do. It would require some work, but I think it’s tedious/straightforward kind of work. The basic idea is that a stateless service is trivial to scale, but we do have state in our Java application. So the task here would be to determine where we do have state (besides what’s in the database) and either (a) get rid of it, (b) move it to the DB or some other external service, or (c) ensure Tomcat can automatically handle it.

In this case there’s no “master” application server, though you do need a load balancer.
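As a hypothetical example of the kind of state we’d need to hunt down (illustrative only, not actual OpenMRS code): any static in-JVM structure breaks silently once there are two Tomcats, because each node only sees its own copy.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical example of option-(a)/(b) work: a static in-memory counter
// is correct on one node, but behind a load balancer node A's increments
// are invisible to node B. It would need to move to the DB or an external
// store (or be dropped entirely) before scaling out.
public class ActiveSessionCounter {

    private static final Map<String, Integer> COUNTS = new ConcurrentHashMap<>();

    public static int increment(String clinic) {
        return COUNTS.merge(clinic, 1, Integer::sum);
    }
}
```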


Thanks @darius, this is super interesting.

That’s JVM clustering, right? Or is JVM clustering at least a prerequisite to running Tomcat behind a load balancer?

Assuming that we get to a point where the former is possible, what are the limitations or blockers that would prevent using a cluster of Tomcat instances with something like Google Cloud SQL?

Tomcat Clustering has a concept of session replication across nodes. Using this I believe it may be possible to run a cluster without changing anything in OpenMRS. Obviously this has to be tested.

I believe there are multiple possible approaches. That linked wiki page is from 2010, but a lot of the considerations are the same. I.e. it’s not okay to store state on the filesystem attached to your webserver. It either needs to go in the database, or be pulled out into a new microservice. (Or maybe it could be made to work with shared cloud storage, if we don’t allow modifying existing files.)

Today it may be possible to run a load balancer with sticky sessions (like nginx) in front of multiple Tomcats that don’t necessarily know they’re being load balanced.

My understanding is that Tomcat can automatically replicate sessions across nodes, but this has some prerequisites (e.g. all session state must be serializable). I also remember @wyclif looking into this many years ago and finding examples of things we would need to change.
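For example, every attribute placed in the HttpSession has to survive serialization for Tomcat’s replication to work, so a class like this hypothetical one (and everything it references) must implement Serializable:

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical session attribute: Tomcat's session replication serializes
// the session to ship it to other nodes, so this class (and every field it
// holds) must implement Serializable or replication fails at runtime.
public class RecentPatientList implements Serializable {

    private static final long serialVersionUID = 1L;

    private final List<Integer> patientIds = new ArrayList<>();

    public void remember(Integer patientId) {
        patientIds.add(patientId);
    }

    public List<Integer> getPatientIds() {
        return Collections.unmodifiableList(patientIds);
    }
}
```

Anything stored via `session.setAttribute(...)` that doesn’t meet this requirement is one of the things we’d need to change.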

Well, that’s right in my playground :smiley:

I don’t do GCP or Kubernetes; I’m on AWS and Docker Swarm, but it’s not so different.


Yeah, but: this is a hard problem. You know all of this, but I’m adding it here for posterity.

There are two reasons why you add multiple nodes to an application: high availability and performance. The majority of applications that are SQL-intensive are limited by the SQL server, and increasing the number of nodes won’t help (and there’s a chance it will make things worse due to the extra constraints placed on the SQL server). I’d be surprised if OpenMRS’s performance bottlenecks are in Tomcat right now rather than in the MySQL server. I’d think that adding more nodes to OpenMRS would really only be relevant for high availability.


Keeping state outside the container/node is a good thing ™, and helps heaps. That said, sometimes life is hard and we have to make tradeoffs.

In Docker Swarm (and I’m fairly sure in Kubernetes), you are able to add plugins to mount volumes at ‘persistent’ locations. I have a plugin that allows Docker volumes on EFS (the NFS from AWS). It’s totally against the official rules ™, but well. The application keeps writing to the filesystem, but that’s actually stored in an external location (outside the pod/cluster). Beware of race conditions here.


The Atlassian suite for Data Center opted for a model where there’s a ‘shared’ folder (which you mount on every node via NAS, NFS, SMB or something similar), and each node maintains its own ‘cache’ folder (the Lucene cache). Apparently there are some problems with Lucene over NFS, but I’ve heard anecdotal evidence that EFS is somehow acceptable.

While this isn’t a huge implementation change (only two different folders, one shared and one non-shared), and single-node deployment remains the same (only multi-node deployments have to mount the shared folder), having multiple active instances of the same application has a huge tail of side effects:

  • You literally have multiple applications changing the same database, the same tables, sometimes the same row. Race conditions are a real thing, and Hibernate doesn’t cater for them easily; your Hibernate cache is going to fail you all the time (see the optimistic-locking sketch after this list).
  • Scheduled tasks are quite tricky if you have multiple identical nodes (see the lock sketch after this list).
  • Sticky sessions don’t really take you very far. Load balancers and applications get restarted eventually, due to security patches or whatever, so it’s better to have sessions replicated, stored in the DB, or otherwise externalized. Sticky sessions can improve the user experience (thanks to things cached in memory), but you shouldn’t rely on them.
  • You need to know when to invalidate your cache on each node. Mistakes can happen, and different nodes will give you different results.
  • Do not overlook the number of race-condition problems you’ll find when running multiple nodes. Make testing for them part of your CI builds.
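Two quick sketches to make those bullets concrete; both are hypothetical code, not from OpenMRS. Hibernate’s optimistic locking is one way to at least detect the concurrent-write races:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

// With a @Version column, Hibernate compares the version on every UPDATE
// and rejects a write made from a stale copy with an optimistic-lock
// failure, instead of silently losing one of the two nodes' changes.
@Entity
public class PatientFlag {

    @Id
    private Integer id;

    private String flag;

    @Version
    private Integer version;
}
```

And for the scheduled-tasks bullet, one approach is a database-level lock so only one node runs a given task even though every node fires the same schedule; this sketch uses MySQL’s GET_LOCK/RELEASE_LOCK functions:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical runner: GET_LOCK with a 0-second timeout is non-blocking,
// so the node that wins the named lock runs the task and every other node
// skips this tick instead of running a duplicate.
public class ClusterTaskRunner {

    public static boolean runExclusively(Connection conn, String taskName, Runnable task)
            throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement("SELECT GET_LOCK(?, 0)")) {
            ps.setString(1, taskName);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                if (rs.getInt(1) != 1) {
                    return false; // another node holds the lock
                }
            }
        }
        try {
            task.run();
            return true;
        } finally {
            // Must release on the same connection that acquired the lock.
            try (PreparedStatement ps = conn.prepareStatement("SELECT RELEASE_LOCK(?)")) {
                ps.setString(1, taskName);
                ps.executeQuery().close();
            }
        }
    }
}
```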

Having more usability than AWS is not hard :joy:

Just to keep the same theme: it’s not obvious to everyone, but having a cluster of MySQL or PostgreSQL doesn’t improve performance; if anything, it’s actually worse for some operations. Clusters are there for high availability, and that’s all.

(But I use RDS for everything and I’m not going back to the DBA era. I don’t know how to DBA.) :money_with_wings: :money_with_wings:

Every time I suggested read replicas, I was told by developers that they were way too expensive for what they’d be used for; that they could just keep a cache in ElastiCache/Redis/Lucene/Solr, which would be more than enough and cheaper. I was also told that it’s not very straightforward to do those things in Hibernate. So I never got anyone on board with read replicas at all.


Hi Darius, were you able to deploy successfully on Kubernetes? I am also trying something similar. Can we bundle MySQL in the same Kubernetes pod?

Thanks Prapa

@prapakaran, what exactly are you asking? I did what I described in the first post, and I linked to all the code I wrote. I was just playing around with this, so I did not go any further.

If you want to run your own MySQL, yes, you can definitely bundle it in the same Kubernetes pod. (Though it’s still the case that Kubernetes doesn’t really give you any advantage when running OpenMRS, because OpenMRS can’t dynamically scale.)

@gschmidt could we pick some things from here into this topic: OpenMRS 3 R&D - Discussions?


Hi Darius, thanks for the reply. I’m planning to take advantage of ‘infrastructure as code’: for example, one OpenMRS instance running on Kubernetes and, for each new doctor, replicating a new instance on another Kubernetes instance.

I’m trying to access the URL from outside the pods using openmrs-service.yaml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: openmrs
  labels:
    app: openmrs
spec:
  selector:
    app: openmrs
    tier: web
  ports:
    - port: 8080
      targetPort: 8080
  type: LoadBalancer
  loadBalancerIP: "35.193.46.99"
```

But I’m unable to access the URL from outside (a local browser). How do we check the logs?

Thanks Prapa

@gschmidt Please take the item below for discussion. May I know when the meeting is happening?

In practice there’s no point in orchestrating a multi-node OpenMRS deployment with Kubernetes, because we don’t support horizontal scaling of either the application server or the database. We should really start to work on this, though! (Driven by actual usage patterns, of course.)

A few ways to approach scaling:

  • The “correct” solution is to refactor our whole web application to remove/externalize local disk storage and local state, so that we can just run multiple Tomcats behind a load balancer with sticky sessions.
  • Put a reverse-proxy cache in front of our REST web services (useful for distros built on client-side code)
  • Use read replicas of the db for some things:
    • refactor Patient Search and Concept Search to be microservices backed by a read replica instead of the master DB.
    • have the reporting module automatically support using a read replica database
    • route all GET requests to the REST API so they’re served from a read replica (sketched below)

(Many of these start to run up against the fact that one of OpenMRS’s success factors has been that it can be installed on very simple hardware, and I’m starting to describe approaches that need docker-compose at a minimum.)
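To illustrate that last sub-bullet: if we had replica-aware routing like the sketch earlier in the thread, flagging GETs could be a tiny servlet filter. This is hypothetical code (it assumes something like the ReplicaAwareDataSource sketch above, and ignores GETs with side effects and replication lag):

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

// Hypothetical filter: flags GET/HEAD requests as read-only so a routing
// DataSource can serve their queries from a replica instead of the master.
public class ReadReplicaRoutingFilter implements Filter {

    @Override
    public void init(FilterConfig config) {
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String method = ((HttpServletRequest) req).getMethod();
        ReplicaAwareDataSource.markReadOnly("GET".equals(method) || "HEAD".equals(method));
        try {
            chain.doFilter(req, res);
        } finally {
            // Reset so pooled worker threads don't leak the flag to later requests.
            ReplicaAwareDataSource.markReadOnly(false);
        }
    }

    @Override
    public void destroy() {
    }
}
```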


@dkayiwa Is anyone working on these areas? May I know when the next OpenMRS developer meeting is?

Thanks Prapa

@prapakaran do you want to suggest a design topic?

Would you like to create a new story for containerization of OpenMRS? Then we can create sub-stories from it. @darius @isears

Thanks Prapa


If you have a use case for doing this, then please write up a story describing it.

(In other words, I do not personally have any need to run OpenMRS on kubernetes, or in the cloud. But I’m happy to advise anyone who has a real need, and wants to document it +/- create shared artifacts.)

For small hospitals: create individual instances, depending on doctor specialization. As a user, I want to be able to easily and quickly create a new instance that is portable across different cloud platforms.

Thanks Prapa

Is anyone still working on minikube for local development?