Using Docker for faster dev environments (learning from Bahmni)

Hi All,

The Bahmni team has been working on a way to set up a dev environment using Docker, such that you write code directly on your local machine, but the server components run in containers.

It’s especially valuable for Bahmni because (a) the full stack of OpenMRS + OpenELIS + Odoo is really tedious to set up, and (b) the front-end app is just HTML + JS + CSS, so the compilation process to see code changes reflected on the server is minimal. But the approach can be helpful for plain OpenMRS too – here are some of my thoughts from looking at what they have done.

For precise details, see their code here

  1. Dockerizing the database

If OpenMRS development is the primary thing you’re doing, installing MySQL and having an OpenMRS DB isn’t really a big deal. But there are also reasons for us to avoid requiring it:

  • a dev passing by might prefer not to install a DB just to test OpenMRS out (e.g. maybe they’ve already got PostgreSQL and CouchDB installed, and they can’t handle another)
  • you might want to be experimenting with MySQL vs MariaDB, different versions, etc
  • you might want an easy way to reset to a known database state
  • you might want to be able to load up random DBs that implementations send you, but keep them contained and disposable.

Anyway, I see that it requires a trivial Dockerfile to run MySQL and have it load a DB at startup.
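As a sketch of how minimal this can be (the file names and version here are illustrative, not Bahmni’s actual setup): the official mysql image executes any .sql files placed in /docker-entrypoint-initdb.d/ the first time it initializes its data directory, so loading a DB at startup is just a COPY.

```dockerfile
# Minimal MySQL image that loads an OpenMRS dump on first startup.
# "openmrs-dump.sql" is a placeholder for whatever dump you have locally.
FROM mysql:5.6

ENV MYSQL_ROOT_PASSWORD=openmrs \
    MYSQL_DATABASE=openmrs

# The official mysql image runs any *.sql files in this directory
# the first time the data directory is initialized.
COPY openmrs-dump.sql /docker-entrypoint-initdb.d/
```

Resetting to a known database state is then just removing the container and starting a fresh one from the same image.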

  2. Dockerizing the server

I find Jetty very convenient, but running against Tomcat is perhaps slightly more realistic. Or some people just prefer it. One specific thing they do in Bahmni is map a local folder (specifically, the distro/target/distro folder from the distro Maven project) to appear as the ~/.OpenMRS/modules folder, so that when Tomcat starts up, the omods you have in that local folder are loaded as modules.
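A rough sketch of that volume mapping (the image name and paths are illustrative; the modules directory inside the container depends on which user Tomcat runs as):

```shell
# Map the distro's built omods into the container as the OpenMRS modules
# folder, so Tomcat loads them at startup. Image name and paths are
# illustrative, not Bahmni's actual setup.
docker run -d -p 8080:8080 \
  -v "$(pwd)/distro/target/distro:/root/.OpenMRS/modules" \
  my-openmrs-tomcat-image
```

Because it’s a bind mount, rebuilding an omod locally and restarting the container is enough to pick up the change.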

  3. OpenMRS as a component of larger systems

Docker can make dev environment setup easier, but where I think it’s really going to be powerful is making it easier for people building systems that use OpenMRS as one component of a system (e.g. Bahmni, OpenSRP, OpenHIE, etc). If you can easily stand up a representative OpenMRS server in one or two containers, then it becomes much simpler to develop for a complex environment with multiple databases, event routers, etc.
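For example, a docker-compose.yml along these lines (image names and settings are illustrative) would stand up OpenMRS-on-Tomcat plus MySQL as one disposable unit, and further services could be added the same way:

```yaml
# Illustrative docker-compose.yml: OpenMRS on Tomcat, backed by MySQL.
# Image tags and credentials are placeholders.
mysql:
  image: mysql:5.6
  environment:
    MYSQL_ROOT_PASSWORD: openmrs
    MYSQL_DATABASE: openmrs
openmrs:
  image: tomcat:7-jre7
  links:
    - mysql
  volumes:
    - ./openmrs.war:/usr/local/tomcat/webapps/openmrs.war
  ports:
    - "8080:8080"
```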

@craigappl, @burke mentions you’re interested in this topic. Any interest in setting up some example Docker containers that produce an OpenMRS dev environment?


I’ve been developing (quietly for now) a Vagrant+Chef setup for ID Dashboard, and may consider doing this. Previously, new developers would just use our which would set up openldap, but that service is going in a different direction… which has led me to doing this… primarily for me, but FOSS is about sharing… For whatever reason, slapd refuses to install from the Linux Mint package repository.

I considered Docker but figured a VM would be better…

Hey Robby,

While I have absolutely nothing against Chef, why did you choose that instead of Puppet?

Right now we have at least the refapp and Bamboo agents on Puppet, and I don’t want it to be something only a couple of people are willing to change.

About Docker containers for development, I think they’re really good. The remaining problem is Windows, but it works so nicely on both Linux and Mac that I think it’s worth it. It’s not a really complex tool. It of course doesn’t reproduce production (unless you are deploying the Docker containers themselves), but containers are much faster than Vagrant.

It shouldn’t be a lot of work, but I absolutely won’t have time to do it in the next few weeks. I can help if someone else takes it on, even just by reviewing. Docker is now available for Bamboo agents.

I had the idea to set up an OpenMRS account on Docker Hub containing an image for each platform distribution from 1.9.8 onward and for Reference Application releases from 2.0 to present. This would allow anyone to quickly test against these released versions. In the future, it would be awesome to add the capability to the Dockerfile to pull the nightly builds of each module.
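If such an account existed, testing against a released version would be as simple as the following (the image names below are hypothetical; no such openmrs/* repositories are implied to exist on Docker Hub yet):

```shell
# Hypothetical image names, for illustration only.
docker pull openmrs/platform:1.9.8
docker run -d -p 8080:8080 openmrs/platform:1.9.8
```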

I ran into a barrier using Boot2Docker on Windows. We’re not able to use Docker Compose with it, which makes it challenging to set up separate containers for each service — for example, one container for MySQL and one for Tomcat.

I created this repo for OpenMRS 2.2 based on @burke’s work.


Ignorance, mainly :slight_smile: It’s easy to work with.

I am very interested in this area and would like to contribute. I think the best way would be to have clear guidelines on how we want to achieve this, to avoid having different people coming up with different strategies. This is particularly important because I believe we have different levels of understanding of how Docker works. I am worried that if we leave everyone to try their own thing, it might be very difficult to consolidate these efforts in the future.

For example, some of the questions that come to mind are:

  1. Are you planning to bundle both mysql & tomcat in the same image?
  2. Well… I guess I have one question for now :smiley:

I have been using the plain old mysql docker image with OpenMRS running with maven/jetty and that works pretty well for a simple disposable dev environment.
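That workflow is roughly the following (assuming the stock mysql image and a checkout of openmrs-core; the password, database name, and port are up to you):

```shell
# Disposable MySQL for development, using the stock image.
docker run -d --name openmrs-mysql \
  -e MYSQL_ROOT_PASSWORD=openmrs \
  -e MYSQL_DATABASE=openmrs \
  -p 3306:3306 mysql:5.6

# Then run OpenMRS itself natively against it:
cd openmrs-core/webapp && mvn jetty:run
```

Throwing the environment away is just `docker rm -f openmrs-mysql`.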

If you wanted to test with tomcat I’m pretty sure you could just use the default tomcat image and mount the .war file in the correct directory and link the containers. So we may not need any new images just a few example commands to get people started?
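Something along these lines, perhaps (an untested sketch; /usr/local/tomcat/webapps is the default webapps path for the official tomcat image):

```shell
# Stock images only: link a Tomcat container to a MySQL one and mount a
# locally built openmrs.war into Tomcat's webapps directory.
docker run -d --name openmrs-mysql \
  -e MYSQL_ROOT_PASSWORD=openmrs mysql:5.6

docker run -d --name openmrs-tomcat \
  --link openmrs-mysql:mysql \
  -v "$(pwd)/openmrs.war:/usr/local/tomcat/webapps/openmrs.war" \
  -p 8080:8080 tomcat:7-jre7
```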

Also, docker compose looks interesting. I haven’t touched it at all, so I don’t know much but maybe a docker-compose.yml file would be better for a dev environment so we could make use of existing (and maintained by someone else :slight_smile: ) images.

I’d even take a step back from this and ask “what do people want to achieve?” Personally, I’d like to see:

  • easy to load a particular OpenMRS version (e.g. for one project I want to have OpenMRS 1.9.7, but that’s not my main version…)
  • easy to load the set of modules you want
  • easy to load a particular db backup (e.g. you need to debug a data-specific problem from a real environment)
  • ability to play around with different databases (this is sort of an invented need, though)
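The database-backup case, for instance, falls out of the stock mysql image almost for free (an illustrative command; the initdb directory also accepts .sql.gz dumps):

```shell
# Spin up a throwaway database pre-loaded with an implementation's dump;
# remove the container when done and nothing lingers on your machine.
docker run -d --name debug-db \
  -e MYSQL_ROOT_PASSWORD=openmrs \
  -v "$(pwd)/their-backup.sql:/docker-entrypoint-initdb.d/backup.sql" \
  -p 3307:3306 mysql:5.6
```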

Do people have specific tasks or goals in their typical work that they could imagine being helped by Docker?

(That said, docker descriptors tend to be straightforward enough that I think it’s fine to experiment, and have multiple competing approaches, at least for now…)


Something like having a second option to the standalone? :slight_smile:


Why is Vagrant not good for these? @teleivo has been making some nice Vagrant/Puppet VM scripts that have specific modules like Radiology in a deployed state. It’s really quick and easy to deploy and test something using these images… Bhamni (see the spelling) also has a deployable Puppet/Vagrant instance… Virtual environments like Docker don’t provide enough isolation. With RAM being cheap these days and CPUs having VT, why are VMs through Vagrant not the direction we want to go with this?


Can you say more about this? Such a statement is certainly contrary to what I’ve been hearing through multiple industry evaluations such as this one by Gartner.


A VE is a lightweight VM. A VE uses the underlying kernel and hardware while recreating the application layer. This means that although you’ll get a cleaner OS instance, it will still have the same quirks (missing drivers, same RAM, etc.). Instead, in our dev environments, we want to get rid of local quirks: say, a Linux env for a Windows dev, or different RAM configurations for performance testing.

The Gartner article you cite misses the point that rootkits (since you are sharing the kernel) are still a persistent security problem, but for our dev environments we likely don’t care much about that…


@dkayiwa, I don’t think of this as a full replacement for the standalone. Personally I’m interested in docker to make dev environments easier, whereas there’s a use case for the standalone for “light” production use that I don’t think we need to replicate.

@sunbiz, I like Vagrant also, and it’s more prod-like but in my limited experience I have found it heavyweight for dev setups, especially for the scenario where OpenMRS is one component in a system. Docker seems to make it easier to be running some components of the system virtually, while running the code you are actively working on directly on your dev machine. E.g. in the OpenSRP scenario I need to have OpenMRS + PostgreSQL + CouchDB + ActiveMQ + OpenSRP + emulated android client. I don’t want to install all these locally, and I also fear that a vagrant VM would be huge and monolithic. With Bahmni their starter vagrant box (including OpenMRS, OpenELIS, and Odoo) is 6GB and I have never even been able to download it. I presume they could have made it smaller, but the docker version I’ve seen from them was much easier to set up.

Aside: “Bahmni” is actually the correct spelling. The name of the GitHub org is misspelled as “Bhamni”. (Really. See

@darius, I see no reason for it to be huge or monolithic. The site.pp files should execute the install for all those packages, either using apt-get or whatever package manager is available. But the OS image is probably the largest download. One can use Packer, or remove free space and let the disk expand as one works on an image. Startup time is likely the biggest hassle for many. For me, VMware has really helped to speed up images, and I don’t ever stop them; I just pause them, and they start in a flicker.

@darius I agree with the needs you listed above, those would be what I’d like to see as well. Also, the last need isn’t invented. I used docker when comparing performance between mysql 5.5 and 5.6 and it was great to be able to switch out the database easily.

On vagrant vs docker, I’ve never really been a fan of VMs. When I’m developing I want the full power that my laptop has to offer. I usually get frustrated at some stage with a slow vm and end up installing natively. So, for me docker gives me the ability to run stuff natively while also not clogging up my laptop with stuff I don’t need/want running all the time.

I think Docker is great, but I still use Vagrant most of the time (probably out of habit). First, unless you’re running Linux, Docker still runs in a VM, so you don’t really get native performance. Second, just like with Docker, you can mount your working directory in the VM so that code changes get reflected immediately.

My 2¢.

My team is very interested in using docker as well.

We would like to use Docker to more easily deploy OpenMRS, as well as other things, on our build server. We’ve currently got a few different instances of OpenMRS running, and I would like to better isolate each. We also have a bunch of other services running on the same box (CI, issue tracking, Sonar, MySQL, etc.), and we hope to eventually set up containers for each of those as well.

Even if docker doesn’t provide perfect isolation, it’s better than nothing at all. We recently had an issue with our CI service because someone (ok ok… it was me!) installed Java 8 for another service and updated the default version system-wide. I would like to ensure we don’t run into that kind of issue again.


Docker vs Vagrant (including answers by its creators)
