Dockerizing development and deployment of OpenMRS


Thanks @mseaton. At U-wash we already run Selenium QA tests as Docker builds.


I see TRUNK-6083 and PR #4094. Can you provide a brief update on progress/plans?

I’d like to pull together teams already working with Docker (e.g., UCSF, AMPATH, PIH, Mekom, Bahmni, etc.) in the next week or two to begin aligning on a best-practice approach to using Docker for build/deployment. This will certainly be a journey, but I think it will help to have concrete artifacts against which orgs can compare their approach and to anchor the conversation on real decisions (e.g., “why not put these files under that folder?”) instead of having a more abstract conversation about containers. My hope is we can use your efforts as the straw man and combine the lessons & best parts of everyone’s experience to align on an approach that can become a new standard for development & deployment within the community.

There’s a new day coming soon where we can reduce the churn of issues like this. :slight_smile:

1 Like

@burke, I’ll update the PR on Monday with recent changes and review feedback, and adjust CI to build and push nightly images to Docker Hub. Next in line will be Ref App 2.x and Ref App 3.x, which will be based on the openmrs-core image. They both should land on Docker Hub by the end of next week. I’ll also work with @corneliouzbett to provide guidance on building and developing modules with Docker.

Once things have been working for a while and we’ve adjusted based on input from others, we will be able to make the first releases to Docker Hub with the new Docker builds.

@achachiez and @mksrom :eyes:?

I’m happy to announce that we now have openmrs-core images published to Docker Hub!

I’ll be adding more documentation around the use of these images in the coming days, but in short we have two image variants.

The first one is tagged as dev and contains the latest build of openmrs-core master together with Maven, the Maven repo cache, and the openmrs-core sources and binaries. It is there for devs and for the development of modules and distros. I still need to adjust the development server (Jetty, for the time being) to start up a fully initialised OpenMRS instead of requiring you to go through the setup wizard (similar to the SDK).
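For module developers, a minimal usage sketch of how such a dev image could compile a module without a local Maven/JDK install — the mount path, working directory, and Maven goal here are my assumptions for illustration, not a documented workflow:

```shell
# Hypothetical sketch: build an OpenMRS module using the dev image's bundled
# Maven and repo cache. The /openmrs_module mount path is an assumption.
docker run --rm \
  -v "$(pwd)":/openmrs_module \
  -w /openmrs_module \
  openmrs/openmrs-core:dev \
  mvn clean package
```

Because the Maven repo cache lives in the image, repeated builds avoid re-downloading dependencies.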

The second image variant is tagged as nightly and contains Tomcat and the latest WAR built from the openmrs-core master branch. It doesn’t contain any build tooling, so it’s much smaller, and it is targeted at production deployments. Please note that it is still a work in progress and we will go through a few cycles to improve it and make it truly production-worthy. Once ready, we will create an actual release and publish it to Docker Hub. As part of this work I would like openmrs-core to read properties from environment variables in addition to a properties file. That is more common for containers and will make the extra step of storing properties in a file needless.
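To illustrate the environment-variable idea, a hedged docker-compose sketch — the variable names below are made up for illustration; no such contract exists yet:

```yaml
# Hypothetical sketch only: running the nightly image with configuration
# supplied via environment variables instead of a properties file.
# All OMRS_* variable names here are illustrative assumptions.
services:
  openmrs:
    image: openmrs/openmrs-core:nightly
    ports:
      - "8080:8080"
    environment:
      OMRS_DB_HOSTNAME: db    # hypothetical; would replace connection.url in the properties file
      OMRS_DB_NAME: openmrs   # hypothetical
  db:
    image: mariadb:10.3
    environment:
      MYSQL_DATABASE: openmrs
      MYSQL_ROOT_PASSWORD: example
```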

Both images are now built and pushed through CI with every commit. See here.

I have also added an initial Dockerfile and docker-compose for Reference Application 2.x here. It shows the direction we are heading in: builds done in Docker, using openmrs-core as a base image. I still need to make changes to CI, deprecate the old way of building, and add documentation.
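To make the “openmrs-core as a base image” direction concrete, a hedged Dockerfile sketch — the base tag and target paths are my assumptions, not the actual file:

```dockerfile
# Hypothetical sketch of a distro Dockerfile layered on the core image.
# The base tag and directory layout are assumptions for illustration.
FROM openmrs/openmrs-core:nightly

# Add the distro's modules and configuration on top of openmrs-core.
COPY modules/ /openmrs/distribution/openmrs_modules/
COPY configuration/ /openmrs/distribution/openmrs_config/
```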

In the next two weeks I will be applying the same approach to building OpenMRS 3.x.


Yay!! Rafal, it will be game-changing to improve the OpenMRS 3 build process :smiley: At the moment a number of us have been getting discouraged about the state of our dev (“dev3”) to production/demo (“o3”) pipeline, but no one has had enough time to improve it.

We want to be able to regularly test in test3, and release features into o3.

Ultimately we need a new O3 RefApp build pipeline - @ibacher keeps finding that the Bamboo builds are failing. We also really want an intermediary “test3” environment, but we haven’t set it up purely because of this broken pipeline issue.

Two things that would be awesome for you to use when you start this work next week:

  • Ian has already started an approach (here’s his draft) that would leverage a dockerized process; he just doesn’t have time to finish it. He estimates it would take ~1 week to finish up, but he can’t do that himself right now… so I’m hoping you can?
  • FWIW, here’s a diagram we made to try to explain the current vs. desired O3 pipeline steps (yeah, I know it’s awkward, but I hope it helps show you relatively quickly what we have vs. what’s a problem): O3 Release Pipeline 2022-02 - Google Slides

CC @dkayiwa @burke @zacbutko @bistenes @ibacher

Grace, thanks for keeping an eye on this. I’ll look into what you have just provided next week. Could you please correct the link to Ian’s approach in the meantime? It points to the OCL importer…

1 Like

Whoops… everything I put there was in this commit which is the 3.x-new-build branch.

The main idea was:

  1. Push all the build stuff into Docker images
  2. The only build output we “care” about from the packaging step is what used to wind up in the package subproject (the configuration, the WAR, the OMODs, and the ESMs / single-page app), so the first step was to package that as a Docker image.
  3. The images for the 3 components (frontend / backend / proxy) have two potential targets: a “bare” image, which is just the container (nginx for the frontend, Tomcat for the backend) plus scripts that can be used via Docker volumes to mount the WAR / OMODs / ESMs, and an “embedded” image, which is the same image with the contents copied in from the image described in #2.
  4. The 3 components are then published to Docker Hub more or less as they currently are (i.e., tagged as latest).
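The “bare” variant in step 3 could be used roughly like this — a sketch under my own assumptions about image names and paths, not the actual files:

```yaml
# Hypothetical sketch: running the "bare" frontend image and mounting the
# built ESMs into it via a volume. The "embedded" variant would instead
# COPY the same files in at image build time. Paths/names are assumptions.
services:
  frontend:
    image: openmrs/openmrs-reference-application-3-frontend:latest
    ports:
      - "80:80"
    volumes:
      - ./spa-build:/usr/share/nginx/html   # mount the SPA/ESMs into nginx
```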

From that point, I wanted to change the delivery process so that instead of targeting a specific NPM tag, we target specific NPM versions. Creating an image for the “test” environment is then just a matter of re-tagging a “latest” image, and once we decide the “test” version is OK, we can retag that image with a version and as “demo”.
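The promotion flow described here might look something like the following — image names and tags are illustrative assumptions:

```shell
# Hypothetical sketch of promotion-by-retagging; names/tags are assumptions.
docker pull openmrs/openmrs-reference-application-3-frontend:latest
docker tag  openmrs/openmrs-reference-application-3-frontend:latest \
            openmrs/openmrs-reference-application-3-frontend:test
docker push openmrs/openmrs-reference-application-3-frontend:test
# Once "test" is verified, retag the same image with a version and as "demo".
```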

1 Like

Agh sorry about that Rafal. Copy-paste melodrama. Thanks Ian for the right link (and I’ve updated above as well).

Hi @raff, how did it go last week with beginning this work? Any quick/brief updates?

We were just wondering today in the QA Team whether the server might be up soon :slight_smile: But we know this is pending the overall O3 pipeline improvement work first.

EDIT: Just heard you were sick last week, I’m sorry!! Apparently @ibacher pushed ahead with the build process to solve the problem of running out of disk space, because the frontend was being copied to 4 different places across our CI.

Here’s what he did last week:

  • Changed the O3 build so we’re now using Docker to build everything
    • Moved the frontend so it’s built purely from npm instead of using the SDK to build it. Reason: npm is a moving target; we had some hardcoded npm versions, and it seemed increasingly hard to maintain those versions correctly.
  • So now we have a build process that builds Docker images
  • He’s created deploy processes that download those Docker images and then re-tag them
  • The downside: the version #s for everything need to be manually maintained in the repository - we lose the ability to automatically see exactly what versions were used; you’d have to look into the source code
  • So now we are on the latest version of everything that was marked “latest” as of Fri July 1

The deploy process still isn’t properly functioning but he’s going to look into that this week and hopefully get the O3 test server up.

CC @zacbutko @eudson @samuel34

Thanks @grace for communicating the changes and @ibacher for moving it forward. All the changes are in the right direction. I will make follow-up improvements to the process today:

  1. Make backend use the openmrs-core image.
  2. Extract out gateway build to a separate repository i.e. openmrs-gateway. We will use that to provide a general gateway service for OpenMRS. In the future it may include maintenance pages, load balancing, etc.
  3. Add docker-compose to orchestrate the whole build.
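
A hedged sketch of what item 3 might look like — service names and build contexts are my assumptions, not the actual setup:

```yaml
# Hypothetical sketch of a docker-compose orchestrating the whole build.
# Service names and build contexts are assumptions for illustration.
services:
  backend:
    build: ./backend     # would be FROM openmrs/openmrs-core (item 1)
  frontend:
    build: ./frontend
  gateway:
    build: ./gateway     # to be extracted into openmrs-gateway (item 2)
    ports:
      - "80:80"
    depends_on:
      - backend
      - frontend
```
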
1 Like

A small update here. The reference application distro build now uses the openmrs-core image for both the 2.x and 3.x builds. There’s also a docker-compose for each build to build and run it locally. Next on my TODO list:

  1. Build ARM images on our CI in addition to x64. It’s mostly to support developers on Apple’s M1 chips.
  2. Extract out gateway as a general service to be used by any distro.
  3. Adjust npm builds to use pre-release versioning so that we are able to deploy the latest “un-released” version of 3.x in any environment including locally.
  4. Enable build caching on CI for distros (docker buildkit) to speed up builds.
  5. Backport docker build to other openmrs-core branches.
  6. Do release of openmrs-core maintenance branches with the new docker image.
  7. Do release of reference application 2.x and 3.x with the new docker image.
  8. Update docs.
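For items 1 and 4, a hedged sketch of what a multi-arch BuildKit build on CI could look like — the platform list, tag, and cache source are assumptions:

```shell
# Hypothetical sketch: multi-arch build with Docker Buildx (BuildKit).
# The platforms, tag, and registry cache ref are assumptions.
docker buildx create --use
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --cache-from type=registry,ref=openmrs/openmrs-core:cache \
  -t openmrs/openmrs-core:nightly \
  --push .
```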

I expect all of this to be done this month. That will conclude the initial dockerization.

Next I’ll be looking into speeding up backend development by disabling hot-reloading of modules in openmrs-core in favour of faster startup and hot-reloading of classes during development. This is experimental, so it’s hard to give an exact timeline before I do some investigation. I’ll be able to say more early next month.

1 Like

Hi @raff, thanks especially for this. We at Mekom deploy OpenMRS on ARM devices, so that’s not only for devs with M1 chips but also for prod use!


@raff is this still the best image for folks to be using? (Asking because this is what @binduak & team is currently using for the Bahmni upgrade to Platform 2.4.2)

@grace I forgot about the openmrs-distro-platform. It needs to be updated as well. I’ll do that next week.

@raff These images seem to end up with broken file permissions. Trying to run openmrs-distro-referenceapplication from the 3.x branch (with the CI-built images) results in this error:

cp: cannot create directory '/openmrs/data/modules': Permission denied

The error reported on Slack for Initializer seems related.
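For context, one common remedy for this class of error is to ensure the data directory is owned by the non-root user the container runs as; a hedged Dockerfile sketch — the user and path names are assumptions, not the actual fix that was applied:

```dockerfile
# Hypothetical sketch of fixing data-directory ownership. The "openmrs" user
# and /openmrs/data path are assumptions, not the actual fix.
FROM openmrs/openmrs-core:nightly
USER root
RUN mkdir -p /openmrs/data/modules \
    && chown -R openmrs:openmrs /openmrs/data
USER openmrs
```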

@ibacher the issue should be fixed now.

A small update on the progress… It took a bit longer than expected to convince our CI to build ARM images, but it is finally working. ARM images are available for openmrs-core, openmrs-reference-application, and openmrs-reference-application-3-backend/-frontend/-gateway.

Please note that I unified the image name of reference application 2.x with 3.x and we now publish openmrs-reference-application instead of openmrs-reference-application-distro. I will apply the same to openmrs-distro-platform, which will become openmrs-platform.


Thanks for the heads up. I’m trying to think of all the documentation where we need to redirect people. Probably as a start.

So I mentioned this pedantic concern of maintaining our links across our spaghetti-pile of documentation/wiki pages to @burke today, and he had a good suggestion - we could have standard short-links for each image. That would give us just one single place to maintain (i.e. in our short-link admin). How do this idea and these specific suggestions sound to you, @raff @ibacher?


Am I missing anything? (Totally open to different suggestions.)

@grace, it is a good idea. We should have in addition to the above. I imagine would point to Docker Hub and from there we could document frontend and gateway images are also needed to complete the stack.

That said, when referring to those images from docker-compose.yml or the command line, one would still need to write openmrs/openmrs-reference-application-3-backend, etc.

1 Like