Dockerizing development and deployment of OpenMRS


As the world has moved to containers for development and deployment, we will be working on ways for OpenMRS to follow the container approach.

OpenMRS is not new to Docker. Containers have been gradually introduced by the OpenMRS SDK, e.g. for providing runtimes and distro Docker builds. Yet the time has come to step it up.

The use of containers for application development and deployment gives strict control over dev and production environments and keeps them aligned. Developers write and debug applications in isolated environments that more closely match production deployments, which eliminates many infrastructure issues. In addition, the dev community can better respond to security issues and upgrade the infrastructure that OpenMRS runs on.

As part of this work we will be introducing practices and a toolset for common administration tasks such as upgrades and backups. Later down the road we will develop common practices for running OpenMRS in a cluster and enable horizontal scaling.

Do not be put off by all this. It is going to be introduced transparently and you will have a choice whether to use it or not, yet we will continue to push towards the container-first approach. We are hoping that eventually we will all be running in containers.

To start with I will be focusing on:

  1. Migrating the OpenMRS 3.x build to Docker. The assembly of artifacts will happen inside Docker builds, as opposed to producing a Docker image from artifacts assembled by the OpenMRS SDK outside of Docker. We will benefit from Docker caching and multi-stage builds to speed up the whole process (see the sketch after this list). I will share more details as soon as I have a proof of concept.

  2. Dockerizing our main repositories, including openmrs-core, openmrs-esm-core, etc., so that we can develop in Docker and produce proper Docker images.
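For item 1, here is a minimal sketch of what a multi-stage build could look like; the base images, paths and single-war layout are illustrative assumptions, not the actual build files:

```dockerfile
# Sketch of assembling artifacts inside a multi-stage Docker build;
# base images and paths are illustrative assumptions.
FROM maven:3-jdk-8 AS build
WORKDIR /app
COPY . .
# Build inside the image so every environment uses the same toolchain and
# Docker layer caching can skip unchanged steps.
RUN mvn -B clean package

FROM tomcat:8.5-jdk8
# Ship only the assembled artifact, keeping the runtime image small.
COPY --from=build /app/target/openmrs.war /usr/local/tomcat/webapps/openmrs.war
```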

Please do reach out if you already use containers in production deployments of OpenMRS. We would like to learn from your experience and make it better for everyone.

17 Likes

This will be great @raff ! Our team at UW has been doing a lot of work over the past few years on CI pipelines and dockerization of OpenMRS (and other tools). Be sure to reach out to @mozzy and @pmanko to learn from and leverage what we’ve experienced, built, and learned. I’m hoping that your vision has some similarities (I think I recall Grace saying that we’re aligned with where you say you’re headed here).

2 Likes

Thanks @janflowers!

@mozzy and @pmanko have you been utilising Dockerfiles produced by OpenMRS SDK for your builds? Did you make any tweaks to them? If so, would you be able to share your Dockerfiles?

Are you using docker-compose or deploying to some cluster?

1 Like

Thanks @raff for starting work on this. I believe it will greatly improve the developer experience with OpenMRS.

On our side, we basically use docker-compose for our deployments.
We include Maven builds, assembly of artifacts, etc. as part of our Docker builds.
We make use of GitHub Actions to integrate this as part of our PR workflows.
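For illustration only (not our actual workflow file), a PR-triggered GitHub Actions job that performs the Maven build inside a Docker build could look roughly like this; the image name is made up:

```yaml
# Hypothetical PR workflow that performs the build inside "docker build".
name: docker-build
on: [pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build Docker image (mvn runs inside the Dockerfile)
        run: docker build -t isante-server:pr-${{ github.event.number }} .
```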

Here are some of our Dockerfiles: isante-server , isante-db

2 Likes

@raff - I’m here with @samuel34 in person and we’re wondering if there are any updates on item #1 above :slight_smile:

@grace, I’ve been experimenting a bit and I will be proposing the following structure:

  • openmrs core docker image as a main image to be extended by other images and not directly for deployments
  • openmrs platform image built on top of the openmrs core image to be extended by other images and not directly for deployments
  • openmrs infra docker image as a complementary service to be used to do infrastructure maintenance like scheduled DB backups, DB restore scripts, services status page, maintenance page, etc.
  • openmrs distro images built on top of the openmrs core, platform or other distro images, e.g. RA. In openmrs distro images we will be adding custom modules, metadata and the old-style UI. One will be able to take any openmrs distro image and build a customized distro image based on it. It will naturally transform into a backend-only service once the new 3.x UI replaces the old-style UI (see the sketch after this list).
  • openmrs distro UI images with the new 3.x UI, which will be built and deployed as a separate service. Builds and deployments of a UI service will be independent of a backend service, thus reducing the build and deployment time of the whole distro when a change is applied only to UI or backend.
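As an example of the distro layer, a distro image could extend the core/platform image roughly like this; the tag and target directories are assumptions for illustration:

```dockerfile
# Hypothetical distro Dockerfile layered on the openmrs core/platform image;
# the tag and directory layout are illustrative only.
FROM openmrs/openmrs-core:nightly

# Add distro-specific modules, metadata and (for now) the old-style UI.
COPY modules/*.omod /openmrs/distribution/openmrs_modules/
COPY configuration/ /openmrs/distribution/openmrs_config/
```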

This structure follows pretty much the same approach that proved to work well when creating distros with the OpenMRS SDK, and it will naturally coexist with the SDK approach. The slight modifications are extracting the 3.x UI into a separate build and image, and introducing yet another service to orchestrate some of the OpenMRS administration tasks.

Given this layout I’ve started from the 2nd point listed in my original post. I expect to have a proof of concept to share within a week or two.

4 Likes

This is great. @mohant and others in the Bahmni Docker sub-groups have done some interesting work on top of base openmrs (backend only). It may be worthwhile to have discussions.

This is great @raff. I especially like how infra images can emerge to enable the community to converge on recommended, well-tested tooling for things like backups, status page, etc. On running OpenMRS, I assume you have seen and are familiar with the work we started here: GitHub - PIH/docker-openmrs-server: Library of Docker Images for OpenMRS Distributions. I’d be more than happy to move this over to the “openmrs” organization on GitHub and evolve it if that makes sense. @ibacher has already done a lot to contribute to this as well.

One area that I don’t see listed here but am curious about is whether there are plans to utilize Docker to standardize building and testing of OpenMRS.

For example, in Bamboo we currently run a native Maven 3 executable with a native JDK installation, which is controlled by those who have admin access to Bamboo. Reproducing the same build as Bamboo (e.g. to troubleshoot build errors) requires figuring out exactly which versions of these Bamboo is using and then trying to run them locally. I’m interested in thoughts on whether building with Docker, and having standard Docker images (likely based on maven:3-jdk-8) that we build with, might start to make sense. These would evolve into more complex cases (e.g. able to execute Selenium tests, able to deploy to a Maven repository given appropriately configured environment variables, etc.).
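For instance, assuming a stock maven:3-jdk-8 image were adopted as the standard build environment, anyone could reproduce the CI build locally with something like:

```bash
# Run the same Maven/JDK combination CI would use, mounting the checkout
# and reusing the local Maven repository cache between runs.
docker run --rm \
  -v "$PWD":/workspace -w /workspace \
  -v "$HOME/.m2":/root/.m2 \
  maven:3-jdk-8 mvn -B clean install
```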

1 Like

Thanks @mseaton. At UW we already run Selenium QA tests as Docker builds.

@raff,

I see TRUNK-6083 and PR #4094. Can you provide a brief update on progress/plans?

I’d like to pull together teams already working with Docker (e.g., UCSF, AMPATH, PIH, Mekom, Bahmni, etc.) in the next week or two to begin aligning on a best-practice approach to the use of Docker for build/deployment. This will certainly be a journey, but I think it will help to have concrete artifacts against which orgs can compare their approach and to anchor the conversation on real decisions (e.g., “why not put these files under that folder” instead of having a more abstract conversation about containers). My hope is we can use your efforts as the straw man and combine the lessons & best parts of everyone’s experience to align on an approach that can become a new standard for development & deployment within the community.

There’s a new day coming soon where we can reduce the churn of issues like this. :slight_smile:

1 Like

@burke, I’ll update the PR on Monday with recent changes and based on reviews, and adjust CI to build and push nightly images to Docker Hub. Next in line will be Ref App 2.x and Ref App 3.x, which will be based on the openmrs-core image. They should both land on Docker Hub by the end of next week. I’ll also work with @corneliouzbett to provide guidance on building and developing modules with Docker.

Once we have things working for a while and adjusted based on input from others, we will be able to make the first releases to Docker Hub with the new Docker builds.

@achachiez and @mksrom → :eyes:?

I’m happy to announce that we now have openmrs-core images published to Docker Hub!

I’ll be adding more documentation around the use of these images in the coming days, but in short we have 2 image variants.

The first one is tagged as dev and contains the latest build of openmrs-core master with Maven, the Maven repo cache, and the openmrs-core sources and binaries. It is there for devs and for development of modules and distros. I still need to make adjustments to the development server, which is Jetty for the time being, so that it starts up a fully initialised OpenMRS instead of requiring you to go through the setup wizard (similarly to the SDK).

The second image variant is tagged as nightly and contains Tomcat and the latest war built from the openmrs-core master branch. It doesn’t contain any build tooling, so it is much smaller, and it is targeted at production deployments. Please note that it is still a work in progress and we will go through a few cycles to improve it and make it truly production worthy. Once ready, we will create an actual release and publish it to Docker Hub. As part of the work I would like openmrs-core to read properties from environment variables in addition to a property file. That is more common for containers, and it will make the extra step of storing properties in a file in startup.sh unnecessary.
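As a sketch of where this is heading, a compose file for the nightly variant could wire the database connection through environment variables; the OMRS_* names below are placeholders rather than a confirmed contract:

```yaml
# Running the nightly image next to a database; the OMRS_* variable names are
# illustrative, pending the environment-variable support described above.
version: "3.7"
services:
  openmrs:
    image: openmrs/openmrs-core:nightly
    ports:
      - "8080:8080"
    environment:
      OMRS_DB_HOSTNAME: db        # hypothetical variable names
      OMRS_DB_USERNAME: openmrs
      OMRS_DB_PASSWORD: openmrs
    depends_on:
      - db
  db:
    image: mariadb:10.3
    environment:
      MYSQL_DATABASE: openmrs
      MYSQL_USER: openmrs
      MYSQL_PASSWORD: openmrs
      MYSQL_ROOT_PASSWORD: openmrs
```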

Both images are now built and pushed through CI with every commit. See here.

I have also added an initial Dockerfile and docker-compose for Reference Application 2.x in here. It shows the direction we are heading in. We are moving towards builds done in Docker and using openmrs-core as a base image. I still need to make changes to CI, deprecate the old way of building, and add documentation.

In the next two weeks I will be applying the same approach to building OpenMRS 3.x.

7 Likes

Yay!! Rafal, it will be game-changing to improve the OpenMRS 3 build process :smiley: At the moment a number of us have been getting discouraged about the state of our dev (“dev3”, dev3.openmrs.org) to production/demo (“o3”, o3.openmrs.org) pipeline, but no one has had enough time to improve it.

We want to be able to regularly test in test3, and release features into o3.

Ultimately we need a new O3 RefApp build pipeline - @ibacher keeps finding that Bamboo is not succeeding. We also really want an intermediary “test3” environment but we haven’t set this up purely because of this broken pipeline issue.

Two things that would be awesome for you to use when you start this work next week:

  • Ian has already started an approach (here’s his draft) that would leverage a dockerized process; he just doesn’t have time to finish this. He estimates this would take ~1 week to finish up but can’t do that himself right now… so I’m hoping you can?
  • FWIW here’s a diagram we made to try and explain the current vs desired O3 pipeline steps: (yeah I know it’s awkward but I hope this helps show you relatively quickly what we have vs what’s a problem): O3 Release QA Pipeline Process 2022, 2023 - Google Slides

CC @dkayiwa @burke @zacbutko @bistenes @ibacher

Grace, thanks for keeping an eye on this. I’ll look into what you have just provided next week. Could you please correct the link to Ian’s approach in the meantime? It points to the OCL importer…

1 Like

Whoops… everything I put there was in this commit, which is on the 3.x-new-build branch.

The main idea was:

  1. Push all the build stuff into Docker images
  2. The only result of the build we “care” about from the packaging stuff is what used to wind up in the package subproject (this has the configuration, the WAR, the OMODs and the ESMs / single page), so the first step was to package that as a Docker image.
  3. The images for the 3 components (frontend / backend / proxy) have two potential targets: a “bare” image, which is just the container (nginx for the frontend, Tomcat for the backend) plus scripts that can be used via Docker volumes to mount the WAR / OMODs / ESMs, and an “embedded” image, which is the same image with the contents copied in from the image described in #2 (see the sketch after this list).
  4. The 3 components are then published to Docker Hub more or less as they currently are (i.e., tagged as latest).
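A rough sketch of the bare/embedded idea for the backend component (all image names and paths here are made up, not the actual build files):

```dockerfile
# Illustrative multi-target Dockerfile; "packaging" stands in for the image
# described in step 2 and every name/path is an assumption.
FROM openmrs-refapp-packaging AS packaging

FROM tomcat:8.5-jdk8 AS bare
# Bare target: only the container plus startup scripts; the WAR / OMODs are
# expected to be mounted as Docker volumes at runtime.
COPY startup.sh /usr/local/bin/startup.sh
CMD ["/usr/local/bin/startup.sh"]

FROM bare AS embedded
# Embedded target: the same image with the packaged contents copied in.
COPY --from=packaging /package/openmrs.war /usr/local/tomcat/webapps/openmrs.war
COPY --from=packaging /package/modules/ /openmrs/modules/
```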

From that point, I wanted to change the delivery process so that instead of targeting a specific NPM tag, we’re targeting specific NPM versions, so that creating an image for the “test” environment is just a matter of re-tagging a “latest” image; then, when we decide the “test” version is OK, we can retag that image with a version and as “demo”.
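Promotion by retagging could then be as simple as the following (the repository and tag names are illustrative):

```bash
# Promote a vetted "latest" image to the test environment without rebuilding.
docker pull openmrs/frontend:latest
docker tag  openmrs/frontend:latest openmrs/frontend:test
docker push openmrs/frontend:test

# Once "test" looks good, retag with a version and as "demo".
docker tag  openmrs/frontend:test openmrs/frontend:3.0.0
docker tag  openmrs/frontend:test openmrs/frontend:demo
docker push openmrs/frontend:3.0.0 && docker push openmrs/frontend:demo
```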

1 Like

Agh sorry about that Rafal. Copy-paste melodrama. Thanks Ian for the right link (and I’ve updated above as well).

Hi @raff, how did it go last week with beginning this work? Any quick/brief updates?

We were just wondering today in the QA Team whether the test3.openmrs.org server might be up soon :slight_smile: But we know this is pending the overall O3 pipeline improvement work first.

EDIT: Just heard you were sick last week, I’m sorry!! Apparently @ibacher pushed ahead with the build process, to solve the problem of running out of disk space b/c the frontend was being copied in 4 different places across our CI.

Here’s what he did last week:

  • Changed O3 build so we’re now using Docker to build everything
    • Moved the frontend so it’s built purely from npm instead of using the SDK to build it. Reason: npm is a moving target; we had some hardcoded versions for the version of npm we use, and it seemed this was getting increasingly hard to maintain correctly.
  • So now we have this build process that builds Docker images
  • He’s created deploy processes that download those Docker images and then re-tag them
  • The downside: the version numbers for everything need to be manually maintained in the repository - we lose the ability to see automatically exactly what versions were used; you’d have to look in the source code
  • So now we are on the latest version of everything that was marked “latest” as of Fri July 1

The deploy process still isn’t properly functioning but he’s going to look into that this week and hopefully get the O3 test server up.

CC @zacbutko @eudson @samuel34

Thanks @grace for communicating the changes and @ibacher for moving it forward. All the changes are in the right direction. I will make follow-up improvements to the process today:

  1. Make backend use the openmrs-core image.
  2. Extract out gateway build to a separate repository i.e. openmrs-gateway. We will use that to provide a general gateway service for OpenMRS. In the future it may include maintenance pages, load balancing, etc.
  3. Add docker-compose to orchestrate the whole build.
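For point 3, the orchestration could look roughly like this, with the gateway routing to the frontend and backend; all image names are placeholders:

```yaml
# Sketch of docker-compose orchestrating gateway, frontend and backend;
# image names are placeholders for the distro's services.
version: "3.7"
services:
  gateway:
    image: openmrs/openmrs-gateway:latest   # hypothetical, see point 2
    ports:
      - "80:80"
    depends_on: [frontend, backend]
  frontend:
    image: openmrs/openmrs-reference-application-3-frontend:latest
  backend:
    image: openmrs/openmrs-reference-application-3-backend:latest
```
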
1 Like

A small update here. The reference application distro build now uses the openmrs-core image for 2.x and 3.x builds. There’s a docker-compose for each build as well, to build and run it locally. Next on my TODO list is:

  1. Build ARM images on our CI in addition to x64. It’s mostly to support developers on Apple’s M1 chips (see the buildx sketch after this list).
  2. Extract out gateway as a general service to be used by any distro.
  3. Adjust npm builds to use pre-release versioning so that we are able to deploy the latest “un-released” version of 3.x in any environment including locally.
  4. Enable build caching on CI for distros (docker buildkit) to speed up builds.
  5. Backport docker build to other openmrs-core branches.
  6. Do release of openmrs-core maintenance branches with the new docker image.
  7. Do release of reference application 2.x and 3.x with the new docker image.
  8. Update docs.
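For item 1 (and the caching in item 4), the CI build could use Docker buildx roughly as follows; the tag and cache reference are illustrative:

```bash
# Build and push a multi-arch (x64 + ARM) image with BuildKit registry caching;
# the image tag and cache ref are illustrative.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --cache-from type=registry,ref=openmrs/openmrs-core:cache \
  --cache-to type=registry,ref=openmrs/openmrs-core:cache,mode=max \
  -t openmrs/openmrs-core:nightly \
  --push .
```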

I expect this all to be done this month. It shall conclude the initial dockerization.

Next I’ll be looking into speeding up backend development by disabling hot-reloading of modules in openmrs-core in favour of faster startup and hot-reloading of classes during development. This is experimental, so it’s hard to give an exact timeline before I do some investigation. I’ll be able to say more early next month.

1 Like