Dockerizing development and deployment of OpenMRS


Hi @raff, thanks especially for this. We at Mekom deploy OpenMRS on ARM devices, so this isn’t only for devs with M1 chips, but also for prod use!

3 Likes

@raff is this still the best image for folks to be using? https://hub.docker.com/r/openmrs/openmrs-distro-platform (Asking because this is what @binduak & team is currently using for the Bahmni upgrade to Platform 2.4.2)

@grace I forgot about the openmrs-distro-platform. It needs to be updated as well. I’ll do that next week.

@raff These images seem to end up with broken file permissions. Trying to run openmrs-distro-referenceapplication from the 3.x branch (with the CI-built images) results in this error:

cp: cannot create directory '/openmrs/data/modules': Permission denied

The error reported on Slack for Initializer seems related.

@ibacher the issue should be fixed now.

A small update on the progress… It took a bit longer than expected to convince our CI to build ARM images, but it is finally working. ARM images are available for: openmrs-core, openmrs-reference-application, and openmrs-reference-application-3-backend/frontend/gateway
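For the curious, multi-arch images like these are typically built with docker buildx; a rough sketch of such a step (not necessarily our exact CI configuration):

# Sketch only: build and push a single multi-arch manifest covering
# AMD64 and ARM64 (image name and tag are examples)
docker buildx create --use
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t openmrs/openmrs-core:nightly \
  --push .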

Please note that I unified the image name of reference application 2.x with 3.x and we now publish openmrs-reference-application instead of openmrs-reference-application-distro. I will apply the same to openmrs-distro-platform, which will become openmrs-platform.

4 Likes

Thanks for the heads up. I’m trying to think of all the documentation where we need to redirect people. Probably om.rs/o3deploy as a start.

So I mentioned this pedantic concern of maintaining our links across our spaghetti-pile of documentation/wiki-pages to @burke today, and he had a good suggestion: we could have standard short-links for each image. That would give us just one place to maintain (i.e. in our short-link admin). How does this idea and these specific suggestions sound to you @raff @ibacher?

  • om.rs/platformimage
  • om.rs/refapp2image
  • om.rs/refapp3image

Am I missing anything? (Totally open to different suggestions.)

@grace, it is a good idea. We should have om.rs/coreimage in addition to the above. I imagine om.rs/refapp3image would point to Docker Hub, and from there we could document that the frontend and gateway images are also needed to complete the stack.

That said, when referring to those images from docker-compose.yml or the command line, one would still need to write openmrs/openmrs-reference-application-3-backend, etc.
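For example, a docker-compose.yml fragment would look something like this (service names and the tag are illustrative):

# Illustrative fragment, not our actual compose file
services:
  gateway:
    image: openmrs/openmrs-reference-application-3-gateway:nightly
  frontend:
    image: openmrs/openmrs-reference-application-3-frontend:nightly
  backend:
    image: openmrs/openmrs-reference-application-3-backend:nightly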

1 Like

Another option (possibly easier to maintain over time, even through future major changes to our Docker approach like the one we are making now) would be to just use om.rs/docker when referring to any of our Docker images in documentation, and leverage the documentation on our Docker Hub pages/images to guide people to the correct image.

In other words, if we can make the OpenMRS landing page on Docker Hub informative and have clear README & image naming conventions for our images, it might be enough to send someone to om.rs/docker with instructions to grab the latest reference application image or the platform image, etc. Then we wouldn’t have to juggle a bunch of image-specific short links.

1 Like

@raff,

Could you join our Platform Team call next week (Wednesday, 10 August, at 17:00 UTC) and walk us through the current state & plans for dockerized deployment of OpenMRS? @ibacher, @dkayiwa, myself, and others would like to understand how things are being built & wired together. In return, we could use what we learn to help improve documentation on Docker Hub and prepare for a broader discussion with organizations in the community using Docker in their deployments (to build consensus on the approach not only for demos & reference application builds, but as an exemplar for anyone distributing/deploying OpenMRS).

On a related note, we’re currently using this Talk thread and the #infra Slack channel for issues with docker builds. As the approach matures, we should probably be creating tickets for issues that come up with builds & docker images. Am I correct in assuming we would create JIRA tickets in the project associated with the particular artifact (e.g., TRUNK, PLAT, O3, etc.)? @ibacher mentioned today he was running into an issue where openmrs-reference-application-3-backend would run for him locally, but was failing to run on the demo server.

2 Likes

@burke, yes, I’ll be happy to join and discuss! I should be able to add some docs around the new Docker images tomorrow, as the setup seems to have stabilised enough now.

A small update… The platform is now being published to Docker Hub with ARM64 and AMD64 images for the master branch. I still need to make a few tweaks to speed up the builds with caching, and then I’ll be ready to apply the same to older maintenance branches of core and the platform.

I’ve also been trying to understand and address issues with O3 for the last few days. They are not directly related to the Docker work, but to O3 development in general. More on this here.

Today, I’ll be increasing storage on our Bamboo agents so that we don’t hit “no space left” errors when building.

1 Like

@burke could you include me in the invite for the call on the 10th?

1 Like

This is a Platform Team call. You’re welcome to join! :slightly_smiling_face: Details (when & how to join) available on om.rs/platform.

@raff,

Could you share an update on where things stand?

From our recent discussion on a Platform call, I believe the consensus was to have:

  • Build the last commit to a “snapshot” tag (or “head”, or whatever is the most intuitive and/or common practice for a stable tag pointing at the last commit).
  • Run smoke tests and only deploy to dev if the smoke tests pass.
  • Run all tests and only tag as an official version (e.g., “dev”) if they pass.

This way, devs could develop against snapshot images if they wish, the dev site doesn’t crash because tests have to pass before it gets deployed, and people creating their own modules/apps can use dev images to work against newest changes without worrying about whether or not the image is going to load or run.
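In shell terms, the flow would be something like this (a sketch of the proposal, not actual Bamboo configuration; image name and tags are examples):

IMAGE=openmrs/openmrs-reference-application-3-backend

# Every commit: build and publish the "snapshot" tag
docker build -t "$IMAGE:snapshot" . && docker push "$IMAGE:snapshot"

# ...run smoke tests against $IMAGE:snapshot; deploy to dev only if they pass...

# ...run the full test suite; promote to the official tag only if it passes...
docker tag "$IMAGE:snapshot" "$IMAGE:dev"
docker push "$IMAGE:dev"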

Our first goal is to get the 3.0 build process not only Dockerized, but popularized enough to avoid challenges like those in this thread.

What do you think about trying out Bamboo Specs for the 3.0 builds – i.e., specifying the CI build pipeline in YAML files in a GitHub repo? @mseaton has been trying this out for PIH with some success. It seems like it could not only help coordinate changes to our plans, but also serve to make the 3.0 build process easier for everyone to see.
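For anyone unfamiliar with them: a Bamboo YAML spec is just a bamboo-specs/bamboo.yml file committed to the repo, along these lines (the plan keys and build script here are invented for illustration):

# Minimal sketch of a bamboo-specs/bamboo.yml; keys and the script task
# are examples only
---
version: 2
plan:
  project-key: O3
  key: DOCKER
  name: Build O3 Docker images
stages:
  - Build stage:
      - Build job
Build job:
  tasks:
    - script:
        - docker build -t openmrs/openmrs-reference-application-3-backend:snapshot .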

Our second goal is to leverage the build process not only to reliably deploy OpenMRS 3.0 for our dev site and development workflows, but use it to deploy OpenMRS 3.0 with OHRI functionality.

Finally, what’s the best way to know whether a given week is your OpenMRS or your OCL week? I think if we could get you regularly on the Platform (Wed, 5p UTC) call – or, if that doesn’t work for you, the TAC (Mon, 4p UTC) call – it could help us coordinate efforts.

2 Likes

Thank you @burke for this. I will attend the forthcoming call.

@raff, here’s a TRUNK-MASTER build failure that @ibacher notes occurs when running inside Docker, but not when running natively. Can you tell what we might be missing within the Docker environments?

Alright… figured this one out. We have a slightly odd way of building core in the Docker environment: instead of running a single mvn clean install for the whole project, we run mvn clean install for each subproject, one at a time. This allows us to take advantage of Docker’s layers so that, e.g., if the build fails in one subproject, Docker can effectively “resume” the build from the point that failed, hopefully making builds faster.

However, doing things this way left out one important thing: the root POM. Like most Maven multi-module projects, OpenMRS Core’s root POM is primarily used for configuring shared dependencies, plugins, etc. Importantly, for this case, we heavily use it to ensure all sub-projects use the same versions of shared dependencies.

Recently, a new dependency was added to OpenMRS Core, to allow us to serialise the java.time classes to JSON correctly. This was done, correctly, by adding the dependency and the version to the root POM and then adding the dependency without the version to the child POM (the -api submodule).
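For illustration, the pattern looks like this (using the Jackson java.time module as a stand-in for the actual artifact):

<!-- Root POM: pins the version once for every sub-module (artifact shown
     is a stand-in for the actual dependency) -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.datatype</groupId>
      <artifactId>jackson-datatype-jsr310</artifactId>
      <version>${jackson.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- Child (-api) POM: version omitted, inherited from the root POM -->
<dependency>
  <groupId>com.fasterxml.jackson.datatype</groupId>
  <artifactId>jackson-datatype-jsr310</artifactId>
</dependency>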

This is where things get weird: because we resolve dependencies project-wide before running the builds, a version of the new dependency was downloaded, and Maven seems to have used that to build the api subproject happily. Unfortunately, the next project, web, actually depends on api to provide most of its dependencies (there are a large number of transitive dependencies). At this point, we see this WARNING in the log:

[WARNING] The POM for org.openmrs.api:openmrs-api:jar:2.6.0-SNAPSHOT is invalid, transitive dependencies (if any) will not be available, enable debug logging for more details

The api submodule is invalid because there is no version specified for the newly added dependency: while the parent project is resolved properly when building from the child project, to resolve the api POM on its own, Maven looks for the parent POM of the api module, and the last one that was cached doesn’t provide a version for that dependency.

Anyway, that’s a very long explanation for why I added this one line, which fixes the problem: by building the parent project first (again, as a separate step), we ensure that these kinds of transitive dependencies are properly resolved.
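Roughly, the build steps in the Dockerfile now look like this (base image and module list simplified; a sketch rather than the exact file):

# Simplified sketch; base image and module list are illustrative
FROM maven:3.8-openjdk-8 AS build
WORKDIR /openmrs_core
COPY . .

# The fix: install the root POM on its own (non-recursively) first, so each
# sub-module resolves managed dependency versions from a valid parent POM
RUN mvn --non-recursive clean install

# One RUN per sub-module, so Docker caches each as its own layer and a
# failed build effectively "resumes" from the module that broke
RUN mvn -pl api clean install
RUN mvn -pl web clean install
RUN mvn -pl webapp clean install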

4 Likes

@burke today I should push improvements for the development images so that Jetty starts up fully initialised and ready for development. Alongside that, I’ve made changes to the startup scripts and renamed some of the environment variables to unify them with docker-compose.

By tomorrow I’ll make adjustments to tag the latest build as “head” and add a build step which runs a few REST calls as smoke tests and applies the “nightly” tag only if they pass. The “nightly” tag is typically used on Docker Hub for the latest build. Later we could add some Selenium tests as well to check a few pages in a browser environment.
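The smoke-test step could be as simple as a couple of REST calls with curl, something like this (host, endpoints and credentials are illustrative, not the actual Bamboo script):

#!/bin/sh
# Illustrative smoke test: fail the build unless basic REST endpoints respond
set -e
BASE_URL="http://localhost:8080/openmrs/ws/rest/v1"

# The server is up and the REST API responds
curl --fail --silent "$BASE_URL/session" > /dev/null

# An authenticated call returns data (demo credentials, for illustration)
curl --fail --silent -u admin:Admin123 "$BASE_URL/location?limit=1" > /dev/null

echo "Smoke tests passed"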

Yes, Bamboo Specs look great and feel familiar enough to start using right away, as they follow the same proven approach to setting up CI builds known from Travis CI, GitHub Actions, etc. It’ll take some time to migrate all builds, but I could convert some to start with.

Things feel stable enough now, so I will start documenting all the changes to the Docker setup tomorrow.

You should be able to see my OpenMRS calendar here: https://calendar.google.com/calendar/u/0?cid=cmFmZkBvcGVubXJzLm9yZw I’ve marked all the days I have for OpenMRS. We could add it to the OpenMRS calendar if that helps.

I’m not available on Mondays, but I could make the Platform call regularly. It’s a bit late here, so ideally I’d join bi-weekly on my OpenMRS weeks and occasionally on my OCL weeks. It will be easier for me once we shift to winter time in November and the call starts an hour earlier. I won’t be able to attend next week as I’m on vacation, but I’ll make it on Sep 28th.

1 Like

:heart_eyes: Awesome - @ibacher and I have been talking about the need for differentiated “smoke tests”, specifically for the O3 frontend.

Two questions:

  • Are you referring to the O3 frontend or platform-core or…?
  • Does this set-up rebuild anytime someone merges a PR to one of the main O3 repos (esm-core, esm-patientchart, or esm-patientmanagement)? Or do we need to manually click the go button in Bamboo for a build/“release” to the dev3 environment?

I’m sorry I forgot to hit the reply button before going off.

So I’ll start by adding basic REST tests for O3, but they could be used for other distros as well (RA 2.x, platform).

It deploys only the backend, and for now it needs to be triggered manually if dependencies change. The frontend is still served from a CDN and updated automatically whenever any O3 repo changes. Eventually we will deploy the frontend via Bamboo as well, but that can only happen once we make it discover O3 repo changes without having to trigger anything manually.

1 Like