Using pre-filled Docker images for running e2e tests

Over the past few weeks, we have encountered a persistent bug in the e2e test process for the esm-patient-chart repository. Despite successful local test runs, the GitHub Actions builds consistently fail after approximately 30 minutes. Upon investigating the issue and reviewing the failed workflow logs, it became apparent that the tests were failing at random steps without providing clear error messages. However, annotations in the overview of each failed workflow indicated issues such as connection loss with the GitHub Actions runner and occasional crashes.

Based on this information, it is likely that the GitHub Actions Runner is crashing due to resource limitations. The default runner, with its 2-core CPU and 7 GB of RAM, is insufficient to handle the simultaneous execution of all 17 esm apps in the esm-patient-chart repository.

To address this issue, we have been discussing possible solutions in this Slack thread. As the thread has become lengthy and challenging to follow, I created this Talk thread to continue the discussion.

Thanks to @ibacher, we identified two main steps to resolve this issue:

  1. When running the tests, use a dynamically built version of the frontend image that includes only the apps in the current repository, along with the login app and primary navigation app. This requires building a frontend Docker image during the e2e tests.
  2. Use pre-filled Docker images for the backend and database, which will reduce the time required to start the server (currently around 20 minutes).

I am currently working on the second part of the solution and will be using this thread for my updates.

cc: @ibacher @jayasanka @dkigen @anjisvj @randila



I was able to build two pre-filled Docker images for the backend and the database, and tested them with the esm-patient-management e2e test workflow. It was successful!

It now takes less than 2 minutes to start the OpenMRS instance, compared to the 15-20 minutes it took before.
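To know when the instance is up, the workflow has to wait for the backend to respond. Here is a minimal sketch of such a readiness check; the health URL shown in the comment is an assumption, not the actual endpoint the workflow uses:

```shell
#!/usr/bin/env bash
# Poll a URL once per second until it responds with success,
# or give up after a timeout (in seconds).
wait_for_url() {
  local url="$1" timeout="${2:-300}" elapsed=0
  until curl -sf -o /dev/null "$url"; do
    sleep 1
    elapsed=$((elapsed + 1))
    if [ "$elapsed" -ge "$timeout" ]; then
      return 1
    fi
  done
  return 0
}

# Example usage (assumed endpoint; adjust to the backend's actual health URL):
# wait_for_url "http://localhost:8080/openmrs/ws/rest/v1/session" 120
```

With the pre-filled images, this loop should succeed in well under two minutes instead of timing out while data generation runs.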

The next step would be to automate the image-building process and optimize the Docker image size.



I was able to write a Bamboo job that runs the backend environment, waits for data generation to complete, commits the snapshots, and pushes them to Docker Hub. The Docker images can be found here: db | backend.

This Bamboo job takes around 20 minutes to execute.

This is what happens inside the Bamboo job:

  1. It runs the OpenMRS backend environment without volumes using the docker-compose-without-volumes.yml Docker Compose file.
  2. It waits until the backend is started and data generation is completed.
  3. It commits the snapshots of the two Docker containers (backend and db).
  4. It tags the snapshots as “:nightly-with-data”.
  5. It pushes the images to Docker Hub.
  6. It stops everything using docker-compose down.
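The steps above can be sketched as a shell script. The container and image names below are assumptions for illustration (the real names come from the Compose file and the Docker Hub repositories), and the wait step is elided:

```shell
#!/usr/bin/env bash
# Sketch of the Bamboo job: snapshot a running backend + db and publish them.
set -euo pipefail

TAG="nightly-with-data"               # tag from step 4 above
DB_CONTAINER="openmrs-db"             # assumed container names; check `docker ps`
BACKEND_CONTAINER="openmrs-backend"
DB_IMAGE="example/openmrs-db"         # hypothetical Docker Hub repo names
BACKEND_IMAGE="example/openmrs-backend"

publish_prefilled_images() {
  # 1-2. Start the stack without volumes and wait for data generation.
  docker compose -f docker-compose-without-volumes.yml up -d
  # (readiness wait elided; e.g. poll the backend until it responds)

  # 3-4. Snapshot the running containers as tagged images.
  docker commit "$DB_CONTAINER"      "$DB_IMAGE:$TAG"
  docker commit "$BACKEND_CONTAINER" "$BACKEND_IMAGE:$TAG"

  # 5. Push the snapshot images to Docker Hub.
  docker push "$DB_IMAGE:$TAG"
  docker push "$BACKEND_IMAGE:$TAG"

  # 6. Tear everything down.
  docker compose -f docker-compose-without-volumes.yml down
}
```

Because `docker commit` captures the container's filesystem layer, the generated demo data ends up baked into the published images.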

After that, we can use a Docker Compose file like this to create the backend environment when running end-to-end tests. With this approach, it takes less than 2 minutes to spin up the server, whereas previously it took around 15-20 minutes.
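Such a Compose file might look like the following sketch; the image names and port mapping here are assumptions for illustration:

```yaml
services:
  db:
    # Pre-filled database snapshot (hypothetical image name)
    image: example/openmrs-db:nightly-with-data
  backend:
    # Pre-filled backend snapshot (hypothetical image name)
    image: example/openmrs-backend:nightly-with-data
    depends_on:
      - db
    ports:
      - "8080:8080"
```

Because the data is baked into the images, no volumes are declared, and the containers start with the data already in place.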

However, I have a few things to clarify regarding this.

  1. Where should we keep the above-mentioned docker-compose-without-volumes.yml Docker Compose file? Can we keep it in the openmrs-distro-referenceapplication GitHub repository? (I used this file because we are only running the backend and db, without the gateway and frontend.)

  2. Currently, the Bamboo job is created in an example Bamboo plan. Do we need to create a separate new Bamboo plan for creating these pre-filled Docker images? Or can we do it within the Distribution 3.x Bamboo plan?

By the way, I’m currently using my Docker Hub account to store the images. Later, we can update the Bamboo job to push them to OpenMRS’s Docker Hub.

@ibacher and @jayasanka, could you please share your suggestions and comments on this?


So, in my ideal world we’d use the exact same stack as the docker-compose.yml and just provide a secondary overrides file that removes the volumes, but that doesn’t seem possible. Barring that, I guess adding a new docker-compose stack will work. (@Raff do you have any opinion on a naming convention we should follow here?)

This should definitely go in openmrs-distro-referenceapplication and just be part of the standard reference application build. My preference is we trigger these images to be built after the current stack is built and deployed.

Thanks for the work on this, @piumal1999! I’m excited to see this actually become a reality!


Shall I add the automation part as a separate stage in this Distribution 3.x Bamboo plan?

That way, the pre-filled Docker images will be built and pushed after each successful run of the Deploy stage.

Btw, I opened a PR for the docker-compose file. Please review and merge.

I also created an example bamboo stage and job here:

When it is reviewed and approved, we can copy it to the Distribution 3.x plan.


Hi @ibacher and @jayasanka ,

Will you be able to review this Bamboo job? If it is okay, shall I move the job to the Reference Application - Distribution 3.x Bamboo plan?



  • The Bamboo stage was added to the main Distribution 3.x plan, and the pre-filled images can be found on the OpenMRS Docker Hub.
  • The e2e workflows of all the main repositories were updated to utilize the pre-filled images.
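In each repository, the workflow change essentially amounts to starting the backend from the pre-filled tags instead of generating data from scratch. A rough sketch of such a step (the step name, file path, and variable are assumptions, not the actual workflow contents):

```yaml
- name: Start backend from pre-filled images
  run: docker compose -f docker-compose.yml up -d
  env:
    # Hypothetical variable selecting the pre-filled image tag
    TAG: nightly-with-data
```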

You can find the lightning talk I delivered about this at the OpenMRS Conference 2023 here:


This is awesome, @piumal1999! :star_struck:
