SDK-docker: Edit docker-compose.yml prior to 'docker-compose build'

Hi @SolDevelo team!

Thanks for the amazing work done on the SDK and Docker. Before we can actually use it directly on our CI server, we need more control over the YAML configuration for docker-compose.

(Q.) How can we specify the <distro> string in docker-compose.yml? It seems to be generated somehow from our file, but ideally we would like to control its exact value before launching docker-compose build.

The reason is that we will run several instances of the same distro at the same time, so I guess we need to make sure there are no name collisions between the Docker containers.

Heads up, I was working on something similar a little while ago. It’s still in review, but I suppose it will be merged soon~ish.

That will make the Docker image a little more configurable by itself, in isolation from docker-compose. So the Docker image can be a proper build artefact which is configurable independently. You know, docker build and docker push.

You will potentially be able to deploy the Docker image to a Docker registry (Docker Hub or an internal registry) or build it independently, without ‘docker-compose build’. And then you will be able to configure several different docker-compose files based on the same deployed image.

Here is an example of that using refapp:2.6: the Docker image you see there was generated by the SDK (the code on the PR) and pushed to Docker Hub. It wasn’t committed anywhere.

I can tell you what I plan to do from CI on our internal OpenMRS environments:

  • Each build that we want to deploy will upload a Docker image to Docker Hub, versioned with a tag, like ‘my-distro:nightly-56’ for build 56.
  • Each release will deploy a proper version to Docker Hub, like ‘my-distro:2.6’ for version 2.6.

Each of my environments will have a docker-compose file, committed somewhere. I still don’t know whether I’m going to move a tag on Docker Hub (let’s say, tag my-distro:nightly-57 as my-distro:qa), commit the version to the docker-compose file, or use a ‘template’ and replace a placeholder with the version received from CI during deployment. Still not sure.
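The build/release flow above could look something like this in a CI script. This is a hypothetical sketch; ‘my-distro’ and the build/version numbers are placeholders, not actual SDK output:

```shell
# Every CI build pushes an image tagged with the build number
# (here build 56; the name 'my-distro' is a placeholder):
docker build -t my-distro:nightly-56 .
docker push my-distro:nightly-56

# A release gets a proper version tag instead:
docker tag my-distro:nightly-56 my-distro:2.6
docker push my-distro:2.6
```

Whether the deployment step then moves a tag (e.g., re-tagging nightly-57 as qa) or substitutes the tag into a compose file is exactly the open question above.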

But the docker-compose file used in our environments will not be the one that the SDK generated. It’s loosely based on it, modified to what the environment needs. From the SDK I will only use the Docker image, not the compose file.

Of course you don’t need to push to Docker Hub; it can be only local images. But overall, you want the very same image to be used in tests AND production; you don’t want to recreate it.

If you are running multiple CI agents on the same machine sharing the docker environment, you can run into quite a lot of problems…

Based on that, I’d always build a different Docker image named after the build number (let’s say, ‘awesome-distro:5’ for build 5). Remember that you can tag the same image multiple times, for example as ‘awesome-distro:local’.
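Tags are cheap: they are just names pointing at the same image ID, so one build can carry several of them. A sketch with the hypothetical ‘awesome-distro’ name from above:

```shell
# Build once, named after the CI build number:
docker build -t awesome-distro:5 .

# Add extra tags pointing at the very same image ID:
docker tag awesome-distro:5 awesome-distro:local
docker tag awesome-distro:5 awesome-distro:latest

# 'docker images' will now list all three tags sharing one IMAGE ID.
```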

Your docker-compose file would quite possibly not live in the same repo.
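For example, an environment’s own repo could hold a compose file that only consumes the image CI pushed, pinned to a tag. A hypothetical sketch, not SDK output:

```yaml
# docker-compose.yml committed in the environment's repo (hypothetical).
# Note: no 'build' section; it only references the image pushed by CI.
version: '2'
services:
  web:
    image: my-distro:nightly-56   # tag updated/committed per deployment
    ports:
      - "8080:8080"
```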

That’s how I plan on deploying our disposable environments: I’ll deploy the ‘release candidates’ docker images to docker hub, and each environment will have a different docker-compose file.


@burke suggested at some point to not specify container names at all and let Docker generate them based on folder names… It’s actually the preferred approach for Docker, as it allows scaling. Could you please create an issue in the SDK to get rid of custom container names?

Actually, the “best practice” (or convention) I’ve seen with Docker Compose is to use simple layer-specific names – e.g., db, api, web, etc. – and let Docker Compose take care of making the container names unique. This allows you to refer to services by simple host names within containers (e.g., API layer connects to db:3306). Docker Compose combines the host folder + service name + number to ensure uniqueness (e.g., docker-compose up in folder myopenmrs would produce containers like myopenmrs_db_1, myopenmrs_api_1, …).
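As a minimal illustration of that convention (hypothetical services, not the SDK’s actual file):

```yaml
# In a folder named 'myopenmrs', 'docker-compose up' creates containers
# myopenmrs_db_1 and myopenmrs_web_1 automatically; no container_name needed.
version: '2'
services:
  db:
    image: mysql:5.6
  web:
    image: tomcat:8
    links:
      - db        # the web container reaches MySQL at host name db:3306
```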


@burke, container name != service name. We need to get rid of custom container names and we’ll have auto-generated container names as you described. Any reason not to continue to use service names like openmrs-yourdistro-mysql, openmrs-yourdistro, etc?

Out of curiosity, @mksd, are you planning to run several instances of the same distro on the same host?

If you are running it from CI, wouldn’t it be only a single build/docker-compose per agent/slave?

Or do you have multiple agents running on a single machine?

Yes, basically the same CI agent (Jenkins in our case) runs both a ‘dev’ and a ‘staging’ instance of the same distro at the same time. When I say “same distro”, yes it is the same distro but (most of the time) different versions of the same distro.

‘dev’ means with SNAPSHOT artifacts. ‘staging’ means without SNAPSHOT artifacts.

@raff, looking at docker-compose.yml, do you mean that container_name should be left out and that service names could be non-unique at all (and explicit)? Like this:

    version: '2'

    services:
      openmrs-<distro>-mysql:
        image: mysql:5.6
        environment:
          - MYSQL_DATABASE=openmrs
          - MYSQL_ROOT_PASSWORD=Admin123
        volumes:
          - openmrs-<distro>-mysql-data:/var/lib/mysql
          # pass dump file to mysql image
          - ./dbdump:/docker-entrypoint-initdb.d

      openmrs-<distro>:
        build:
          context: .
          dockerfile: Dockerfile
        entrypoint: /usr/local/tomcat/
        depends_on:
          - openmrs-<distro>-mysql
        ports:
          - "8080:8080"
        links:
          - openmrs-<distro>-mysql:mysql

    volumes:
      openmrs-<distro>-mysql-data:

I have no idea if that’s a valid YAML config for Docker Compose… but if it is, I’ll make a PR for this. For instance, can the volume name then simply be ‘data’?

What about the ability to specify the port? Will we have to edit docker-compose.yml prior to running the docker-compose commands? Is that the only way to do this?

Typically, people use more generic service names (like db and web) instead of the names of the specific apps, so – for example – the containers can find each other via service names and one can easily substitute Postgres for the db service or nginx for the web layer without changing service names.

For configuration, you can use environment variables (e.g., a .env file). There is also a pattern, described in the Docker docs, of using a separate production.yml that you could adopt if settings differ significantly between dev (docker-compose.yml) and staging (staging.yml, invoked via docker-compose -f staging.yml up). If you follow these conventions, you shouldn’t have to edit the yml file(s) between runs.
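A sketch of the environment-variable pattern for the port question above (the variable name TOMCAT_PORT is an assumption, not an established convention):

```yaml
# docker-compose.yml -- the published port comes from the environment
# or from a .env file sitting next to this compose file:
version: '2'
services:
  web:
    build: .
    ports:
      - "${TOMCAT_PORT}:8080"
```

With a .env file containing `TOMCAT_PORT=8081`, `docker-compose up` would publish port 8081; no edit to the yml is needed between runs.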

Sure, I got the idea; I rather wanted to make sure that the suggested .yml file was valid for a possible PR, in particular that it is fine to not specify a container name at all. But yes, that makes sense: web, db, data (I guess that applies to volumes too).

Thanks for the tip about using docker-compose -f with different configs for dev and staging (and possibly prod, but not for us for now anyway). That’s really helpful! I will try it out and report back.

A docker-compose 101 question, I suppose, but I just tried this to clean everything up:

docker-compose rm -v

(Q.) And despite the ‘-v’ the data volume remained dangling, any idea why?

P.S. This then worked to get rid of it:

docker volume rm `docker volume ls -q -f dangling=true`

The documentation for docker-compose rm -v says it will “Remove any anonymous volumes attached to containers”. A named data volume isn’t anonymous, so it survives, and docker volume rm $(docker volume ls -qf dangling=true) is needed to clean it up.

Some good news: docker system prune is coming with Docker 1.13.[ref]