Releasing O3 Ref App 3.0.0-alpha

@mksd how do you deploy the frontend these days? I think we should publish the frontend artifacts as a standalone zip to Maven. You would then have the choice to use our Docker image or deploy it by other means. I can make that happen by Friday.

So my strong preference is not to try to bundle the frontend into some kind of Maven artifact. It's not clear to me that there's any advantage to doing that, and it would start to make the frontend less flexible. We have tooling to manage downloading the frontend modules in a modular fashion, and we should be using that.

A few notes here on what we're doing with the frontend:

  1. It's a SPA (single-page application). As such, there is a minimal amount of environment-specific information that needs to be embedded in it. This covers things like: the root URL of the SPA (used for internal navigation), the URL to reach the backend, and which frontend modules should be loaded and where to find them (the importmap). Because the frontend is, at the end of the day, static files without a server component, it's hard to resolve these at run time. In the Dockerised build, we can do that by filling those constants with environment variables that are computed at container start. In non-Dockerised environments, though, it's not clear to me that we can do that in a way that can be resolved at runtime.
  2. The frontend modules themselves are published as NPM packages and we have a tool (the assemble command) that just downloads them into an appropriate format and builds the importmap.
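For the curious, here is a rough sketch of what that looks like in practice (exact flags may vary between tooling versions, so treat this as illustrative rather than authoritative):

# Download the frontend modules declared in spa-build-config.json and
# generate the importmap into the target directory.
npx openmrs@next assemble --config spa-build-config.json --target ./spa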

In a sense, though, all this is already kind of solved, insofar as the SDK already has wrappers to work with the frontend CLI tools from the distro.properties file.
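To illustrate (these property names are my best recollection of the SDK's O3 support, and the versions are made up, so double-check against the SDK docs):

# Sketch: SPA settings living alongside backend modules in distro.properties.
cat >> distro.properties <<'EOF'
omod.webservices.rest=2.38.0
spa.core=5.1.0
spa.frontendModules.@openmrs/esm-login-app=4.3.1
EOF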

Fair points @ibacher. One would need to run the startup bash script to adjust the variables. We would need to slightly refactor it by extracting the nginx startup command into a separate file. The only use I see for this zipped frontend is if you want to run the plain O3 frontend without any modifications. Otherwise you still need to do a proper build to create the importmap.
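For concreteness, a minimal sketch of that refactor (the file names, template, and variable names here are all hypothetical):

#!/bin/bash
# startup.sh: substitute the runtime settings into the nginx config,
# then hand off to the extracted nginx startup script.
envsubst '${SPA_PATH} ${API_URL}' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf
exec /usr/local/bin/start-nginx.sh  # hypothetical extracted script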

@mksd maybe you can shed some light on your specific scenario.

@achachiez and @mksrom based on the three posts above :point_up:, could you weigh in and describe to @raff exactly what we need to accommodate 1) our Ozone CI/CD needs and 2) the Ozone & O3 release process.


I can speak to 2) myself: we certainly need the distro build to pull the frontend artefacts from NPM as per the versions specified in the distro (wherever that is and whichever tool we use to pull those artefacts). In which format they are pulled, and whether the importmap should be built or not, I defer to the others to figure out.

I would agree with @ibacher that frontend artefacts should be packaged in a standard frontend way and published using standard frontend tools (rather than Mavenising them, for instance).

Update as of Nov 24th:


@ibacher, regarding all the steps you described last week that would get us back to a point where the distro is entirely self-sufficient, specifically in that it would fetch and package the frontend artefacts: could you create tickets that chunk the work to be done, so that we can groom them together and plan to resolve them?

This part is the one part that more-or-less completely works, at least as far as I'm concerned. The actual remaining steps I see primarily surround tooling and metadata.

What I was trying to stress is that work needs to be done, primarily on the SDK, to handle a few things:

  1. We need to integrate the SDK with the new Docker setup stuff @raff has been working on. I'm not sure about the best way forward here, and I'd prefer to defer to @raff... I can see a few possible approaches: e.g., we modify build-distro to just output Dockerfiles that are the same as the new Dockerfiles we're using to run things, or we drop the SDK altogether, though in that case there's a bit of work to be done to come up with something like the distro.properties file that describes a distribution.
  2. At least for the RefApp, we need a way to segregate some kinds of metadata from the product. Right now, the 3.x RefApp metadata is a mishmash of stuff that's necessary for things to function, stuff that's nice-to-have but could be replaced with other implementation-specific data, and a small amount of stuff that's just for the demo environments. This may not be a real concern for implementations per se, but it's something we need the ability to segregate out, at least for the reference application. Ideally, we'd publish these metadata packages to the Maven repo and be able to specify them in distro.properties or a successor format.
  3. Right now we can build both x64 and ARM images, but there's no clear way to promote images for both architectures: the docker tag commands essentially only work on the base Docker machine, which means that the current test and demo images are x64 only.

Re 1. The new Dockerfiles rely on build-distro to fetch all artifacts from distro.properties. We don't want to drop that. We should update the SDK to output the new Dockerfiles. I wouldn't create the new Dockerfiles as part of the build-distro goal, but rather as a new option in the create-project goal, i.e. 'Distribution'. The new Dockerfiles won't change between builds (thus there is no point in recreating them with build-distro) and they can be used to run builds and publish the produced images (the preferred approach).
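For reference, that keeps the familiar entry point (goal and flags as in the current SDK docs; the output directory name is illustrative):

# Fetch all artifacts declared in distro.properties and lay out the
# Docker build context; the static Dockerfiles themselves would come
# from the new create-project option.
mvn openmrs-sdk:build-distro -Ddistro=distro.properties -Ddir=docker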

Re 2. It deserves a separate Talk post, so let me open up a discussion in the coming days.

Re 3. It's an easy fix. Instead of docker tag we need to use:

docker buildx imagetools create --tag "$NEW_TAG" "$OLD_TAG"
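For example, to promote a multi-arch dev image to a release tag (the image names below are purely illustrative):

OLD_TAG=openmrs/openmrs-reference-application-3-frontend:dev3
NEW_TAG=openmrs/openmrs-reference-application-3-frontend:3.0.0-alpha
# Unlike docker tag, this copies the whole multi-arch manifest list,
# so both the x64 and ARM images get promoted.
docker buildx imagetools create --tag "$NEW_TAG" "$OLD_TAG"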

Agreed. First of all, this is not an impediment to the alpha release, and it deserves its own thread. There are multiple ways to address this effectively, and it should be discussed in that dedicated thread.

@ibacher and @ruhanga, any chance you can sync about this at the TAC today? Could it be put on the agenda?

Cc @grace :point_up:


Yes, we can discuss it as part of the TAC call.


Alright, I've added a manifest file to the frontend that declares the versions of modules added to the importmap, which basically means everything but the app shell version itself. Technically, though, I think that ought to be declared outside the scope of the frontend stuff... In the previous iteration, it was passed to the SDK via the special spa.coreVersion property.

I think, with @raff's great pointers above, we'll have resolved most of the outstanding issues towards an alpha release, but I'm not really sure what's expected.


Thanks @ibacher for working on this :muscle:

Btw there is more :slight_smile: @mksrom will sync with you, but I think we need to find a way to make distro inheritance easier. Depending on how the design discussions go, this manifest may become additive, e.g. a child distro of the Ref App would only state its delta with the Ref App through such a manifest. Just food for thought, please shout :gun: :wink:


This is exciting - @ibacher does this give us the ability to control the versions of stuff that goes into a given environment (e.g. test3, o3)?

Can you also share a link to where this manifest lives in GitHub for the O3 RefApp?

Not yet, though it's a building block for that capability.

It's not part of the source (it's a product of running the actual build), but you can find a live version of it for dev3 here: https://dev3.openmrs.org/openmrs/spa/spa-module-versions.json.
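If you just want to peek at it (the shape described in the comment is what I would expect from the description above, not a documented schema):

# Fetch the live module-versions manifest from dev3.
curl -s https://dev3.openmrs.org/openmrs/spa/spa-module-versions.json
# Assumed shape: a map of module names to resolved versions, e.g.
# {"@openmrs/esm-login-app": "4.3.1", ...}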


@ibacher @raff how are multiarch builds handled? Are we using buildx? From my understanding, with buildx we should be able to build x64 and ARM images in one go.


@raff we are working on having a fixed release point of Ozone, and since we depend on O3, we need a 3.0.0-alpha tag in https://github.com/openmrs/openmrs-distro-referenceapplication where spa-build-config.json has fixed ESM versions and the distro's pom contains fixed versions of artifacts that are known to work with the frontend. We should then build Docker images that match this tag.
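Concretely, the tag would pin exact versions rather than ranges, something like this (the frontendModules key reflects my reading of the RefApp's spa-build-config.json; module names and versions are illustrative):

# Sketch of a pinned spa-build-config.json for the 3.0.0-alpha tag.
cat > spa-build-config.json <<'EOF'
{
  "frontendModules": {
    "@openmrs/esm-login-app": "4.3.1",
    "@openmrs/esm-patient-chart-app": "4.2.0"
  }
}
EOF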

Our second requirement is the ability to build a custom distribution on top of what we already have in spa-build-config.json, adding custom ESMs. @ibacher does our tooling already allow us to provide multiple spa-build-config.json files?

Note: The second requirement is not of immediate concern as we rely on the published frontend image.


Yes, we build using buildx and produce x64 and ARM images in one go.
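For anyone following along, the one-go build looks roughly like this (image name illustrative):

# Build for both architectures and push a single multi-arch image.
docker buildx build --platform linux/amd64,linux/arm64 \
  -t openmrs/openmrs-reference-application-3-frontend:dev3 --push .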

We discussed this on the Platform call today. We think it would be best to create a release branch and set the fixed versions there. It will require releasing all ESMs along the way. Once we are done releasing all ESMs and have all versions fixed in the new release branch, we will create a tag from that branch. Does this approach sound good to you?
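In git terms, a minimal sketch of that flow (branch and tag names follow the convention discussed in this thread):

# Cut the release branch, pin all ESM and artifact versions on it,
# then tag the release from that branch.
git checkout -b 3.0.x
git commit -am "Pin ESM and artifact versions for 3.0.0-alpha"
git tag 3.0.0-alpha
git push origin 3.0.x --tags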

@burke and @raff, so if we create a branch 3.0.x to work our way towards the 3.0.0 final release, the main branch becomes the next development branch (presumably for 3.1.0-SNAPSHOT?).

Then I think we have a testing/QA issue for the release, because we don't have any bleeding-edge environment(s) to look at the evolution of the 3.0.x branch.

I feel like this is all overkill because we don't quite have a roadmap for something like 3.1.x, you know what I mean? Right now we just need to stabilise things so that o3.openmrs.org can be reliably used.

Also, as a general matter, I don't like this idea too much right now, at least for this very first release, because it kind of sidelines the release process to its own branch, and people will keep focusing on the bleeding-edge work happening on 3.1.0-SNAPSHOT (which is a bit out of control, IMHO). We all need to focus on getting something stable out together. We basically need a code freeze.