Releasing O3 Ref App 3.0.0-alpha

@mksd how do you deploy the frontend these days? I think we should publish the frontend artifacts as a standalone zip to Maven. You would then have the choice to use our Docker image or deploy it by other means. I can make that happen by Friday.

So my strong preference is not to try to bundle the frontend into some kind of Maven artifact. It’s not clear to me that there’s any advantage to doing that, and it would start to make the frontend less flexible. We have tooling to manage downloading the frontend modules in a modular fashion and we should be using that.

A few notes here on what we’re doing with the frontend:

  1. It’s a single-page application (SPA). As such, a minimal amount of environment-specific information needs to be embedded in it. This covers things like: the root URL of the SPA (used for internal navigation), the URL to reach the backend, and which frontend modules should be loaded and where to find them (the importmap). Because the frontend is, at the end of the day, static files without a server component, it’s hard to resolve these at run time. In the Dockerised build, we do that by filling those constants from environment variables computed at container start. In non-Dockerised environments, though, it’s not clear to me that we can do that in a way that can be resolved at runtime.
  2. The frontend modules themselves are published as NPM packages and we have a tool (the assemble command) that just downloads them into an appropriate format and builds the importmap.
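To make the container-start substitution in point 1 concrete, here is a minimal sketch. The template file, placeholder tokens, and variable names below are all hypothetical, not the actual layout of the O3 image:

```shell
#!/bin/sh
# Sketch of a container entrypoint step that bakes environment-specific
# values into the SPA's static config at startup. File and variable names
# are invented for illustration.
set -eu

SPA_PATH="${SPA_PATH:-/openmrs/spa}"   # root URL used for internal navigation
API_URL="${API_URL:-/openmrs}"         # URL to reach the backend

# A template shipped in the image; the placeholder tokens are hypothetical.
printf '{"spaPath":"__SPA_PATH__","apiUrl":"__API_URL__"}\n' > config.json.template

# At container start, fill the placeholders from the environment.
sed -e "s|__SPA_PATH__|${SPA_PATH}|g" \
    -e "s|__API_URL__|${API_URL}|g" \
    config.json.template > config.json

cat config.json
```

Without a server component this substitution has to happen at some well-defined moment before the static files are served, which is exactly what the container entrypoint provides and what a plain file deployment lacks.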

In a sense, though, all this is already kind of solved insofar as the SDK already has wrappers to work with the frontend CLI tools from the file.

Fair points @ibacher. One would need to run the startup bash script to adjust the variables. We would need to slightly refactor it by extracting the nginx startup command into a separate file. The only usage I see of this zipped frontend is if you want to run the plain O3 frontend without any modifications. Otherwise you still need to do a proper build to create the importmap.
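The refactoring described above could look something like this sketch; all file names are hypothetical and the real startup script will differ:

```shell
#!/bin/sh
# Sketch of splitting the startup script so the variable-substitution step
# can be reused outside Docker. File names here are invented.
set -eu

# The reusable substitution step, extracted from the entrypoint.
cat > substitute-env.sh <<'EOF'
#!/bin/sh
set -eu
# Rewrite the SPA config template with runtime values.
sed "s|__API_URL__|${API_URL:-/openmrs}|g" config.json.template > config.json
EOF

# The slimmed-down entrypoint: substitute, then hand off to nginx.
cat > startup.sh <<'EOF'
#!/bin/sh
set -eu
./substitute-env.sh
exec nginx -g 'daemon off;'
EOF

chmod +x substitute-env.sh startup.sh
```

A non-Docker deployment could then run substitute-env.sh on its own and serve the static files with whatever web server it likes.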

@mksd maybe you can shed some light on your specific scenario.

@achachiez and @mksrom based on the three posts above :point_up:, could you weigh in and describe to @raff what we need exactly to accommodate 1) our Ozone CI/CD needs and 2) the Ozone & O3 release process.

I can speak more about 2) myself: for sure we need the distro build to pull the frontend artefacts from NPM as per their versions specified in the distro (wherever that is and whichever tool we use to pull those artefacts). In which format they are pulled, and whether the importmap should be built or not, I defer to the others to figure out.

I would agree with @ibacher that frontend artefacts should be packaged in a frontend standard way and published using standard frontend tools (rather than Mavenising them for instance).

Update as of Nov 24th:


@ibacher, regarding all the steps that you described last week that would get us back to a point where the distro is entirely self-sufficient, specifically insofar as it would fetch and package the frontend artefacts: could you create tickets that chunk the work to be done, so that we can groom them together and plan to resolve them?

This part is the one part that more-or-less completely works, at least as far as I’m concerned. The actual remaining steps I see primarily surround tooling and metadata.

What I was trying to stress is that work needs to be done, primarily on the SDK, to handle a few things:

  1. We need to integrate the SDK with the new Docker setup @raff has been working on. I’m not sure about the best way forward here, and I’d prefer to defer to @raff… I can see a few possible approaches, e.g., we modify build-distro to output Dockerfiles matching the new ones we’re using to run things, or we drop the SDK altogether, though in that case there’s a bit of work to be done to come up with something like the file that describes a distribution.
  2. At least for the RefApp, we need a way to segregate some kinds of metadata from the product. Right now, the 3.x RefApp metadata is a mishmash of stuff that’s necessary for things to function, stuff that’s nice-to-have but could be replaced with other implementation-specific data, and a small amount of stuff that’s just for the demo environments. This may not be a real concern for implementations per se, but it’s something we need the ability to segregate out, at least for the reference application. Ideally, we’d publish these metadata packages to the Maven repo and be able to specify them in or a successor format.
  3. Right now, we can build both x64 and ARM images, but there’s no clear way to promote images for both architectures: the docker tag command essentially only works on the base Docker machine, which means that the current test and demo images are x64-only.

Re 1. The new Dockerfiles rely on build-distro to fetch all artifacts, so we don’t want to drop that. We shall update the SDK to output the new Dockerfiles. I wouldn’t create the new Dockerfiles as part of the build-distro goal, but rather as a new option in the create-project goal, i.e. ‘Distribution’. The new Dockerfiles won’t change between builds (thus no point in recreating them in build-distro) and they can be used to run builds and publish the produced images (the preferred approach).

Re 2. It deserves a separate Talk post, so let me open up a discussion in the coming days.

Re 3. It’s an easy fix. Instead of docker tag, we need to use:

docker buildx imagetools create --tag "$NEW_TAG" "$OLD_TAG"

This creates a new manifest list under the target tag that still references the images for both architectures, rather than retagging only the one image present on the local machine.

Agreed. First of all, this is not an impediment to the alpha release, and it deserves its own thread. There are multiple ways to address this effectively, and it should be discussed on that dedicated thread.