O3 Implementations

Hi all,

This is going to be a somewhat long post. Please bear with me, as I think it’s an important discussion of where we’re headed.

Vision

One of the biggest changes between the O2 and O3 Reference Applications is the philosophy with which we’ve tried to build them. The O2 Reference Application is a minimal EMR, meaning that it provides a good amount of core EMR functionality, but it is fundamentally an example (a “reference”) for how the various components that make up O2 can be combined into a functioning app. It was anticipated that implementations would take that skeleton and re-work it for their needs. Most prominently, this meant that there wasn’t always a clear way to build on top of the reference application without forking the repo and manually synchronizing dependencies.

With O3, we’re aiming to have the reference application be more of a “product” than a “reference”, which is to say that we want to enable implementations to build on top of the core of the Reference Application. This is going to be a bit of a process as we figure out the optimal way to do this. To be clear, we still expect that implementations will need to maintain their own distribution repository for the implementation, but the goal is to make it so that this repository can primarily be about managing implementation-specific customizations.

Concretely, we want this transition to be incremental. An overall goal is to still be able to have the build determined by a single file (distro.properties) that produces the necessary artifacts via the SDK’s build-distro command. There are a few steps to get to this point with O3, but this is still fundamentally how we’re building the backend in the Reference Application.
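
For anyone less familiar with that flow, here is a minimal sketch; the module entries are illustrative and the exact build-distro parameters should be double-checked against the SDK documentation:

# distro.properties (illustrative entries)
name=My Distro
version=1.0.0
war.openmrs=2.6.1
omod.webservices.rest=2.40.0

# Produce the distribution artifacts described by the file above
mvn openmrs-sdk:build-distro -Ddistro=distro.properties -Ddir=docker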

Frontend

One of the biggest missing pieces in the O3 frontend is something like the plug-and-play nature of OpenMRS’s backend module system. To load a backend module in OpenMRS, you just need to ensure that the OMOD is in the system module directory and it’s automatically loaded for you. Doing this on the frontend is a little more challenging because the O3 frontend is delivered as a set of static files without a server-side component.[1]

Up until now, however, the frontend build tooling has required implementations to completely specify all the apps and dependencies they want to include, using a file similar to the spa-assemble-config.json in the Reference Application. This becomes a bit of a pain for implementations, because there’s no obvious way to take the Reference Application configuration and add or remove modules from it; it basically requires implementations to monitor our published versions and update their spa-assemble-config.json files whenever they want to “upgrade to a new version of the Reference Application”.

We now have (for the latest development builds and the next upcoming beta) what’s hopefully a more workable solution for this. Specifically, this solution has two components:

  1. When the frontend is built, it now creates a zip file containing the exact versions of both core and the frontend modules used to build that version of the Reference Application. This is published to our Maven repository[2] as org.openmrs.distro:referenceapplication-frontend:3.0.0-SNAPSHOT. The zip contains a single file, spa-assemble-config.json, that is consumable by the openmrs NPM CLI tool.
  2. We’ve added two features to the openmrs assemble command that enable implementations to build on top of the published spa-assemble-config.json[3]. First, it’s now possible to specify multiple configuration files on the command line for the assemble command; the tooling will merge these files, allowing new custom modules to be added. Second, the assemble configuration file now supports a frontendModuleExcludes property, an array of the names of modules to exclude, which allows an implementation to remove applications included in the Reference Application. (See the sketch below for how these pieces fit together.)
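
To make this concrete, here is a rough sketch of how an implementation might consume the published configuration. The custom module name is made up, and the exact flags (in particular, whether --config can simply be repeated) are assumptions to verify against the tooling’s --help output:

# Fetch the published frontend descriptor from Maven; the zip contains a
# single spa-assemble-config.json file
mvn dependency:copy \
  -Dartifact=org.openmrs.distro:referenceapplication-frontend:3.0.0-SNAPSHOT:zip \
  -DoutputDirectory=.
unzip referenceapplication-frontend-*.zip

# custom-config.json (hypothetical): adds one app and removes another
# {
#   "frontendModules": { "@myorg/esm-my-custom-app": "1.0.0" },
#   "frontendModuleExcludes": ["@openmrs/esm-cohort-builder-app"]
# }

# Assemble, merging the Reference Application config with the custom one
npx openmrs assemble \
  --config spa-assemble-config.json \
  --config custom-config.json \
  --target frontend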

Docker

We’ve been pushing pretty hard on the community to use Docker / OCI containers as a deployment mechanism. Our frontend image is designed to be reusable in deployments: all that is necessary is to replace the contents of /usr/share/nginx/html/ on the image with whatever static frontend files you want to serve. For example, you could use a docker-compose descriptor like:

# ... snip ...

frontend:
  image: openmrs/openmrs-reference-application-3-frontend:3.0.0-beta.16
  restart: "unless-stopped"
  environment:
    SPA_PATH: /openmrs/spa
    API_URL: /openmrs
    SPA_CONFIG_URLS:
    SPA_DEFAULT_LOCALE:
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost/"]
    timeout: 5s
  depends_on:
    - backend
  volumes:
    - './frontend:/usr/share/nginx/html/'

This assumes that the local ./frontend directory contains the built frontend files. Alternatively, if you wanted to publish your own Docker container, you could use something like this example Dockerfile:

FROM openmrs/openmrs-reference-application-3-frontend:3.0.0-beta.16

RUN rm -rf /usr/share/nginx/html/*
COPY ./frontend /usr/share/nginx/html/

This does basically the same thing.

Future Steps

Currently, leveraging all of this is a little clunky. Hopefully over time, we’ll be able to iron things out with some work in the SDK to more easily support these new features and to add something similar for backend configurations. Specifically:

  • distro.properties should support specifying the frontend artifact version and use that as a base when building the frontend (see the hypothetical sketch after this list).
  • The SDK should generate Docker images built on our newer Docker images rather than the old versions that are being used.
  • We should extend the build-distro command with capabilities for overriding an existing distribution, similar to the ones described here.
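
Purely as a hypothetical sketch of the first item (the property name is invented for illustration; nothing like this exists yet), distro.properties might eventually grow an entry along the lines of:

# distro.properties (hypothetical future entry)
spa.frontend=org.openmrs.distro:referenceapplication-frontend:3.0.0-SNAPSHOT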

  1. It’s also unclear if there’s a straightforward modular frontend deployment package we could depend on. The existing solutions I’m aware of either (1) rely on a long-running backend service that serves as the “module registry”, like Piral’s feeds or single-spa’s import-map-deployer, or (2) require a hosted CDN instance, like Baseplate and Piral’s default setup; neither of these is practical for us.

    The solution that most closely resembles OpenMRS’s backend module system would be to have a server-side instance that parses through a set of modules and builds the necessary routes.registry.json and importmap.json files. The necessary metadata is basically already in the NPM package files. I attempted something like that here, albeit without a full server-side component. The main issue with this approach is that it is hard to make it work on machines without access to the internet. ↩︎

  2. If you’re a frontend developer, publishing a record of the frontend artifacts we used to Maven may look like a funny decision. Ultimately, our SDK is based on Maven and most existing distribution builds are heavily dependent on Maven, so this seemed like the option that would cause the least friction. If there’s an ask for it, we could also publish something to the NPM registry, though. ↩︎

  3. Currently the NPM tooling is not capable of working directly with the zip file. The expectation is that another process downloads and consumes the file. ↩︎

12 Likes

Note that the spa-assemble-config.json looks like this (this is the latest version as of this writing):

{
    "coreVersion": "5.3.3-pre.1443",
    "frontendModules": {
        "@openmrs/esm-patient-banner-app": "6.1.1-pre.3524",
        "@openmrs/esm-patient-attachments-app": "6.1.1-pre.3524",
        "@openmrs/esm-patient-allergies-app": "6.1.1-pre.3524",
        "@openmrs/esm-login-app": "5.3.3-pre.1443",
        "@openmrs/esm-primary-navigation-app": "5.3.3-pre.1443",
        "@openmrs/esm-patient-lists-app": "6.1.1-pre.3524",
        "@openmrs/esm-patient-conditions-app": "6.1.1-pre.3524",
        "@openmrs/esm-patient-immunizations-app": "6.1.1-pre.3524",
        "@openmrs/esm-patient-notes-app": "6.1.1-pre.3524",
        "@openmrs/esm-form-entry-app": "6.1.1-pre.3524",
        "@openmrs/esm-patient-forms-app": "6.1.1-pre.3524",
        "@openmrs/esm-home-app": "5.2.1-pre.320",
        "@openmrs/esm-patient-orders-app": "6.1.1-pre.3524",
        "@openmrs/esm-patient-appointments-app": "6.1.1-pre.3524",
        "@openmrs/esm-patient-flags-app": "6.1.1-pre.3524",
        "@openmrs/esm-devtools-app": "5.3.3-pre.1443",
        "@openmrs/esm-system-admin-app": "4.0.2-pre.88",
        "@openmrs/esm-openconceptlab-app": "4.0.2-pre.88",
        "@openmrs/esm-implementer-tools-app": "5.3.3-pre.1443",
        "@openmrs/esm-patient-list-management-app": "5.2.2-pre.2602",
        "@openmrs/esm-active-visits-app": "5.2.2-pre.2602",
        "@openmrs/esm-patient-programs-app": "6.1.1-pre.3524",
        "@openmrs/esm-patient-labs-app": "6.1.1-pre.3524",
        "@openmrs/esm-patient-chart-app": "6.1.1-pre.3524",
        "@openmrs/esm-cohort-builder-app": "3.0.1-pre.183",
        "@openmrs/esm-patient-medications-app": "6.1.1-pre.3524",
        "@openmrs/esm-patient-search-app": "5.2.2-pre.2602",
        "@openmrs/esm-patient-registration-app": "5.2.2-pre.2602",
        "@openmrs/esm-generic-patient-widgets-app": "6.1.1-pre.3524",
        "@openmrs/esm-service-queues-app": "5.2.2-pre.2602",
        "@openmrs/esm-appointments-app": "5.2.2-pre.2602",
        "@openmrs/esm-patient-vitals-app": "6.1.1-pre.3524",
        "@openmrs/esm-dispensing-app": "1.2.2-pre.265",
        "@openmrs/esm-fast-data-entry-app": "1.0.1-pre.124",
        "@openmrs/esm-form-builder-app": "2.2.2-pre.664"
    }
}

If you look closely, you’ll see that this excludes many elements that were in the old spa-build-config.json, specifically these:

{
  "spaPath": "$SPA_PATH",
  "apiUrl": "$API_URL",
  "configUrls": ["$SPA_CONFIG_URLS"],
  "defaultLocale": "$SPA_DEFAULT_LOCALE",
  "importmap": "$SPA_PATH/importmap.json",
  "routes": "$SPA_PATH/routes.registry.json",
  "supportOffline": false
}

These properties are now in the spa-build-config.json file of the Reference Application. Technically, the properties in spa-assemble-config.json are those consumed by the openmrs assemble command, while those in spa-build-config.json are used by the openmrs build command. The reason for separating these out is that spa-assemble-config.json is independent of the way in which the application is deployed, while spa-build-config.json is not (the version in the Reference Application depends on a number of environment variables that are actually handled by a shell script at container start-up).

Basically, the spa-assemble-config.json should be usable regardless of how you’re deploying the frontend, while the spa-build-config.json is only usable if you’re using our provided Docker container. This also means that the spa-build-config.json file isn’t published, though we can look into that if it’s a feature implementations would like.
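
In other words, the split looks roughly like the following; the flag names here are assumptions based on how the commands are described above, so verify them against the openmrs tooling’s --help:

# Resolve and download the frontend modules listed in the
# deployment-agnostic assemble config
npx openmrs assemble --config spa-assemble-config.json --target frontend

# Build the app shell (index.html, import map references, etc.) using the
# deployment-specific build config
npx openmrs build --build-config spa-build-config.json --target frontend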

5 Likes

Thank you so much Ian for this very clear and extremely timely post. @dkayiwa and @wikumc, please work on identifying what the SDK roadmap urgently needs in order to facilitate this. @raff, please kindly be aware of this general strategy.

I believe the following folks will be especially interested: @eudson @alaboso @pirupius @samuel34 @pmanko @gcliff @reagan @mozzy @minimalist @aojwang @dkibet @slubwama @mmwanje @mksd & please tag anyone else you think should be in the loop.

4 Likes

I’m thrilled to read about the progress in transitioning from O2 to O3 for the OpenMRS Reference Applications. The shift towards making the Reference Application more of a “product” and the improvements in handling frontend challenges are particularly fascinating.

As someone eager to contribute and potentially participate in Google Summer of Code (GSoC), I’m curious about the implications of these changes on the overall development workflow. Specifically, could you provide more insights into how the incremental implementation process will impact the overall build and customization for different implementations?

Additionally, the approach to handling frontend configurations and the use of Docker/OCI containers are compelling. I’m interested in understanding more about how these changes will streamline the deployment process for different environments.

Thank you for keeping the community informed, and I’m looking forward to learning more about the ongoing developments!

Ideally, it’s a complete opt-in. That is, if you want to use the RefApp base versions, you’ll be able to use the RefApp base versions and add or remove on top of that, but if you don’t, things will continue to work as they always have.

Implementation is, in our model, a downstream phase of development, so it should have zero impact on development workflows per se. That said, I suppose the features will be exposed via the SDK, so we might look into the ability to, e.g., run the setup command with multiple configuration files.

The goal here is to add new workflow options without replacing or changing how things currently work.

2 Likes

Is this a fair representation?

FWIW, our goal is to move away from hot-loading of modules in OpenMRS, since, while it’s handy for testing out functionality, it has some serious downsides: it makes the system less reliable (memory leaks), slows the development cycle (slower startup times), and makes the system less predictable (state varies depending on which modules have been loaded, vs. using a pre-defined build plan like distro.properties).

2 Likes

LGTM

I wasn’t exactly referring to hot module loading (i.e., loading modules at runtime). I just meant that with our current iteration of Docker images, any OMODs in the folder /openmrs/distribution/openmrs_modules become part of the application when the Docker image starts (essentially, in the start-up script, we copy whatever files are in that directory into the modules subdirectory of the application directory). Basically, this means implementations can use the RefApp image but add any additional OMODs that they want.
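
For example, a derived image along these lines (the backend image name and tag are assumed here; adjust to whatever base you actually use) would add extra OMODs on top of the RefApp backend:

FROM openmrs/openmrs-reference-application-3-backend:3.0.0-beta.16

# Anything in this directory is copied into the application's modules
# directory by the start-up script when the container starts
COPY ./omods/*.omod /openmrs/distribution/openmrs_modules/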

Obviously, as we move towards a more “built” backend, that can all change.

1 Like

Thanks for this great post @ibacher . There is a lot in here, and I wanted to add a few thoughts into the conversation at the risk of these being largely tangential to your overall point, which I fully support.

Just to point out that this is not a technical limitation with O2. It is a symptom of what stakeholders thought the reference application should be at that time, and of active decisions made to avoid packaging too much opinionated functionality.

This also isn’t really an O2 vs O3 issue. The problem is that there is no way to configure one distribution to build off of another distribution’s distro.properties file. Solving that problem would allow this in either O2 or O3, so this isn’t really an incremental O3 benefit per se.

I’m all for this, but I do have some concerns with the current approach. Specifically, I feel that the reference application should be a reference that demonstrates the best-practice way of putting together an O3 distribution. That means that the frontend should have its configuration driven from O3 configuration files, and we should not have a convention where “what is needed in the refapp” becomes the default values in all ESM configuration schemas. There are a lot of reasons I feel this way, but generally I feel that this forces us to continue to improve and optimize the configuration and extension mechanisms and develop best practices around them, and it provides much clearer reference guidelines for other distributions. I strongly feel that specific metadata configuration is inappropriate to use as a default value unless it is packaged in the backend platform.

I feel similarly - though not quite as strongly - that external configuration should drive extensions, and that no extensions should be wired into slots within the compiled ESMs themselves. The referenceapplication configuration (and that of any other distribution) should be explicit about what it includes and how things are wired together, and this should not be buried in a litany of less transparent code repositories on GitHub.

Being provocative: we do always have a server-side component available to us - the backend OpenMRS server, the spa module, etc. Theoretically, couldn’t one drop ESMs into a directory just like the modules folder, fire up OpenMRS, and have a backend module generate the various files (index.html, etc.) from this on startup, eliminating the need for the client-side build step? We don’t necessarily need to run our backend and frontend from different applications/containers. I know this will probably be unpopular, but it might be an interesting thing to discuss, assuming there is technical merit.

The fact that the SDK does not utilize this mechanism may be slowing adoption somewhat. It may be worth thinking through whether new iterations of the SDK should be built around Docker rather than embedded Tomcat.

Thanks for the discussion!

2 Likes

No, it isn’t. I don’t see this as something that O3 is adding, so much as that the difference in approach to the reference application implies we need a different set of tools. And, obviously, those tools should be backwards compatible with O2.

I think there are some worthwhile points to discuss here. There are a couple of reasons that we have extensions that get wired into slots by default: (1) extensions are usually developed for the slot they get inserted into; for example, a widget designed for the main section of the patient chart doesn’t really belong in the left nav; (2) it improves the experience of adding new features for developers; currently, this is basically one step (the PR is merged, and the changes are demoable on dev3).

You’re absolutely right, though, that the story around configurations and extensibility has been harmed by this approach.

Technically, this isn’t hard to implement. I threw together something that did that in this PR. There are a couple of in-the-weeds reasons for not following this approach:

  1. The main JS engine on the JVM is Nashorn, which supports up to ES5 (from 2009); most of our frontend tooling is written with a baseline of ES2015. Some of the libraries we depend on for the openmrs CLI will not work on ES5, and, at least personally, I don’t love the idea of maintaining two code bases to do two different things.
  2. This would actually push the client side build into the application start-up step, which I think is substantially worse from a reliability point of view.
  3. How ESMs are packaged and deployed is a bit of a pain: whereas in Java land JARs are pretty common, there’s no similarly well-established default packaging on the frontend (technically, NPM stores things as gzipped tar archives, but this detail is mostly relevant to npm install and npm publish). In other words, there isn’t really a single “artifact” of a frontend build the way we have with OMODs, which makes things a bit messier.

Obviously, none of these are knock-down reasons we can’t, just why there’s some inertia around it.

2 Likes

Hey @ibacher, when mounting frontend binaries into the Docker image, how likely is it that I will get hard-to-debug UI errors when using an old frontend Docker image with a slightly newer frontend core version or newer frontend module versions? E.g., refApp-3-frontend:3.1.1-rc released about 3 months ago with coreVersion: 5.8.1 released about a month ago.

cc @pirupius @michaelbontyes

There are no frontend binaries (other than I guess fonts and images). Everything is still text files.

The version of the frontend image is unlikely to have an impact as long as you’re replacing all the contents of /usr/share/nginx/html. It does somewhat harm our ability to troubleshoot things for you, though. Again, there’s no binary code, just text, and if you replace the webroot, you’re replacing all the files.

1 Like