Help unblocking an issue with the Reporting & Reporting REST modules in O3?


It happens after selecting a patient from the returned list, as you can see from this URL: http://localhost/openmrs/spa/patient/ab4f09df-bb3a-4e82-b90b-0afc178c4600/chart

FWIW, what I am experiencing with @raff’s Docker setup is exactly what I got when I tried to manually set up a local instance of O3 by running the npx openmrs@next build and npx openmrs@next assemble commands and copying the frontend assets to the OpenMRS data folder. So, what is the trick that dev3 does? :smiley:

Absolutely no idea… I’m guessing there are a lot of errors in the JavaScript console?

One obvious error message is Found modules with unresolved backend dependencies. And yet the number and versions of modules are exactly the same as on dev3.

Attached is a screenshot of the JavaScript console when I refresh this page: http://localhost/openmrs/spa/patient/0a02d558-125f-4926-9229-07c5b1a7d354/chart

As for the 'Dev Tools' black screen, which does not load any ESMs, it shows this JavaScript error.

Actually the error I see that looks most relevant is this:

An extension slot with the name ‘patient-chart-dashboard-slot’ already exists. Refusing to register the same slot name twice (in “registerExtensionSlot”). The existing one is from module @openmrs/esm-patient-chart-app.

Don’t know why it’s trying to register the same slot twice though.

Unfortunately, this is probably just a bug in the code that does the checking.

It seems to be doing the right thing at https://github.com/openmrs/openmrs-esm-core/blob/fa8c4bd18f0a9630748b3e5dea5f6d8029f9691a/packages/framework/esm-utils/src/version.ts#L29
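For reference, that check boils down to semver range matching. Here is a minimal sketch of the idea using the semver npm package; the helper name is my own illustration, not necessarily what version.ts does:

```typescript
import { satisfies } from "semver";

// Illustrative helper (not the framework's actual function): does the installed
// backend module version satisfy the range a frontend module declares?
function backendDependencyMet(installed: string, requiredRange: string): boolean {
  return satisfies(installed, requiredRange);
}

console.log(backendDependencyMet("1.2.3", "^1.2")); // true
console.log(backendDependencyMet("1.1.0", "^1.2")); // false
```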

Is the build process using the latest version of openmrs-esm-framework? I see it’s locked in e.g. https://github.com/openmrs/openmrs-esm-home/blob/master/yarn.lock#L2694. How do I check the versions of all ESM components in the final build?

It depends on what version of the openmrs tool the build step is run with.

So openmrs build creates the app shell (which includes esm-framework) at the same version as the version of the openmrs tool that’s run. But you should be able to determine that in a running system by checking window.spaVersion (set as part of the build process).
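For anyone following along, that check is a one-liner in the browser’s JavaScript console on a running instance (the cast is only there to keep TypeScript quiet about the custom global):

```typescript
// Quick check of the deployed app shell version; spaVersion is set at build time.
console.log("App shell version:", (window as any).spaVersion);
```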

That lock file just determines the version used when developing the openmrs-esm-home. At runtime, the framework is provided by the runtime environment. (More exactly, things declared as peer dependencies are expected to be loaded at runtime from the provided environment). Maybe this map is helpful in seeing how things work?

A quick note on setup: while we have yarn.lock files in the root of our monorepos, those may or may not be determinative of the versions used because (at runtime) we only ever load the apps themselves and the components bundled in them. Most of the apps are built using this webpack configuration, though here the version will vary depending on the version declared in the lockfile (it doesn’t change much, so this is usually a non-issue).
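To illustrate the runtime-provided idea (this is not the actual shared OpenMRS webpack config, just a minimal sketch of the pattern): anything declared as a peer dependency is left out of the app’s own bundle and resolved from the environment the app shell provides.

```typescript
// Minimal sketch only — the real config linked above does considerably more.
// The point: peer dependencies are externals, not bundled code.
import type { Configuration } from "webpack";

const config: Configuration = {
  // ...entry, output, module rules elided...
  externals: {
    // Provided by the app shell at runtime, so every app sees the same copy.
    "@openmrs/esm-framework": "@openmrs/esm-framework",
  },
};

export default config;
```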

Most of our apps have an analyze script which can be used to examine the built bundle in some detail (though it’s mostly useful for seeing what ends up in what chunk and the overall size).

That’s helpful, thanks! Apparently, o3.openmrs.org runs the 3.3.1 version of esm-framework, whereas the latest version used when you build locally is 3.4.1-pre.139, which might be broken.

Even when you force the openmrs build tool to use 3.3.1, it fetches the latest 3.4.1-pre.139, as instructed in openmrs-esm-core/package.json at v3.3.1 · openmrs/openmrs-esm-core · GitHub

I’m not sure how to force the build tool to use 3.3.1 of esm-framework.

Hmmm… this postversion script is supposed to ensure it always points to a fixed version, but obviously, that’s not working as intended… I guess they need to be moved back into the version hook?

Yeah, actually, it is. The problem is that the REST module reports its version number like this: 2.36.0.7803f0, which is not a valid semver version. The other issue I see is that some of the backend modules declare a dependency on FHIR2 at ^1.2, when what they really want is ^1 or 1.x or (in most cases) *.
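To make the two symptoms concrete, here is a small illustration with the semver package (assuming the frontend’s check follows standard semver semantics):

```typescript
import { valid, coerce, satisfies } from "semver";

// Symptom 1: the REST module's reported version isn't valid semver.
console.log(valid("2.36.0.7803f0"));           // null — four dotted segments
console.log(coerce("2.36.0.7803f0")?.version); // "2.36.0" — what a lenient check could fall back to

// Symptom 2: "^1.2" is stricter than the backend modules probably intend.
console.log(satisfies("1.1.0", "^1.2")); // false — a FHIR2 1.1.x install is rejected
console.log(satisfies("1.1.0", "^1"));   // true
console.log(satisfies("1.1.0", "1.x"));  // true
console.log(satisfies("1.1.0", "*"));    // true
```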


Thanks @ibacher for helping me out here to better understand things in o3.

I discovered that window.spaVersion was returning the @openmrs/esm-devtools-app version defined in spa-build-config.json and not the version set for the npx openmrs@SPA_VERSION build.

There are a lot of moving pieces in the o3 framework, and it’s not clear to me how to determine the exact versions of ESM apps used for a specific environment. To make things harder, o3.openmrs.org is now broken as well for patient search and registration… things that worked a few days ago. I don’t even see a quick way to determine which commit broke them.

Is there a way to determine the exact versions of esm apps at runtime that end up in the final bundle for a distro?

How to quickly connect a version to specific commits?

Is there anyone in the community who does know what is broken in o3 or in local builds of distro 3.x and simply does not have time to address these issues? Or do we have a bigger problem that we don’t really know about? Personally, I don’t have a clue what’s broken or what needs to be debugged, as I don’t know which versions to debug and I don’t know a way to debug the final build.

Working on that (see this commit).

Technically, spaVersion is actually set here, which should be this build-time constant, so it should be the version of the app shell that’s used. You pointed out that the openmrs tool was depending on a relative version rather than a strict version. It was the dependency on a strict version that allowed the version of the openmrs tool being run to predict the resulting spaVersion. I’ve fixed that, so we should be back (for newer versions of the openmrs tool) to the version of openmrs dictating the SPA version.
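For context (this is a sketch of the general mechanism, not the actual app-shell code, and the constant name is made up): build-time constants like that are typically injected with webpack’s DefinePlugin and then assigned to the global.

```typescript
import webpack from "webpack";

// Hypothetical source of the version string; the real one lives in the
// app-shell build linked above.
const frameworkVersion = process.env.FRAMEWORK_VERSION ?? "0.0.0-dev";

export default {
  // ...rest of the app-shell configuration elided...
  plugins: [
    new webpack.DefinePlugin({
      // Every occurrence of __SPA_VERSION__ in the source is replaced at build
      // time; the shell can then do `window.spaVersion = __SPA_VERSION__;`.
      __SPA_VERSION__: JSON.stringify(frameworkVersion),
    }),
  ],
};
```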

We don’t have a good answer for that for prerelease versions (there are Git tags for actual releases). This is similar to the problem of tracing back a -SNAPSHOT artefact to its commit… except that it is (in principle) solvable: prerelease versions all have the GitHub run number as the -pre version, and the GitHub run number should be resolvable to a commit hash.
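As a rough illustration of that “in principle solvable” part, something like the following could map a -pre.N build number back to a commit via the GitHub Actions API (the repo and the assumption that N is the Actions run number come from this discussion; the helper itself is hypothetical and only checks the most recent page of runs):

```typescript
// Hypothetical helper: resolve a prerelease build number (e.g. the 139 in
// 3.4.1-pre.139) to the commit that produced it.
async function commitForPreRelease(runNumber: number): Promise<string | undefined> {
  const res = await fetch(
    "https://api.github.com/repos/openmrs/openmrs-esm-core/actions/runs?per_page=100"
  );
  const { workflow_runs } = (await res.json()) as {
    workflow_runs: Array<{ run_number: number; head_sha: string }>;
  };
  return workflow_runs.find((run) => run.run_number === runNumber)?.head_sha;
}

commitForPreRelease(139).then((sha) => console.log("3.4.1-pre.139 was built from", sha));
```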

o3 is using (relatively) old versions of things because it was impossible to deploy for a while. Nothing should’ve changed recently though… except for the migration to Jetstream2?

That would most likely be me, except that I don’t know exactly what’s wrong. :grin:

The real issue was that we couldn’t promote builds to o3 for quite a while due to disk-space limits on the Bamboo agents and a rather disk-heavy previous build system. Moving things to Docker and using Docker tags was (supposed to) fix that. I don’t think we’ve yet deployed the new nightly images to dev3 to confirm that they work, which is a pre-requisite to getting both test3 (which only semi-exists) and o3 up and running.

It should be fairly easy to append the git short commit hash (8 chars) to the prerelease version (e.g. 3.1.1-pre.123.a3fascd4), if that doesn’t break ordering, or to set it in some variable in the final JS package when building; see e.g. https://github.com/OpenConceptLab/oclweb2/blob/master/set_build_version.sh
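On the ordering question: as far as standard semver precedence goes, adding the hash as an extra prerelease identifier shouldn’t break anything, since run numbers are still compared first. A quick check with the semver package:

```typescript
import { gt } from "semver";

console.log(gt("3.1.1-pre.123.a3fascd4", "3.1.1-pre.123")); // true — extra identifier wins a tie
console.log(gt("3.1.1-pre.124", "3.1.1-pre.123.a3fascd4")); // true — 124 > 123 is compared first
console.log(gt("3.1.1", "3.1.1-pre.124"));                  // true — a release beats any prerelease
```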

It’s possible that I tested on dev3 recently and mixed it up with o3, which was indeed broken for longer.

Yes, we have the new nightly images deployed to dev3 via Reference Application - Distribution 3.x 168: Build result summary - OpenMRS Bamboo. It is running images tagged dev3. It is not deploying the frontend, as that is served by the CDN, and the frontend is broken for the current distro build…

This is great! Thanks!

What about debugging the complete set of apps included in a distro? Do we have a way to build a distro with source maps included so it can be properly debugged in browsers?

That is exactly what I was asking for a few weeks ago!

There’s no easy way, because we don’t publish development builds anywhere. Initially, we were including sourcemaps in all the production builds as well (which is why they are referenced by DevTools), but we were getting complaints about the amount of bandwidth this added, so in this PR I removed them, which saved around 150 MB of disk space and a fair bit of bandwidth.

Source maps are still added when running in dev mode. This means you could build all of the apps in dev mode and then use openmrs assemble with a build configuration file using file: URLs to point to those dev builds (support for that is here).
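For example (illustrative only; the exact key names and flags should be checked against the openmrs tool’s documentation), an assemble config pointing at a local dev build might look like the object below, written out as spa-assemble-config.json and passed via --config:

```typescript
import { writeFileSync } from "node:fs";

// Illustrative shape of an assemble configuration using file: URLs to pick up
// locally built (dev-mode, sourcemapped) apps instead of published packages.
const assembleConfig = {
  frontendModules: {
    // Example path — point it at your local dev build of the app.
    "@openmrs/esm-patient-chart-app":
      "file:../openmrs-esm-patient-chart/packages/esm-patient-chart-app",
  },
};

writeFileSync("spa-assemble-config.json", JSON.stringify(assembleConfig, null, 2));
// Then: npx openmrs assemble --config spa-assemble-config.json
```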

So, I thought this was down to permissions errors, which I was able to fix by recreating the Docker volumes. However, I’m still having issues. To have a shared place to make sure we’re all reproducing the same environment, test3.openmrs.org has been (temporarily) hijacked to just run the nightly images of all three with no modifications.

Right now, the backend starts, but neither the Legacy UI nor the REST web services appear to be responding as I would expect. I.e., a GET request to https://test3.openmrs.org/openmrs/ws/rest/v1/session returns:

HTTP/1.1 404
Connection: keep-alive
Content-Language: en
Content-Length: 682
Content-Type: text/html;charset=utf-8
Date: Wed, 03 Aug 2022 18:15:01 GMT
Server: nginx/1.18.0 (Ubuntu)
Set-Cookie: JSESSIONID=0FE090100F0DEC733A634BC0E2ECFC75; Path=/openmrs; HttpOnly

Similarly, a request for https://test3.openmrs.org/openmrs/index.htm results in a 404 response, whereas I’d expect it to serve the Legacy UI.

Conversely, though, a GET request to, e.g., https://test3.openmrs.org/openmrs/ws/fhir2/R4/metadata returns a response as I would expect (response is a bit big to reproduce here), which does imply that OpenMRS is running…

Here is a Gist containing the startup log.

@ibacher, I see test3 does respond properly now. I understand you were able to fix it or it fixed itself :slight_smile:

We do need sourcemaps. They don’t need to be included in production images. It is possible to serve them from a local machine for an app running in a production environment, as laid out e.g. here. Another approach is to produce two images: one for production and one for debugging, with sourcemaps included. The most important thing is to store them, or to be able to easily produce them, for the specific version running in production. Was the bandwidth an issue due to storing too much data, or from production users/devs complaining about the size of the images?