FWIW, what I am experiencing with @raff’s docker setup is exactly what I got when I tried to manually set up a local instance of O3 by running the npx openmrs@next build and npx openmrs@next assemble commands and copying the frontend assets to the OpenMRS data folder. So, what is the trick done by dev3?
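For context, the manual flow I tried looked roughly like this (the destination path and exact flags are from my setup, so treat them as illustrative):

```bash
# Build the app shell with the latest (next) openmrs tooling
npx openmrs@next build --target ./spa

# Assemble the frontend modules declared in the build config into the same directory
npx openmrs@next assemble --mode config --config spa-build-config.json --target ./spa

# Copy the built assets into the folder the backend serves the SPA from
# (the destination depends on how your server is set up)
cp -r ./spa/* ~/openmrs/data/frontend/
```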
Actually the error I see that looks most relevant is this:
An extension slot with the name ‘patient-chart-dashboard-slot’ already exists. Refusing to register the same slot name twice (in “registerExtensionSlot”). The existing one is from module @openmrs/esm-patient-chart-app.
Don’t know why it’s trying to register the same slot twice though.
It depends on what version of the openmrs tool the build step is run with.
So openmrs build creates the app shell (which includes esm-framework) at the same version as the openmrs tool that’s run. But you should be able to determine that in a running system by checking window.spaVersion (set as part of the build process).
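One quick way to see what that resolves to at any given moment, using only npm’s own metadata commands (nothing OpenMRS-specific here):

```bash
# Which concrete version the "next" tag of the openmrs tooling currently points at
npm view openmrs dist-tags

# Which framework / app-shell dependencies that version of the tooling would pull in
npm view openmrs@next dependencies
```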
That lock file just determines the versions used when developing openmrs-esm-home. At runtime, the framework is provided by the runtime environment. (More exactly, things declared as peer dependencies are expected to be loaded at runtime from the provided environment.) Maybe this map is helpful in seeing how things work?
A quick note on setup: while we have yarn.lock files in the root of our monorepos, those may or may not be determinative of the versions used because (at runtime) we only ever load the apps themselves and the components bundled in them. Most of the apps are built using this webpack configuration, though here the version will vary depending on the version declared in the lockfile (it doesn’t change much, so this is usually a non-issue).
Most of our apps have an analyze script which can be used to examine the built bundle in some detail (though it’s mostly useful for seeing what ends up in what chunk and the overall size).
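For example (the script name and behaviour are assumed from our shared tooling and may differ per repo):

```bash
# From the root of the app's package: install deps, then run the bundle analyzer
yarn install
yarn analyze   # typically wraps webpack-bundle-analyzer and opens a chunk/size report
```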
That’s helpful, thanks! Apparently, o3.openmrs.org runs the 3.3.1 version of esm-framework, whereas the latest version used when you build locally is 3.4.1-pre.139, which might be broken.
Hmmm… this postversion script is supposed to ensure it always points to a fixed version, but obviously, that’s not working as intended… I guess they need to be moved back into the version hook?
Yeah, actually, it is. The problem is that the REST module reports its version number like this: 2.36.0.7803f0, which is not a valid semver version. The other issue I see is that some of the backend modules declare a dependency on FHIR2 at ^1.2 where what they really want is ^1, 1.x, or (in most cases) *.
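To make that concrete, here’s a quick check with the semver CLI (a sketch; the versions are just the examples from above):

```bash
# The REST module's reported version isn't valid semver, so this prints nothing and exits non-zero
npx semver 2.36.0.7803f0

# And a ^1.2 range excludes 1.1.x, which is why ^1, 1.x, or * would be the safer declaration
npx semver -r "^1.2" 1.1.0   # no output: 1.1.0 does not satisfy ^1.2
npx semver -r "^1" 1.1.0     # prints 1.1.0
```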
Thanks @ibacher for helping me out here to better understand things in o3.
I discovered that window.spaVersion was returning the @openmrs/esm-devtools-app version defined in spa-build-config.json and not the version set for the npx openmrs@SPA_VERSION build.
There are a lot of moving pieces in the o3 framework and it’s not clear to me how to determine the exact versions of the ESM apps used in a specific environment. To make things harder, o3.openmrs.org is now broken for patient search and registration as well… things that worked a few days ago. I don’t even see a quick way to determine which commit broke them.
Is there a way to determine, at runtime, the exact versions of the ESM apps that end up in the final bundle for a distro?
How can we quickly connect a version to a specific commit?
Is there anyone in the community who does know what is broken in o3 or in local builds of distro 3.x and simply does not have time to address these issues? Or do we have a bigger problem that we aren’t really aware of? Personally, I don’t have a clue what’s broken or what needs to be debugged, since I don’t know which versions to debug and I don’t have a way to debug the final build.
Technically, spaVersion is actually set here, which should be this build-time constant, so it should be the version of the app shell that’s used. You pointed out that the openmrs tool was depending on the app shell at a loose version range rather than a pinned version. It was the dependency on a pinned version that allowed the version of the openmrs tool being run to predict the resulting spaVersion. I’ve fixed that, so we should be back (for newer versions of the openmrs tool) to the version of openmrs dictating the SPA version.
We don’t have a good answer for that for prerelease versions (there are Git tags for actual releases). This is similar to the problem of tracing a -SNAPSHOT artefact back to its commit… except that it is (in principle) solvable: prerelease versions all have the GitHub run number as the -pre version, and the GitHub run number should be resolvable to a commit hash.
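If it helps, with the GitHub CLI the lookup might go roughly like this (the repo here is just an example; use whichever repo publishes the package whose -pre number you’re tracing):

```bash
# Suppose the deployed version is 3.4.1-pre.139: "139" is the GitHub Actions run number.
# List recent runs with their run numbers and head commits, then pick out run 139.
gh run list --repo openmrs/openmrs-esm-core --limit 200 --json number,headSha \
  --jq '.[] | select(.number == 139) | .headSha'
```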
o3 is using (relatively) old versions of things because it was impossible to deploy for a while. Nothing should’ve changed recently though… except for the migration to Jetstream2?
That would most likely be me, except that I don’t know exactly what’s wrong.
The real issue was that we couldn’t promote builds to o3 for quite a while due to disk-space limits on the Bamboo agents and a rather disk-heavy previous build system. Moving things to Docker and using Docker tags was (supposed to) fix that. I don’t think we’ve yet deployed the new nightly images to dev3 to confirm that they work, which is a pre-requisite to getting both test3 (which only semi-exists) and o3 up and running.
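If anyone wants to verify locally, pulling and running the nightly images might look roughly like this (the image names and tags below are my assumptions — check Docker Hub and the distro repo for the ones actually published):

```bash
# Pull the nightly images (names/tags assumed; adjust to what the distro actually publishes)
docker pull openmrs/openmrs-reference-application-3-frontend:nightly
docker pull openmrs/openmrs-reference-application-3-backend:nightly
docker pull openmrs/openmrs-reference-application-3-gateway:nightly

# The distro repo ships a docker-compose file that wires the three together
docker compose up -d
```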
It should be fairly easy to append the git short commit hash (8 chars) to the prerelease version (e.g. 3.1.1-pre.123.a3fascd4), if that doesn’t break ordering, or to set it in some variable in the final JS package when building; see e.g. https://github.com/OpenConceptLab/oclweb2/blob/master/set_build_version.sh
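Something like this, as a rough sketch (the base version and run number are placeholders that would come from the existing release tooling):

```bash
# Derive a prerelease version that carries the short commit hash
BASE_VERSION=3.1.1
RUN_NUMBER=${GITHUB_RUN_NUMBER:-123}
COMMIT=$(git rev-parse --short=8 HEAD)

# Dots create additional prerelease identifiers, so ordering by run number is preserved
# and the hash just rides along as an extra identifier.
VERSION="${BASE_VERSION}-pre.${RUN_NUMBER}.${COMMIT}"
echo "$VERSION"   # e.g. 3.1.1-pre.123.a3fascd4
```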
It’s possible that I tested on dev3 recently and mixed it up with o3, which was indeed broken for longer.
What about debugging the complete set of apps included in a distro? Do we have a way to build a distro with source maps included so it can be properly debugged in browsers?
Not an easy way, because we don’t publish development builds anywhere. Initially, we were including sourcemaps in all the production builds as well (which is why they are referenced by DevTools), but we were getting complaints about the amount of bandwidth this added, so in this PR I removed them, which saved around 150 MB of disk space and a fair bit of bandwidth.
Source maps are still added when running in dev mode. This means you could build all of the apps in dev mode and then use openmrs assemble with a build configuration file using file: URLs to point to those dev builds (support for that is here).
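A rough sketch of that flow, with the caveat that the script names, flags, and build-config shape below are my assumptions and may need adjusting per repo:

```bash
# In each app repo you want to debug: produce a development build (with sourcemaps).
# The exact invocation varies by repo; invoking webpack directly is shown here as an assumption.
yarn install
npx webpack --mode development

# Then point openmrs assemble at the local builds using file: URLs in the build config.
# The frontendModules mapping is the usual shape, but double-check it against the
# distro's spa-build-config.json; the path is just an example.
cat > local-build-config.json <<'EOF'
{
  "frontendModules": {
    "@openmrs/esm-patient-chart-app": "file:../openmrs-esm-patient-chart/packages/esm-patient-chart-app"
  }
}
EOF

npx openmrs@next assemble --mode config --config local-build-config.json --target ./spa
```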
So, I thought this was down to permissions errors, which I was able to fix by recreating the Docker volumes. However, I’m still having issues. To have a shared place to make sure we’re all reproducing the same environment, test3.openmrs.org has been (temporarily) hijacked to just run the nightly images of all three with no modifications.
Right now, the backend starts, but neither the Legacy UI nor the REST web services appear to be responding as I would expect. I.e., a GET request to https://test3.openmrs.org/openmrs/ws/rest/v1/session returns:
Conversely, though, a GET request to, e.g., https://test3.openmrs.org/openmrs/ws/fhir2/R4/metadata returns a response as I would expect (response is a bit big to reproduce here), which does imply that OpenMRS is running…
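For reference, the two requests as curl commands (same endpoints as above):

```bash
# REST session endpoint: not responding as expected right now
curl -i https://test3.openmrs.org/openmrs/ws/rest/v1/session

# FHIR metadata endpoint: returns the expected CapabilityStatement, so the backend itself is up
curl -s https://test3.openmrs.org/openmrs/ws/fhir2/R4/metadata | head -c 500
```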
@ibacher, I see test3 does respond properly now. I take it you were able to fix it, or did it fix itself?
We do need sourcemaps. They don’t need to be included in production images. It is possible to serve them from a local machine for an app running in a production environment, as laid out e.g. here. Another approach is to produce two images: one for production and one for debugging with sourcemaps included. The most important thing is to store them, or to be able to easily produce them, for the specific version running in production. Was the bandwidth an issue due to storing too much data, or from production users/devs complaining about the size of images?
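For the “serve them from a local machine” option, a minimal sketch (assuming you can reproduce a debug build of the same version locally; the port and static server are arbitrary choices):

```bash
# Serve locally-built sourcemaps so a production bundle can be debugged in the browser
npx http-server ./spa --port 8081 --cors

# Then, in the browser DevTools Sources panel, right-click the deployed script and use
# "Add source map…", pointing it at the matching .map file served from localhost:8081.
```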