A quick note on setup: while we have yarn.lock files in the root of our monorepos, those may or may not reflect the versions actually used, because at runtime we only ever load the apps themselves and the components bundled into them. Most of the apps are built using this webpack configuration, though the exact version of it will vary depending on what's declared in the lockfile (it doesn't change much, so this is usually a non-issue).
Most of our apps have an analyze script which can be used to examine the built bundle in some detail (though it's mostly useful for seeing what ends up in which chunk and the overall size).
Yeah, actually, it is. The problem is that the REST module reports its version number like this: 220.127.116.1103f0, which is not a valid semver version. The other issue I see is that some of the backend modules declare a dependency on FHIR2 at ^1.2, where what they really want is ^1, 1.x, or (in most cases) *.
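To make it concrete why `^1.2` is stricter than these modules likely intend, here's a minimal sketch of caret-range semantics (simplified: it ignores prerelease tags and the special rules for 0.x versions; real resolvers use the `semver` npm package):

```typescript
// Minimal sketch of caret-range matching: "^X.Y" means ">=X.Y.0 <(X+1).0.0".
// Handles only ranges like "^1" and "^1.2"; not a general semver implementation.
function caretSatisfies(version: string, range: string): boolean {
  const v = version.split(".").map(Number);        // "1.1.0" -> [1, 1, 0]
  const r = range.slice(1).split(".").map(Number); // "^1.2"  -> [1, 2]
  if (v[0] !== r[0]) return false;                 // major version must match
  for (let i = 1; i < r.length; i++) {
    if (v[i] > r[i]) return true;                  // above the floor: ok
    if (v[i] < r[i]) return false;                 // below the floor: not ok
  }
  return true;
}

console.log(caretSatisfies("1.1.0", "^1.2")); // false: 1.1.x is below the 1.2 floor
console.log(caretSatisfies("1.3.0", "^1.2")); // true
console.log(caretSatisfies("1.1.0", "^1"));   // true: any 1.x satisfies ^1
```

So a backend module pinned at `^1.2` refuses to run against FHIR2 1.1.x even though it would likely work fine, which is the mismatch described above.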
Thanks @ibacher for helping me out here to better understand things in o3.
I discovered that window.spaVersion was returning the @openmrs/esm-devtools-app version defined in spa-build-config.json and not the version set for the npx openmrs@SPA_VERSION build.
There are a lot of moving pieces in the o3 framework, and it's not clear to me how to determine the exact versions of the esm apps used in a specific environment. To make things harder, o3.openmrs.org is now broken as well for patient search and registration… things that worked a few days ago. I don't even see a quick way to determine which commit broke them.
Is there a way to determine the exact versions of esm apps at runtime that end up in the final bundle for a distro?
How to quickly connect a version to specific commits?
Is there anyone in the community who knows what is broken in o3 or local builds of distro 3.x and simply does not have time to address these issues? Or do we have a bigger problem that we don't really understand? Personally, I don't have a clue what's broken and what needs to be debugged, as I don't know which versions to debug and I don't know a way to debug the final build.
Technically, spaVersion is actually set here, which should be this build-time constant, so it should be the version of the app shell that's used. You pointed out that the openmrs tool was depending on a loose version range rather than a pinned version. It was the pinned dependency that allowed the version of the openmrs tool you ran to predict the resulting spaVersion. I've fixed that, so (for newer versions of the openmrs tool) we should be back to the openmrs version dictating the SPA version.
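For anyone unfamiliar with build-time constants: in a webpack setup like the one linked above, such a constant is typically injected with DefinePlugin. A rough sketch only; the variable name and config shape here are illustrative, not the actual openmrs build configuration:

```typescript
// webpack.config.ts fragment (illustrative): bake the app-shell version into
// the bundle at build time so it can be reported at runtime.
import webpack from "webpack";
import { version } from "./package.json";

export default {
  plugins: [
    new webpack.DefinePlugin({
      // every occurrence of __VERSION__ in the source is replaced with the
      // literal version string during the build
      __VERSION__: JSON.stringify(version),
    }),
  ],
};
```

Because the substitution happens at build time, whatever package.json the build tool resolves is what ends up reported, which is exactly why the loose version range caused the mismatch described above.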
We don't have a good answer for that for prerelease versions (there are Git tags for actual releases). This is similar to the problem of tracing back a -SNAPSHOT artefact to its commit… except that it is (in principle) solvable: prerelease versions all have the GitHub run number as the -pre version, and the GitHub run number should be resolvable to a commit hash.
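To make that mapping concrete, here's a sketch (the version format is assumed from the description above, and the helper name is mine):

```typescript
// Assumed format: prereleases look like "5.2.0-pre.1487", where 1487 is the
// GitHub Actions run number of the build that published the artefact.
function prereleaseRunNumber(version: string): number | null {
  const match = /-pre\.(\d+)$/.exec(version);
  return match ? Number(match[1]) : null; // null for actual releases
}

console.log(prereleaseRunNumber("5.2.0-pre.1487")); // 1487
console.log(prereleaseRunNumber("5.2.0"));          // null (a real release)
```

The run number can then be matched to a commit by listing workflow runs via the GitHub REST API (`GET /repos/{owner}/{repo}/actions/runs`) and filtering on each run's `run_number` field, which sits alongside its `head_sha`.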
o3 is using (relatively) old versions of things because it was impossible to deploy for a while. Nothing should’ve changed recently though… except for the migration to Jetstream2?
That would most likely be me, except that I don’t know exactly what’s wrong.
The real issue was that we couldn’t promote builds to o3 for quite a while due to disk-space limits on the Bamboo agents and a rather disk-heavy previous build system. Moving things to Docker and using Docker tags was (supposed to) fix that. I don’t think we’ve yet deployed the new nightly images to dev3 to confirm that they work, which is a pre-requisite to getting both test3 (which only semi-exists) and o3 up and running.
Not an easy way, because we don't publish development builds anywhere. Initially, we were including sourcemaps in all the production builds as well (which is why they are referenced by DevTools), but we were getting complaints about the amount of bandwidth this added, so in this PR I removed them, which saved around 150 MB of disk space and a fair bit of bandwidth.
Source maps are still added when running in dev mode. This means you could build all of the apps in dev mode and then use openmrs assemble with a build configuration file using file: URLs to point to those dev builds (support for that is here).
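For reference, such a build configuration might look roughly like this (a sketch: the `frontendModules` field follows the assemble docs, but the module choice and paths are illustrative):

```json
{
  "frontendModules": {
    "@openmrs/esm-patient-chart-app": "file:../openmrs-esm-patient-chart/packages/esm-patient-chart-app"
  }
}
```

Each `file:` URL points at a locally built (dev-mode) package, so the assembled distribution carries the sourcemaps from those builds.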
So, I thought this was down to permissions errors, which I was able to fix by recreating the Docker volumes. However, I’m still having issues. To have a shared place to make sure we’re all reproducing the same environment, test3.openmrs.org has been (temporarily) hijacked to just run the nightly images of all three with no modifications.
@ibacher, I see test3 does respond properly now. I understand you were either able to fix it or it fixed itself?
We do need sourcemaps. They don't need to be included in production images. It is possible to serve them from a local machine for an app running in a production environment, as laid out e.g. here. Another approach is to produce two images: one for production and one for debugging with sourcemaps included. The most important thing is to store them, or be able to easily produce them, for the specific version run in production. Was the bandwidth an issue due to storing too much data, or from production users/devs complaining about the size of images?
This probably needs a longer discussion. Our standard practice for developing the frontend modules is to work on one module at a time, running locally with the rest of the app being served from dev3. In this flow, the source maps are loaded locally for the app that you are working on, so while developing an app, you should have access to the source maps for that app. (You can read more about the flow in the OpenMRS 3.0 dev guide). Basically, the idea isn't to have no source maps at all; it's to restrict source maps to the relevant ones.
This is purely a bandwidth issue; essentially, I was trying to cut down on the amount of data transferred from dev3 as this was costing some of our community volunteers quite a bit in bandwidth. I don’t see a way around this issue without changing the frontend dev workflow to use Docker images locally, in which case, bandwidth becomes a non-issue.
I'm just saying that we need to be able to debug the application as a whole, with all building blocks in place. It's going to get more and more complex, especially given the number of components, versions, and possible configurations. They all interact with one another and may cause issues that occur only when run together. This isn't about changing the standard practice for developing frontend modules, which is the way it should be.
All right! Then let's try to make it so that they are only fetched if you need them.
Yeah, I’m not trying to say that our current setup is ideal. Having source maps is useful, just trying to contextualise the problem that was being solved by not having them. If there’s a way in which we can get this working, I’m happy to do so. I think the trick is to find a way to make downloading the source maps explicitly opt-in.
So this seems to require running the following commands to get it to work:

```shell
docker-compose up -d
# wait for install to complete
docker-compose restart backend
# now the server works
```

It would be nice if we could avoid the need for the restart step. Not quite sure why that happens.
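If the wait is for the database/install step, one way to avoid the manual restart would be a healthcheck-gated dependency in the compose file. A sketch only: the service names and the check command here are guesses, not our actual compose file, and `depends_on` conditions require a Compose version that supports them:

```yaml
services:
  backend:
    depends_on:
      db:
        condition: service_healthy   # start the backend only once the db is ready
  db:
    healthcheck:
      test: ["CMD-SHELL", "mysqladmin ping -h localhost --silent"]
      interval: 5s
      timeout: 5s
      retries: 60
```

If the restart is instead needed because the install runs inside the backend container itself, this won't help, and the startup script would need its own wait loop.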
I'm also a bit stumped on the patient chart loading issue. If I run the patient-chart-app locally, everything works as expected… There are also no obvious errors, which makes it a bit hard to know where to start… Even if we had source maps, without an error to start from it's going to be a bit weird.
The problem seems to be in this component. In particular, the isLoadingPatient flag doesn't seem to be getting set to true, hence the loading widget showing forever.
From the Network tab, I can see it actually loads the patient 3 times, which seems a bit off…
The underlying cause of the breakage might be in another esm module, @ibacher (just thinking). Does our Docker instance allow us to override the version of any esm module running? If so, how can the version/instance be overridden?
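On the override question: the O3 devtools are built on the import-map-overrides library, so (assuming that's wired up in the shell you're running) a single module can be repointed from the browser console without touching the Docker image. A sketch; the localhost URL is illustrative and you'd need to serve a dev build of the module there first:

```typescript
// Shape of the relevant bit of the import-map-overrides API.
type ImportMapOverrides = { addOverride(name: string, url: string): void };

// Repoint one frontend module at a locally served build. Returns false when
// import-map-overrides isn't available (e.g. outside the browser).
function overridePatientChart(imo: ImportMapOverrides | undefined): boolean {
  if (!imo) return false;
  imo.addOverride(
    "@openmrs/esm-patient-chart-app",
    "http://localhost:8081/openmrs-esm-patient-chart-app.js"
  );
  return true; // reload the page for the override to take effect
}

// In the browser console you would call:
//   overridePatientChart((window as any).importMapOverrides);
```

That would let us swap in candidate versions of a suspect esm module one at a time against the otherwise unchanged environment, which is exactly the kind of whole-app debugging discussed above.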