Help to unblock an issue with the Reporting & Reporting REST modules in O3?


A quick note on setup: while we have yarn.lock files in the root of our monorepos, those may or may not determine the versions used, because at runtime we only ever load the apps themselves and the components bundled into them. Most of the apps are built using this webpack configuration, though the version of it will vary depending on the version declared in the lockfile (it doesn’t change much, so this is usually a non-issue).

Most of our apps have an analyze script which can be used to examine the built bundle in some detail (though it’s mostly useful for seeing what ends up in what chunk and the overall size).
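For example, something like this (just a sketch; the exact script name and tooling come from each repo’s package.json, so check there if it differs):

# From the root of a frontend module package
yarn install
yarn analyze
# Typically this runs a build wired up to something like webpack-bundle-analyzer,
# so you can see which modules land in which chunk and how big each chunk is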

That’s helpful, thanks! Apparently, o3.openmrs.org runs the 3.3.1 version of esm-framework, whereas the latest version used when you build locally is 3.4.1-pre.139, which might be broken.

Even when you force the openmrs build tool to use 3.3.1, it fetches the latest 3.4.1-pre.139, as it is instructed to in openmrs-esm-core/package.json at v3.3.1 · openmrs/openmrs-esm-core · GitHub

I’m not sure how to force the build tool to use version 3.3.1 of esm-framework.

Hmmm… this postversion script is supposed to ensure it always points to a fixed version, but obviously, that’s not working as intended… I guess they need to be moved back into the version hook?

Yeah, actually, it is. The problem is that the REST module reports its version number like this: 2.36.0.7803f0, which is not a valid semver version. The other issue I see is that some of the backend modules declare a dependency on FHIR2 at ^1.2 when what they really want is ^1 or 1.x or (in most cases) *.
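To illustrate (just a sketch using the semver CLI from the node-semver package; the 1.x/2.x version numbers below are only examples):

# A valid semver string is echoed back; an invalid one prints nothing and exits 1
npx semver 2.36.0.7803f0   # invalid: semver allows only three numeric segments
npx semver 2.36.0          # valid

# Range checks: ^1.2 rejects older 1.x releases (and anything 2.x+),
# whereas ^1 / 1.x accept any 1.x release and * accepts everything
npx semver -r "^1.2" 1.1.0 1.2.3 2.0.0   # prints only 1.2.3
npx semver -r "^1" 1.1.0 1.2.3 2.0.0     # prints 1.1.0 and 1.2.3
npx semver -r "*" 1.1.0 1.2.3 2.0.0      # prints all three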


Thanks @ibacher for helping me out here to better understand things in o3.

I discovered that window.spaVersion was returning the @openmrs/esm-devtools-app version defined in spa-build-config.json and not the version set for the npx openmrs@SPA_VERSION build.

There are a lot of moving pieces in the o3 framework, and it’s not clear to me how to determine the exact versions of the esm apps used in a specific environment. To make things harder, o3.openmrs.org is now broken as well for patient search and registration… things that worked a few days ago. I don’t even see a quick way to determine which commit broke them.

Is there a way to determine the exact versions of esm apps at runtime that end up in the final bundle for a distro?

How can we quickly connect a version to a specific commit?

Is there anyone in the community who does know what is broken in o3 or in local builds of distro 3.x and simply does not have time to address these issues? Or do we have a bigger problem that we aren’t really aware of? Personally, I don’t have a clue what’s broken or what needs to be debugged, since I don’t know which versions to debug and I don’t know a way to debug the final build.

Working on that (see this commit).

Technically, spaVersion is actually set here, which should be this build-time constant, so it should be the version of the app shell that’s used. You pointed out that the openmrs tool was depending on a relative version rather than a strict version; it was the dependency on a strict version that allowed the version of the openmrs tool you ran to predict the resulting spaVersion. I’ve fixed that, so we should be back (for newer versions of the openmrs tool) to the version of openmrs dictating the SPA version.
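If you want to double-check what a given version of the tooling would pull in, npm can tell you (a sketch; I’m assuming the app shell and framework are listed among its regular dependencies):

# What does openmrs@3.3.1 pin for its dependencies (including the app shell)?
npm view openmrs@3.3.1 dependencies

# Which published versions of the framework exist?
npm view @openmrs/esm-framework versions --json | tail -n 10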

We don’t have a good answer for that for prerelease versions (there are Git tags for actual releases). This is similar to the problem of tracing a -SNAPSHOT artefact back to its commit… except that it is (in principle) solvable (prerelease versions all have the GitHub run number as the -pre version, and the GitHub run number should be resolvable to a commit hash).
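Something like this should do it (just a sketch using the GitHub CLI; it assumes the -pre number is the run number of a workflow in openmrs-esm-core, e.g. 139 for 3.4.1-pre.139):

# Look up the workflow run with run number 139 and print its commit hash
gh api "repos/openmrs/openmrs-esm-core/actions/runs?per_page=100" \
  --jq '.workflow_runs[] | select(.run_number == 139) | .head_sha'
# (Older runs may require paging further back, e.g. with --paginate)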

o3 is using (relatively) old versions of things because it was impossible to deploy for a while. Nothing should’ve changed recently though… except for the migration to Jetstream2?

That would most likely be me, except that I don’t know exactly what’s wrong. :grin:

The real issue was that we couldn’t promote builds to o3 for quite a while due to disk-space limits on the Bamboo agents and a rather disk-heavy previous build system. Moving things to Docker and using Docker tags was (supposed to) fix that. I don’t think we’ve yet deployed the new nightly images to dev3 to confirm that they work, which is a pre-requisite to getting both test3 (which only semi-exists) and o3 up and running.

It should be fairly easy to append the short git commit hash (8 chars) to the prerelease version (e.g. 3.1.1-pre.123.a3fascd4), if that doesn’t break ordering, or to set it in some variable in the final JS package when building; see e.g. oclweb2/set_build_version.sh at master · OpenConceptLab/oclweb2 · GitHub
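Roughly what I mean (a sketch for a GitHub Actions build step; the base version and variable names are only illustrative):

# Append the run number and the short commit hash to the prerelease version
BASE_VERSION=3.1.1
SHORT_SHA=$(git rev-parse --short=8 HEAD)
PRE_VERSION="${BASE_VERSION}-pre.${GITHUB_RUN_NUMBER}.${SHORT_SHA}"
echo "Building version ${PRE_VERSION}"
# Ordering should be preserved: semver compares dot-separated prerelease
# identifiers left to right, so the numeric run number still decides precedence
# before the appended hash is ever considered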

It’s possible that I tested on dev3 recently and mixed it up with o3, which was indeed broken for longer.

Yes, we have the new nightly images deployed to dev3 with Reference Application - Distribution 3.x 168: Build result summary - OpenMRS Bamboo. It is running images tagged with dev3. It is not deploying the frontend, as that is served by the CDN, and the frontend is broken for the current distro build…

This is great! Thanks!

What about debugging the complete set of apps included in a distro? Do we have a way to build a distro with source maps included so it can be properly debugged in browsers?

That is exactly what I was asking for a few weeks ago!

Not an easy way, because we don’t publish development builds anywhere. Initially, we were including sourcemaps in all the production builds as well (which is why they are referenced by DevTools), but we were getting complaints about the amount of bandwidth this added, so in this PR I removed them, which saved around 150 MB of disk space and a fair bit of bandwidth.

Source maps are still added when running in dev mode. This means you could build all of the apps in dev mode and then use openmrs assemble with a build configuration file using file: URLs to point to those dev builds (support for that is here).
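A rough sketch of what that could look like (the paths, module name, and config keys here are my assumption of how the assemble config is laid out, so adjust to the real format):

# Build the app you care about in development mode so it keeps its source maps
# (check that repo's package.json for its exact build setup)
cd openmrs-esm-patient-chart/packages/esm-patient-chart-app
npx webpack --mode development
cd -

# Point the assemble config at the local dev build via a file: URL
cat > spa-build-config.json <<'EOF'
{
  "frontendModules": {
    "@openmrs/esm-patient-chart-app": "file:./openmrs-esm-patient-chart/packages/esm-patient-chart-app"
  }
}
EOF

npx openmrs assemble --config spa-build-config.json --target ./spa-dist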

So, I thought this was down to permissions errors, which I was able to fix by recreating the Docker volumes. However, I’m still having issues. To have a shared place to make sure we’re all reproducing the same environment, test3.openmrs.org has been (temporarily) hijacked to just run the nightly images of all three with no modifications.

Right now, the backend starts, but neither the Legacy UI nor the REST web services appear to be responding as I would expect. I.e., a GET request to https://test3.openmrs.org/openmrs/ws/rest/v1/session returns:

HTTP/1.1 404
Connection: keep-alive
Content-Language: en
Content-Length: 682
Content-Type: text/html;charset=utf-8
Date: Wed, 03 Aug 2022 18:15:01 GMT
Server: nginx/1.18.0 (Ubuntu)
Set-Cookie: JSESSIONID=0FE090100F0DEC733A634BC0E2ECFC75; Path=/openmrs; HttpOnly

Similarly a request for https://test3.openmrs.org/openmrs/index.htm results in a 404 response whereas I’d expect it to serve the Legacy UI.

Conversely, though, a GET request to, e.g., https://test3.openmrs.org/openmrs/ws/fhir2/R4/metadata returns a response as I would expect (response is a bit big to reproduce here), which does imply that OpenMRS is running…

Here is a Gist containing the startup log.

@ibacher, I see test3 does respond properly now. I understand you were able to fix it or it fixed itself :slight_smile:

We do need sourcemaps. They don’t need to be included in production images. It is possible to serve them from a local machine for an app running in a production environment, as laid out e.g. here. Another approach would be to produce two images: one for production and one for debugging, with sourcemaps included. The most important thing is to store them, or be able to easily produce them, for the specific version run in production. Was the bandwidth an issue due to storing too much data, or from production users/devs complaining about the size of the images?

This probably needs a longer discussion. Our standard practice for developing the frontend modules is to work on one module at a time, running it locally with the rest of the app served from dev3. In this flow, the source maps are loaded locally for the app you are working on, so while developing an app, you should have access to its source maps. (You can read more about the flow in the OpenMRS 3.0 dev guide.) Basically, the idea isn’t to have no source maps at all; it’s to restrict them to the relevant ones.
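For reference, that flow is roughly this (a sketch; most repos wrap it in a yarn start script, and dev3 is just the backend I’m assuming here):

# From inside a frontend module repo: serve only this app locally, with its
# source maps, while everything else is provided by the dev3 environment
npx openmrs develop --backend "https://dev3.openmrs.org/"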

This is purely a bandwidth issue; essentially, I was trying to cut down on the amount of data transferred from dev3 as this was costing some of our community volunteers quite a bit in bandwidth. I don’t see a way around this issue without changing the frontend dev workflow to use Docker images locally, in which case, bandwidth becomes a non-issue.

I’m just saying that we need to be able to debug the application as a whole, with all building blocks in place. It’s going to get more and more complex, especially given the number of components, versions, and different possible configurations. They all interact with one another and may cause issues that occur only when run together. I’m not trying to change the standard practice for developing the frontend modules, which is the way it should be.

All right! Then let’s try to make it so that they are only fetched if you need them.

Yeah, I’m not trying to say that our current setup is ideal. Having source maps is useful; I’m just trying to contextualise the problem that was being solved by not having them. If there’s a way we can get this working, I’m happy to do so. I think the trick is to find a way to make downloading the source maps explicitly opt-in.

So this seems to require doing the following commands to get it to work:

docker-compose up -d
# wait for install to complete
docker-compose restart backend
# now the server works

It would be nice if we could avoid the need for the second line. Not quite sure why that happens.

I’m also a bit stumped on the patient chart loading issue. If I run the patient-chart-app locally, everything works as expected… There are also no obvious errors, which makes it a bit hard to know where to start… Even if we had source maps, without an error to start from it’s going to be a bit awkward.

The problem seems to be in this component. In particular, isLoadingPatient doesn’t seem to be getting set to true, hence the loading widget showing forever.

From the Network tab, I can see it actually loads the patient 3 times, which seems a bit off…


@dkigen any thoughts on this?


The underlying cause of the breakage might be in another esm module, @ibacher (just thinking). Does our Docker instance allow us to override the version of any esm module running? If so, how can the version/instance be overridden?

cc @zacbutko @vasharma05


Thanks for the reply, @ibacher, at The Patient chart component is loading forever in local (docker) instance. - #10 by ibacher
