Many of us want to see O3 performance improve (load times, bandwidth requirements; see the separate footnote re. stress tests below). Users need the O3 UX to be faster than it is now: at least as fast as the RefApp 2.x experience, or better. (E.g., one full second to load a page is not acceptable.)
We need more focused energy to identify and work on O3 performance.
Problem #1: Performance problems are technically hard.
The main blocker is still that performance problems are notoriously among the most difficult programming problems to troubleshoot. Troubleshooting performance issues in the O3 frontend requires a solid grasp of the architecture and a deep understanding of, e.g., React, import maps, and the various bits of machinery we use to load code, just as I would expect troubleshooting backend performance issues to require at least some understanding of Spring, servlets, etc.
Solution: We still do not have a solution, nor a person with dedicated time to address O3 performance.
To be clear: it is unreasonable to expect anyone in a fellowship position to solve O3 performance overall. I am hoping this thread will help alert folks to this need. In the meantime, since we have a wonderful QA Engineering fellow (@jnsereko) for the next ~8 months, what can we start doing now? This leads to Problem #2:
Problem #2: We need an inventory of where performance issues are happening.
We do have an Epic for O3 Performance here: O3-1162
O3-1179 is a good example of a performance issue that needed to be identified and tracked down to find where the problem was coming from.
But this epic is very minimal. We don't have a more exhaustive list showing which areas of O3 we have reviewed and which places are clearly slow. @jnsereko, I'm hoping this can be part of your work. (FWIW, ICYMI, @bistenes made this helpful tutorial video you can reference on how to use browser-based profiling tools to identify performance problems in O3.)
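In the same spirit as that video, here's a minimal sketch of bracketing a suspect interaction with the browser's User Timing API, so it shows up as a named span in the DevTools Performance panel. Everything here is illustrative: `loadPatientChart` is a stand-in for whatever slow operation you're profiling, not a real O3 function.

```ts
// Sketch only: User Timing marks make a slow interval visible as a labeled
// span in the DevTools Performance profile (and readable programmatically).
async function loadPatientChart(patientUuid: string): Promise<void> {
  // Stand-in for the real work being profiled.
  await fetch(`/openmrs/ws/rest/v1/patient/${patientUuid}`);
}

async function timedLoad(patientUuid: string): Promise<void> {
  performance.mark("chart-start");
  await loadPatientChart(patientUuid);
  performance.mark("chart-end");
  performance.measure("patient-chart-load", "chart-start", "chart-end");
  const [entry] = performance.getEntriesByName("patient-chart-load");
  console.log(`patient chart load took ${entry.duration.toFixed(0)} ms`);
}
```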
Solution: Let's discuss here in this thread how to kick off his work.
@bistenes suggested that “we just want automated speed tests—tests that go through automated processes and report how long everything takes. But actually the way to accomplish that is probably just to make the existing E2E tests report how long everything takes.”
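For concreteness, here's a rough sketch of what that could look like if the tests are written with Playwright. The URL, the selector, and the 1-second budget are my assumptions for illustration, not the actual test code:

```ts
import { test, expect } from "@playwright/test";

// Sketch: an existing E2E flow that also reports how long it took.
test("login page loads within budget", async ({ page }) => {
  const start = Date.now();
  await page.goto("https://dev3.openmrs.org/openmrs/spa/login");
  await expect(page.getByRole("heading", { name: /log ?in/i })).toBeVisible();
  const elapsedMs = Date.now() - start;
  console.log(`login page ready in ${elapsedMs} ms`);
  // Soft assertion: the report flags slowness without hard-failing the run.
  expect.soft(elapsedMs).toBeLessThan(1000);
});
```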
@ibacher suggested “to look into setting up some automated stress / load tests, e.g., automated scripts to go through workflows we envision users doing. This would hopefully give us some framework for discovering which interactions were taking longer than we would like, i.e., for identifying some of the hot-spots we should look into.”
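As one possible shape for that, here's a hedged sketch using k6 (my tool choice for illustration, not something settled in this thread); the endpoint and thresholds are placeholders:

```ts
import http from "k6/http";
import { check, sleep } from "k6";

// Sketch of a k6 load test: ramp simulated users through a single backend
// call. A real version would script whole workflows, not one endpoint.
export const options = {
  stages: [
    { duration: "1m", target: 20 }, // ramp up to 20 virtual users
    { duration: "3m", target: 20 }, // hold steady
    { duration: "1m", target: 0 },  // ramp down
  ],
};

export default function () {
  const res = http.get("https://dev3.openmrs.org/openmrs/ws/rest/v1/session");
  check(res, {
    "status is 200": (r) => r.status === 200,
    "responds in < 500 ms": (r) => r.timings.duration < 500,
  });
  sleep(1);
}
```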
So: we could start by having Joshua look at the existing E2E test suite and see whether any tests are blocked due to performance hot-spots / time-outs. Mind you, only a few are actually passing right now. (I'm thinking of the 3.x RefApp Workflow Tests here.)
Overall, we need to make O3 performance more of a focus in our O3 squad, which will require time and attention, possibly at the expense of some of our current focus on adding new features.
*Footnote re. stress tests: I've been asked lately, "What kind of stress testing has been done on O3?" As @bistenes recently mentioned to me, stress/load tests are only relevant for backend performance. Backend performance matters to the overall performance of the app, but we expect it is not a significant contributor to the slowness we currently experience in O3.
(I should also give credit to @vasharma05, who has been helping look at ~2 of the known issues in O3 performance. There may also be a way to combine this with the work @pasindur2 is doing on O3 E2E tests; I'm not sure.)
Alright, well, we have both a fellow and a GSoC student primed to work on the 3.x tests. Obviously, the current tests are quite flaky. A few things:
I think we should refactor things to extract the 3.x-related tests into their own repository rather than having them as part of the QA Framework (which, frankly, has too many GitHub Actions flows), say into openmrs/openmrs-contrib-qa3 (I'm bad at naming things, so open to suggestions).
We should then work to get the tests more stable. Ideally, these tests are run not against dev3, but against a Dockerised instance of the 3.x Reference Application; dev3 is intended to be unstable, and I would not expect tests to work reliably against it. (See the config sketch after this list for one way that could look.)
We should work to add more tests. The current suite of tests is pretty bare in most areas.
We should come up with some prioritised workflows that can be done with the RefApp (and if we need to add new metadata, new data, etc., we should look into ensuring we have that available) and get those coded as tests.
We should work on making the feedback from these tests more visible to developers. If we can get things working reliably, this might be as simple as putting a big red error on the README of the 3.x frontend projects when those tests fail (e.g., a GitHub Actions status badge, which turns red when the workflow fails).
Once we have that working, we should be able to extract some reporting from the pages along the lines @bistenes suggested and that should give us an idea of what pages / sequences to focus in on.
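On the Dockerised-instance point above, here's one sketch of how the test runner could boot a local RefApp itself, assuming Playwright; the compose file name and port are my assumptions, not settled choices:

```ts
// playwright.config.ts (sketch): start a local RefApp via Docker Compose
// before the suite runs, instead of pointing the tests at dev3.
import { defineConfig } from "@playwright/test";

export default defineConfig({
  use: { baseURL: "http://localhost:8080" },
  webServer: {
    command: "docker compose -f docker-compose.refapp.yml up", // hypothetical file
    url: "http://localhost:8080/openmrs/spa/login",
    timeout: 5 * 60 * 1000, // the RefApp can take a while to come up
    reuseExistingServer: true,
  },
});
```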
This includes the 2.x RefApp branches, which have different E2E test coverage.
The same thing Ian mentioned re. the QA Framework repo having too many GitHub Actions flows is likely to happen again. GHA is such a painful UI to navigate as it is; I imagine it would be nice to separate the Actions for the actual RefApp from the Actions that exist only for test coverage.
@ibacher thank you, this is an AWESOME response!! (Side note: I don't know how you find time to be such a thoughtful and well-written person in so many different technical areas, but I really admire you.)
Agreed with this. No better name suggestion. (Is it important to us that we maintain the word “contrib”?)
What about a test environment? E.g., test3.openmrs.org? (Maybe I'm partly missing your point re. the Dockerised instance.)
Agreed. I've been hesitant to push for more coverage in many places b/c many features are not yet stable enough, but we should do another inventory of all features and decide where we want automation. Actually, I'm hoping @tendayi.mutangadura can help with this during his QA of O3, and this definitely seems relevant to @pasindur2's work.
Love this idea re. error tags in the README of the relevant esm apps. Right now I don't think many O3 devs look at the current E2E Pass/Fail Dashboard at all, but that's fair b/c it feels like such a separate area from the current O3 dev workflow.
Beginning to think that our next key step here is meeting w/ Pasindu and Joshua to break down the tasks…
My main reason for this was that—as a somewhat longer-term plan—it would be nice to be able to run these tests against PRs to the various monorepos. Having it unmoored from the RefApp release cycle seems better for that purpose.
Thank you.
With >200 repos, I think the naming conventions help make it easier to figure out how a repo fits into the ecosystem: e.g., openmrs-module- indicates a backend module, openmrs-esm- a frontend module, and openmrs-contrib- essentially anything else in the community that isn't directly part of the software. But it's not that important.
A test environment is definitely on my radar to add, but it would almost be nicer to have the tests running against a per-test instance of the RefApp (like we do with most of the rest of the QA Framework)—that, hopefully, means fewer network gremlins interfering with tests and that we can use a test environment more for manual UAT rather than automated tests.
I think so too. Though I do see that @pasindur2 has taken some initiative here!
Agreed that naming conventions are helpful for understanding 200+ repos, but they're less helpful if we put everything under a "contrib" category (which is intended for supporting software).
Testing is more sustainable when it's part of the developers' process (some would argue it's the first step when writing any code). If we can't get the tests inside a repo used by the devs writing the code being tested, then let's at least avoid labeling the test repos in a way that makes them "not part of the OpenMRS software."
I actually prefer the convention of having "-test-"; I was just following openmrs-contrib-qaframework. I'd be hesitant to call this "test-refapp", though, for a few reasons. First, it's (primarily) frontend testing, and there are still valuable things being done in the QA Framework that aren't part of the 3.x tests, related to ensuring the platform works. Second, the 2.x RefApp is tested by the QA Framework, and I think that's fine where we have a lot of tests that share the same code base.
My plan is to build the E2E test framework from a clean state, since the existing test scenarios are also currently failing. Once the CI pipeline implementation works, I will be able to fix the other test cases as well. Then I can make a PR to restore those tests. cc: @ibacher
That folder also has tests for the subscription module, which shouldn't be migrated to the O3 repo (though maybe new ones could be written after the project to create a microfrontend for the OCL module is done).
Yes, first make a PR to the newly created repo.
Later on, you could remove any remaining 3.x-related tests or other files from the old repo.