This is to share our current thinking on how we are assuring quality for the O3 RefApp and for our first packages, the “OHRI packages” (which include COVID, HIV, and other clinical areas). Feedback/suggestions welcome! Huge thanks to @christine, @eudson, @dkayiwa, and others for their input.
- Goals: We want it to be easy for devs to commit their feature work into the O3 dev environment. We want to be able to regularly test new/evolving O3 features in a test3 environment (right now we just have dev and demo, i.e. production). That way we can say “this combo of apps is good-to-go as a RefApp kit”, and we can explicitly release & celebrate feature improvements as they mature into our demo (production) environment.
- Problem: Our current O3 build pipeline is broken. As a result, the dev→demo pipeline is broken (so demo tends to go months without being updated), and there’s no point in setting up a “test3” environment until this is fixed.
- Plan: Both @rafal and @ibacher are investing time in fixing the O3 dev→demo pipeline process, and will add a test environment when done. Details & updates here.
Tip: We should end up with features going through a more fluid workflow like this:
- This is my #1 concern about O3 right now.
- Goals: O3 needs to load fast and not require much bandwidth - it should feel as fast as the 2.x RefApp. Details & updates here.
- Problem: We know there are places where O3 is slow, but we haven’t had either (1) an inventory of which places are having problems, or (2) a senior-enough frontend developer with the time to dig into framework-related causes.
- _Plan:_ For (1): We now have a single Jira epic for tracking known performance issues: O3-1162. OMRS fellows @tendayi and @jnsereko will both be adding to this as they uncover more bugs/slow apps. For (2): This is part of the reason for this job posting: Senior Frontend Engineer.
- Goal: We want O3 RefApp contributors to have confidence that their contributions won’t break things. We also want to confidently release improvements to ESMs without needing extensive manual tests - since these can cause release delays that eventually lead to loss of interest in getting a feature completed.
- Problem: Our O3 Cypress tests are few, broken in places, and difficult to troubleshoot because they are knotted together with a chaos of GitHub Actions in the qaframework repo.
- Plan: Thanks to @pasindur2 for taking this on as his GSoC project. He has already separated the tests out into the O3-specific repo, openmrs-test-3refapp. Huge thanks to @jayasanka and @bistenes for mentoring this project. For example, Pasindu recently worked on automated tests for Patient Registration and Patient Search. Details & updates here.
Tip: If you’re unfamiliar with E2E frontend tests, @jayasanka, @heshan, @piumal1999, @kumuditha, @pasindur2 and @anjisvj made a great video that explains O3 E2E tests in general: OpenMRS 2021 - QA Automation in 3.x - YouTube
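For readers who haven’t seen one before, here is a rough sketch of what an O3-style Cypress E2E test for patient search could look like. Note this is illustrative only: the selectors, routes, credentials, and the `cy.login` custom command are hypothetical assumptions, not copied from the actual openmrs-test-3refapp repo.

```javascript
// Hypothetical sketch of a Cypress E2E spec for O3 patient search.
// All selectors, URLs, and credentials below are illustrative assumptions.
describe('Patient Search', () => {
  beforeEach(() => {
    // Assumes a custom login command registered in cypress/support
    cy.login('admin', 'Admin123');
    cy.visit('/openmrs/spa/home');
  });

  it('finds a patient by name', () => {
    cy.get('[data-testid="searchPatientIcon"]').click();
    cy.get('[data-testid="patientSearchBar"]').type('John');
    // Assert that at least one matching result is rendered
    cy.get('[data-testid="patientSearchResult"]')
      .should('have.length.at.least', 1)
      .first()
      .should('contain.text', 'John');
  });
});
```

The value of tests like this is that they exercise the real rendered frontend (login, navigation, typing, assertions on visible results), so a regression in any ESM along that path fails the build before it reaches demo.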
- Goal: We want our O3 RefApp widgets to be in a relatively stable, good-looking state, and we want a single place/protocol we can reference when we want to manually double-check that things are working as expected.
- Problem: We know numerous O3 RefApp widgets have issues, or are not fully completed relative to the designs that were user-tested, but we don’t have a single inventory of the known bugs/deficiencies.
- Plan: @tendayi.mutangadura is creating this manual O3 QA Testing Spreadsheet, which we can use for periodic manual smoke tests and pre-release regression testing. (Kudos to @zacbutko for starting this initiative.) We are referencing regression-test checklists that implementers have shared with us, so that critical areas used in real-world 2.x environments are included in our manual QA list. Tendayi is also reviewing our inventory of O3 RefApp designs in Zeplin to check for explicit feature gaps. This is helping him onboard to O3 in his Product Management fellowship, and it looks like folks like @irenyak1 and @randila will be able to take over some of this manual testing.
- Currently we are doing this manual testing in dev3, but we will move it to test3 once that environment is available.
- Goal: Implementers trust that OpenMRS products are well-tested. This should be a reasonable assumption.
- Problem: Our historic community-based testing has relied on roughly annual manual reviews that are often understaffed, both for testing and for actual bug fixes - because by the time the bugs are caught, the original devs involved have long since moved on. This is why full-stack test automation is so key, especially for us as an OSS community.
- Plan: We have had a gap in a focused, senior QA Developer role for the past year, since Joseph’s incredibly tragic loss. Massive kudos go to @dkayiwa for shouldering extra mentorship and coaching over the last year in the QA Support Team. We need a senior, experienced QA technical lead to help us go deeper into the additional, closer-to-code-level test automation we should be putting in place. This is the reason for this job posting: QA Automation Engineering Lead.