QA for O3 + OHRI: Summary of main current QA initiatives

This post shares our current thinking on how we are assuring quality for the O3 RefApp and for our first packages, the “OHRI packages” (which cover COVID, HIV, and other clinical areas). Feedback and suggestions are welcome! Huge thanks to @christine, @eudson, @dkayiwa, and others for their input.

O3: Improve Build Pipeline and Release Workflow

  • Goals: We want it to be easy for devs to commit their feature work into the O3 dev environment, and we want to regularly test new and evolving O3 features in a test3 environment (right now we only have dev and demo (production)). That way we can say “this combination of apps is good to go as a RefApp kit,” and we can explicitly release and celebrate feature improvements as they mature into our demo (production) environment.
  • Problem: Our current O3 build pipeline is broken. As a result, our dev→demo pipeline doesn’t run (so demo tends to go months without being updated), and there’s no point in setting up a “test3” environment until this is fixed.
  • Plan: Both @rafal and @ibacher are investing time in fixing the O3 dev→demo pipeline, and will add a test environment once that’s done. Details & updates here.
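To make the intended promotion flow concrete, here is a sketch of what such a pipeline could look like as a GitHub Actions workflow. Everything here is a hypothetical illustration (job names, the `deploy.sh` script, and environment names are assumptions, not the actual pipeline @rafal and @ibacher are building); it only shows the dev → test3 → demo gating idea:

```yaml
# Hypothetical promotion workflow sketch (not the real O3 pipeline):
# every merge deploys to dev3; approval gates promote the same build
# to test3 and then to demo (production).
name: o3-promote
on:
  push:
    branches: [main]
jobs:
  deploy-dev3:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh dev3   # hypothetical deploy script
  deploy-test3:
    needs: deploy-dev3
    environment: test3   # GitHub environment with required reviewers
    runs-on: ubuntu-latest
    steps:
      - run: ./scripts/deploy.sh test3
  deploy-demo:
    needs: deploy-test3
    environment: demo    # second approval gate before production
    runs-on: ubuntu-latest
    steps:
      - run: ./scripts/deploy.sh demo
```

The key design point is that the same artifact moves through each stage, with human sign-off (GitHub environment protection rules) before test3 and demo.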

Tip: We should end up with features flowing through a more fluid dev3 → test3 → demo workflow.

O3: Performance (Loading Speeds)

  • This is my #1 concern about O3 right now.
  • Goals: O3 needs to be fast to load and not require much bandwidth; it should feel as fast as the 2.x RefApp. Details & updates here.
  • Problem: We know there are places where O3 is slow, but we haven’t had either (1) an inventory of which places are having problems, or (2) a senior-enough frontend developer with the time to dig deeper into framework-related causes.
  • Plan: For (1): we now have a single Jira Epic for tracking known performance issues: O3-1162. OMRS fellows @tendayi and @jnsereko will both add to it as they uncover more bugs and slow apps. For (2): this is part of the reason for this job posting: Senior Frontend Engineer.

O3: Improve E2E Test Setup

  • Goal: We want O3 RefApp contributors to have confidence that their contributions won’t break things. We also want to confidently release improvements to ESMs without extensive manual testing, since that kind of testing causes release delays that eventually sap interest in getting features completed.

  • Problem: Our O3 Cypress tests are few, broken in places, and difficult to troubleshoot because they are knotted together with a tangle of GitHub Actions in the qaframework repo.

  • Plan: Thanks to @pasindur2 for taking this on as his GSoC project. He has already separated the tests out into an O3-specific repo, openmrs-test-3refapp. Huge thanks to @jayasanka and @bistenes for mentoring this project. For example, Pasindu recently worked on automated tests for Patient Registration and Patient Search. Details & updates here.

Tip: If you’re unfamiliar with E2E frontend tests, @jayasanka, @heshan, @piumal1999, @kumuditha, @pasindur2 and @anjisvj made a great video that explains O3 E2E tests in general: OpenMRS 2021 - QA Automation in 3.x - YouTube
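For a sense of what a happy-path E2E test looks like, here is a minimal Cypress spec in the spirit of the Patient Search tests. It runs under the Cypress runner, and the selectors, credentials, and routes are illustrative assumptions, not copied from openmrs-test-3refapp:

```javascript
// Hypothetical happy-path E2E sketch for Patient Search.
// Selectors and routes are illustrative; see openmrs-test-3refapp
// for the real specs.
describe('Patient Search (happy path)', () => {
  it('finds an existing patient by name', () => {
    // Log in and land on the home page (base URL comes from the Cypress config)
    cy.visit('/openmrs/spa/home');
    cy.get('#username').type('admin');
    cy.get('#password').type('Admin123{enter}');

    // Search for a known test patient
    cy.get('[name="searchbox"]').type('John Doe');

    // Assert the patient appears in the search results
    cy.contains('John Doe').should('be.visible');
  });
});
```

The value of tests like this is that they exercise the whole stack (frontend, backend, database) the way a user would, so a regression anywhere in the chain fails the build before release.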

O3: Manual QA Checklist & Taking Stock of Current O3 RefApp Bugs

  • Goal: We want our O3 RefApp widgets to be in a relatively stable, good looking state, and we want to have a single place/protocol we can reference when we want to manually double-check that things are performing as expected.
  • Problem: We know there are numerous O3 RefApp widgets with issues, or not fully built out to the designs that were user-tested, but we don’t have a single inventory of the known bugs and deficiencies.
  • Plan: @tendayi.mutangadura is creating this manual O3 QA Testing Spreadsheet, which we can use for periodic manual smoke tests and pre-release regression testing (kudos to @zacbutko for starting this initiative). We are referencing regression-test checklists that implementers have shared with us, so that critical areas used in real-world 2.x environments are included in our manual QA list. Tendayi is also reviewing our inventory of O3 RefApp designs in Zeplin to check for explicit feature gaps. This is helping @tendayi.mutangadura onboard to O3 in his Product Management fellowship, and it looks like folks such as @irenyak1 and @randila will be able to take over some of this manual testing.
    • Currently we are doing this manual testing in dev3, but we will move it to test3 once that environment is available.

Going Deeper in QA Automation

  • Goal: Implementers should be able to trust that OpenMRS products are well tested, and that trust should be well founded.
  • Problem: Our historic community-based testing has relied on roughly annual manual reviews that are often understaffed, both for testing and for the actual bug fixes, because by the time bugs are caught, the original devs involved have long since moved on. This is why full-stack automation is so key, especially for us as an OSS community.
  • Plan: We have had a gap in a focused, senior QA Developer role for the last year, since Joseph’s incredibly tragic loss. Massive kudos to @dkayiwa for shouldering extra mentorship and coaching on the QA Support Team over the last year. We need a senior, experienced QA technical lead to help us go deeper into the additional, closer-to-code-level test automation we should be putting in place. This is the reason for this job posting: QA Automation Engineering Lead.

OHRI-Specific

OHRI: Setup E2E Automated Tests

We have found that E2E UI-based automated tests have already paid off for us in OpenMRS (see Recent QA Automation Success Story!), and the OHRI team is interested too. The OpenMRS BDD-based Frontend Test Framework can be leveraged for this purpose, and some Cypress tests already written for O3 may be reusable for the OHRI package. We will try 2–3 simple happy-path E2E tests for major OHRI functionality, starting with items that OHRI devs touch in their day-to-day work. Next steps TBC with @eudson & @christine; I believe @larslemos has also been doing some smoke tests for OHRI environments.

O3 + OHRI: Finding out what happens when we combine them

  • Goal: A distro of OpenMRS should be able to add an OHRI package without their O3 apps being adversely affected.
  • Problem: Manual testing of the OHRI package has so far focused on environments where the OHRI package itself was the only EMR functionality.
  • Plan: Thanks to support from @alaboso, @ibacher, and @eudson, a representative “combo” testing environment was set up at ohri.o3.openmrs.org. Recently @christine (OMRS) and Veronica Muthee (UCSF) worked together to manually test the look, feel, and workflows in this combo setting. You can see a triaged inventory of the issues identified in their manual QA review here. They also cross-checked whether the identified issues exist in the separate OHRI demo vs. O3 dev environments. Next, they are logging known bugs into the relevant Jira projects (e.g., OHRI vs. OpenMRS).
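Mechanically, “adding an OHRI package” to a distro mostly means including the OHRI frontend modules alongside the standard O3 apps in the distro’s assemble configuration, so a combo environment like ohri.o3.openmrs.org is effectively testing a configuration along these lines. A minimal sketch, where the @ohri/* names are placeholders rather than verified npm package names:

```json
{
  "frontendModules": {
    "@openmrs/esm-login-app": "next",
    "@openmrs/esm-patient-chart-app": "next",

    "@ohri/esm-ohri-hiv-app": "latest",
    "@ohri/esm-ohri-covid-app": "latest"
  }
}
```

The combo-testing question then becomes whether the extra modules in a config like this degrade the standard O3 apps (styling, performance, shared state), which is exactly what the manual review above is probing.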

Any further reactions, questions, or input? @ibacher @bistenes @zacbutko @mksrom @eudson @mwaririm @jayasanka do you think we’re on track here?

I also walked through this with @dkayiwa and @burke on a recent TAC call, and we can review it with the whole O3 Squad on this week’s Thursday 4pm EAT squad call.