OpenMRS Platform and Ref App QA

Continuing the discussion from Last minute rudimentary testing for Platform 2.3.2:

A few questions for the QA team:

  1. What are the results of testing across platform and Ref app?
  2. What is the test strategy and current quality metrics?
  3. What has changed in the process since the last platform and Ref App release?

Thanks for this @ssmusoke

FYI, we have been working together with the QA team to converge the different testing plans and cases, and came up with a draft here: Ref-App Testing Plan - Google Docs, which will guide us in the sprint testing process. Our testing sprint is scheduled to start today and will last for two consecutive weeks, ending on 5th October.

As far as Ref-App 2.11 is concerned, we are still discussing whether to include the spa module or get rid of it - basically the question of whether it should be bundled or not. The discussion is in progress here.

Thanks @ssmusoke for this

A number of changes have happened since the last platform release, i.e. upgrading of the main core libraries like Liquibase, making the platform able to run on PostgreSQL (thanks to @aman), upgrading our Spring version, upgrading libraries in the platform-bundled modules like the REST module, and the introduction of the fhir2 module in the platform. You can also refer to this.

We have done some testing of the REST API endpoints on the current platform being released, and we have also verified that the platform runs smoothly with the Ref App on PostgreSQL.
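For context, here is a minimal sketch (in plain Java) of the kind of REST endpoint check this involves. The base URL and credentials below are assumptions for a local demo instance, not the environment we actually tested against:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class RestSmokeTest {
    public static void main(String[] args) throws Exception {
        // Assumed local instance and demo credentials - adjust for your environment.
        String baseUrl = "http://localhost:8080/openmrs";
        String auth = Base64.getEncoder().encodeToString("admin:Admin123".getBytes());

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/ws/rest/v1/session"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // HTTP 200 with "authenticated": true in the body means the REST module is up
        // and the credentials were accepted.
        System.out.println("Status: " + response.statusCode());
        System.out.println(response.body());
    }
}
```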

@sharif, Steven is not asking what has changed in the actual releases, but rather what has changed in the release process.


We maintained 60% of the work done in the last release, meaning we didn’t have many new core features in this release. However, module releases are still ongoing, and we hope to attain more features and improvements in the coming testing sprints of Ref-App 2.11.

@sharif your response is no different from that of @gcliff, and that is not what @ssmusoke is asking for. He is not asking for the features that you are going to release, but rather what has changed in the release process, with the help of the QA team leadership.

Nothing so far has changed, apart from what @gcliff has said above about enabling the Ref App to run on PostgreSQL; however, we are still syncing with the QA team in depth. Also @ssmusoke, the Ref App is going to run on Platform 2.3.1. I think I could answer this question best after the testing sprints, because that is where we will work more with the QA team. Does that make sense @dkayiwa?

@ssmusoke @dkayiwa @sharif According to the QA road map found here, QA has been focusing mainly on the Ref App features, since the Reference Application is one of the most widely used distributions in the community. Maybe @k.joseph and @christine can weigh in more deeply on how far it has gone, in terms of whether this is ready to be included in the upcoming Ref App release.


Thanks @gcliff for this

If this is the QA road map, then things are much worse than I even thought :fearful:

@sharif what has changed in the way you’re releasing vs the way it was done in the past?

@sharif - don’t you mean on 2.3.2?


The QA Support Team is involved in several different efforts to first improve our general community QA processes, and then work with the PM team and squads to improve the QA tools and processes they use as part of their release process.

Naturally, this means that there are a few different documents related to these efforts, some of which have already been highlighted above:

This will be the first Ref App release since the QA Support Team took steps to improve our QA processes, so we fully expect to see several changes this time around.


Thanks for that @mseaton, though it’s still in progress.

Thanks @jennifer for your clarification

@ssmusoke, in addition to @jennifer’s reply, please see my responses below to the questions above:

  1. What are the results of testing across platform and Ref app?

As indicated, this will be the first testing cycle in which the QA Support Team will be integrating the tools and processes that have been developed. Hence the results should be available towards the end of October. There will be 2 testing sprints in October before the release.

  2. What is the test strategy and current quality metrics?

For this release, we are working with this as the testing plan. Feel free to include additional feedback on the document as it is being refined.

  3. What has changed in the process since the last platform and Ref App release?

The QA team has developed a structure for teams and squads to follow when it comes to testing. Hence, some key changes you will note in the testing process are: proper documentation of the testing process and the results, allocation of adequate time and resources (not yet fully, but a step in the right direction), and an increase in our test coverage.


@christine I see lots of planning but not enough tests. I assumed, hopefully wrongly, that the testing infrastructure that has been in place is being leveraged and extended - there are integration tests that already exist.

If 2 QA leads have only added infrastructure and login tests in the last 3-6 months, then I still say that is very little progress. Maybe there is something that I am not getting, or I am missing some details.

I see the core testers are community volunteers, so will this be manual testing? What happened to the automation infrastructure put in place? What am I missing?

(Questions for @christine and @k.joseph to help us all be on the same page - I am still learning here, so apologies if this was clear to others and went over my head.)

  1. @christine to confirm - do you mean that we’ll be increasing the test coverage in Cucumber? Where is the best place for people to follow along as the tests get added - is it here? CucumberStudio - Agile test management tool
  2. What additional tests will be added? Is it just the list of 16 things in the Issues tab here? If so - don’t some of these already have some automated test coverage? E.g. line 3 Register A Patient - is this already captured with this old Selenium test? Are we aiming to replace some old Selenium tests with tests in Cucumber Studio, written in Gherkin?

  3. This list of 95 RefApp workflow tests (54 of which are automated) references quite a lot of old automated tests (from 2015-2017). Were you also thinking that in addition to using Cucumber we’d be adding some more Selenium automated tests to keep filling in the blanks in this list? (I’m all for automated testing, though we should probably be quite intentional about what we choose to automate right now, given the hope of a new 3.0 Frontend in our near-ish future. Nice that this existing spreadsheet could help prioritize tests.) To make sure I’m picturing the same thing, I’ve sketched just below this list how I imagine a Gherkin scenario would map onto Selenium-backed step definitions.
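Here is that rough sketch - a hypothetical Cucumber step-definition class backed by Selenium WebDriver. The Gherkin wording, URLs, element selectors, and credentials are my assumptions for illustration, not taken from the actual RefApp test suite:

```java
// RegisterPatientSteps.java - hypothetical step definitions; selectors and the
// Gherkin wording below are my guesses, not the real RefApp tests.
//
// A matching feature file (registerPatient.feature) might read:
//   Scenario: Register a patient
//     Given I am logged into the Reference Application
//     When I register a patient named "John" "Doe"
//     Then I should see the patient dashboard for "John Doe"

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static org.junit.Assert.assertTrue;

public class RegisterPatientSteps {

    private final WebDriver driver = new ChromeDriver();

    @Given("I am logged into the Reference Application")
    public void login() {
        driver.get("http://localhost:8080/openmrs/login.htm"); // assumed local instance
        driver.findElement(By.id("username")).sendKeys("admin");
        driver.findElement(By.id("password")).sendKeys("Admin123");
        driver.findElement(By.id("loginButton")).click();
    }

    @When("I register a patient named {string} {string}")
    public void registerPatient(String givenName, String familyName) {
        // Navigate to the registration app and fill in the name fields
        // (URL and field names here are illustrative only).
        driver.get("http://localhost:8080/openmrs/registrationapp/registerPatient.page");
        driver.findElement(By.name("givenName")).sendKeys(givenName);
        driver.findElement(By.name("familyName")).sendKeys(familyName);
    }

    @Then("I should see the patient dashboard for {string}")
    public void verifyDashboard(String fullName) {
        // A crude check that the patient's name appears on the resulting page.
        assertTrue(driver.getPageSource().contains(fullName));
        driver.quit();
    }
}
```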

Thanks for helping me get up to speed with the plan, and how it’s different from previous releases (since I don’t have a reference point). :slight_smile: I definitely don’t want to take over QA Support :stuck_out_tongue: but it would be helpful to understand how I can follow along.

And @k.joseph, are there other areas related to the RefApp, or just the platform, that have been automated in the last 3-6 months that we can mention here?