Issue regarding test cases

Hi guys,

I can’t figure out which testing-related issues really deserve some work. Here are some questions:

  1. Do the issues with type “Test” mean they are requests to extend the test suite (e.g., creating a new test case or improving an existing one)?
  2. If so, some of them are in the “backlog” state; does that mean someone is already working on them, or are they ready to be picked up? I can’t see an assignee on them.
  3. I have seen some issues, like this one, which seem resolved to me, and others, like this one, that I’m not sure should be worked on since they are pretty old. Where should I start? Since I’m reviewing all the tests on the UI, I can provide useful feedback on these issues (I have already commented on some of them), but I haven’t understood whether some tests need to be fixed as soon as possible. Thank you for your help.

@domenico, thanks for doing this!

I don’t really know the answer to your questions, but perhaps some of the child pages of this wiki page will provide insight.

If you need further help after reading this, you may want to talk to Rafal and the Soldevelo team. None of them are directly working on this now, but they have the most access to the people who set things up the way they are, and might be able to help investigate. (I haven’t at-mentioned them; you may do so if you think it’s helpful after reading the wiki pages.)

Thank you both.

I read the wiki pages, and the approach sounds good. From reading old posts, it seems that in the past you had many tests executed manually in a dedicated QA Sprint. Some of the things we could do:

  1. Review them in order to remove duplicated tests, useless tests, ambiguous tests, out-of-date tests, and so on. The objective is to arrive at a well-organized, optimized set of tests to automate
  2. Automate them (a rough sketch of what an automated UI test could look like follows this list)
  3. Include them in the regression cycle (or any other test cycle).
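
For step 2, here is a minimal sketch of what one automated UI test could look like, assuming Selenium WebDriver and JUnit 4 in Java. The URL, element ids, and credentials are placeholder assumptions for illustration, not the actual locators used by the OpenMRS UI:

```java
// Minimal sketch of an automated UI test (Selenium WebDriver + JUnit 4).
// All locators, URLs, and credentials below are hypothetical placeholders.
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

import static org.junit.Assert.assertTrue;

public class LoginUiTest {

    private WebDriver driver;

    @Before
    public void setUp() {
        // Requires a local browser and driver binary (e.g. geckodriver) on the PATH.
        driver = new FirefoxDriver();
    }

    @Test
    public void loginPageShouldAcceptValidCredentials() {
        // Placeholder URL: point this at the environment under test.
        driver.get("http://localhost:8080/openmrs/login.htm");

        // Placeholder element ids: replace with the real ids used by the UI.
        driver.findElement(By.id("username")).sendKeys("admin");
        driver.findElement(By.id("password")).sendKeys("Admin123");
        driver.findElement(By.id("loginButton")).click();

        assertTrue("Expected to land on the home page after login",
                driver.getCurrentUrl().contains("home"));
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}
```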

Does this sound reasonable to you? FYI, I’m trying to get an integrated view of all QA activities (past and ongoing). I’ll write up an agenda with my proposal to reorganize QA (preliminary, with a short-term goal) to submit to the OpenMRS development community.

Thank you Domenico

@domenico for sprints, you could take a look at https://wiki.openmrs.org/display/RES/Development+Sprints

We’ve had contributions on testing from the SolDevelo team. Some of those involved adding the Zephyr plugin for test management, allowing us to create a suite of tests to be run for each release. I believe the idea was that the set of tests could be cloned for each release and (ideally) many could be run automagically, with results reported in the corresponding JIRA tickets, allowing any errors or regressions to be addressed and the work tracked in JIRA. I believe those were separated into 2–3 groups. I thought I recalled seeing those in the dashboard for some projects (I expected to find it in the left panel on RefApp or TRUNK, but I’m not seeing it there).
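
On the reporting side, here is a rough illustration of how automated results could be pushed back into JIRA tickets, using the standard JIRA REST API endpoint for adding a comment to an issue (POST /rest/api/2/issue/{issueKey}/comment). This is only a sketch of the idea, not the mechanism the Zephyr integration actually used; the JIRA base URL, credentials, and issue key are hypothetical placeholders:

```java
// Sketch: post a test-run summary as a comment on a JIRA issue via the REST API.
// The base URL, credentials, and issue key are hypothetical placeholders.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JiraResultReporter {

    public static void reportResult(String issueKey, String resultSummary) throws Exception {
        // Placeholder JIRA instance URL.
        URL url = new URL("https://issues.example.org/rest/api/2/issue/" + issueKey + "/comment");

        // Placeholder credentials, sent as HTTP Basic auth.
        String auth = Base64.getEncoder()
                .encodeToString("user:password".getBytes(StandardCharsets.UTF_8));

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // The add-comment endpoint expects a JSON object with a "body" field.
        String body = "{\"body\": \"Automated test run: " + resultSummary + "\"}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        System.out.println("JIRA responded with HTTP " + conn.getResponseCode());
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical issue key, for illustration only.
        reportResult("TRUNK-1234", "regression suite passed");
    }
}
```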