Forming a Quality Assurance team

@tmueller (Soldevelo employee working full time on OpenMRS) is interested in working as a tester, and @jslawinski has offered to coordinate efforts and build a bigger team around testing OpenMRS. The work would include writing test cases for manual testing, conducting manual tests, and writing automated tests. The Soldevelo team comes with experience in the QA field, gained in projects for MOTECH.

It would be great to get all interested individuals and implementations to share their experience in testing OpenMRS and discuss the best approach for Soldevelo to take. We could start off the discussion here and/or schedule a call as a follow up.


@jslawinski, do you know of any example of a project we could look at that has a QA setup we could replicate for OpenMRS? I’m interested in seeing how we could utilize JIRA, and I’d love to see some test case templates we could use. Please let us know what is needed infrastructure-wise, e.g. do we need any special project in JIRA or other tools?

What we have currently is a CI plan at which deploys the latest version of OpenMRS 2.x to and runs UI tests, which you can find at

If UI tests pass, then the plan deploys the distro to which we use for manual testing.

Do you think this is a setup you could start with, or do we need to make changes?



I would suggest replicating the processes that we used in the Mifos project. You can look at the following resources:

  • Example feature test plan:

  • Example release test plan:

Please let me know what you think about this and when we can start with this new QA project.

Regards, Jakub.


I would be happy to be part of the QA Team. :smiley:


@jslawinski, your offer was discussed at the dev call yesterday. The main takeaway is that people are excited about Soldevelo taking the lead on this, and you guys have the green light!

Please let us know whenever you are ready to start.

I went through the docs you provided and this guide on setting up JIRA for QA was particularly interesting:

The user guide at the bottom of that page gives good insight into how the workflow could look:

A new test case is defined by creating a new TestCase issue in Jira and filling in the required fields such as Summary, Description, and Component. For each release where this test is going to be run, a subtask of the issue is created and the Affects Version field is set to the release. The priority of the test for this release can also be added here. Set the assignee to the person who defines the test. When a test is run, the results are entered or updated in the subtask, not in the parent TestCase issue. Comments about the test run can also be entered if appropriate.

When a report is created for the current state of a release, we search for all TestCase issues with the Affects Version field set to the required release. The resulting set of issues can then be sorted and counted by number passed, failed, not run etc. Producing historical reports can be done with the Timecharts plugin for Jira, which shows a graph of how the results in a report change over time.
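The reporting step described above can be sketched in a few lines. The model below is hypothetical and purely illustrative — in practice the data would come from a Jira search on the Affects Version field (e.g. a saved JQL filter), not from hard-coded records:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class TestRunReport {

    // Hypothetical, minimal stand-in for a TestCase subtask: one run of
    // one test case against one release, with its recorded result.
    record TestRun(String testCase, String affectsVersion, String result) {}

    // Tally results (Passed / Failed / Not Run) for a single release,
    // mirroring the "sorted and counted" report described in the guide.
    static Map<String, Long> summarize(List<TestRun> runs, String version) {
        Map<String, Long> counts = new TreeMap<>();
        for (TestRun run : runs) {
            if (version.equals(run.affectsVersion())) {
                counts.merge(run.result(), 1L, Long::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<TestRun> runs = List.of(
            new TestRun("Login works", "2.2", "Passed"),
            new TestRun("Register patient", "2.2", "Failed"),
            new TestRun("Find patient", "2.2", "Passed"),
            new TestRun("Login works", "2.1", "Passed"));
        System.out.println(summarize(runs, "2.2"));
        // prints {Failed=1, Passed=2}
    }
}
```

Producing the historical graphs would still be left to something like the Timecharts plugin; this only illustrates the per-release snapshot.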

We can also create a new link type to connect test cases to the bugs that they uncovered or are verifying. This is the biggest advantage I see in using an issue tracker to track test cases and bugs together. We could also add a checkbox to all bugs to indicate whether a Test Case is expected for the bug.

I do think it would work for us. I understand you used some variation of that for MIFOS. I haven’t found any issues with subtasks, but this query gives some examples of test cases: "Test%20Case"

@leebreisacher, FYI (as the author of our UI test framework, this may be interesting for you)

Also from yesterday’s dev forum, the consensus was that automated functional testing was highest priority, followed by automated testing against other databases (e.g., MariaDB) and automated testing of deployment sizes (small & big) + upgrades. Performance testing and load testing were felt to be lower priorities for the community.

I’m assuming the JIRA configuration for QA is designed to support manual QA.

@burke, the JIRA config is intended to support manual and automated testing. At MIFOS they use the Automation field on issues to determine whether an issue is manual or covered by an automated test. See these examples:

  • <- manual
  • <- automate candidate with sub-tasks
  • <- automated

I see. So, something like this?

  • We describe all the things that we’d like tested within JIRA tickets in a QA project.
  • We automate as many as possible.
  • Presumably, we’d want a link from automated tests to logs of automated runs on CI.
  • Manual tests would get a subtask entry named with version, build, and/or testing effort with pass/fail status each time someone manually walked through the steps to verify.

Any chance we could make the automation framework capable of reading & executing the test steps described in English or near-English?

If there’s anything needed on the Bamboo agents to run more automated tests, please let us know. Both Ryan and I are able to install new capabilities on the agents.


It is great to hear that!

I agree that the mentioned configuration should be OK. I think the next steps should look like this:

  1. [Infra Team] You should choose the Jira project to start with.
  2. [SolDevelo] We should configure everything on the copied instance of this project.
  3. [BOTH] If we all agree that everything is configured properly, then we should replicate this configuration to the original project.
  4. [SolDevelo] We will create the initial set of test plans/test cases and start with the first test sessions.
  5. [BOTH] If we all agree that the process fulfills all the requirements and is beneficial for the community, then we will set up the same configuration in the other Jira projects.

We can start ASAP with the above plan. Of course, any other contributors can join our efforts to improve the QA-related processes in OpenMRS.

Regards, Jakub.

I think you could experiment with configuring RA for QA purposes. It would probably be best if we copied over the project config to a new RA-QA project for experiments. @michael, do you think it would be possible to create such an experimental project in JIRA and give admin rights to both @jslawinski and @tmueller so they can set it up as they need?

Yes, it’s possible to create a test project. To-be admin(s), please open a new case at and specify the project to copy. Please be sure to review in case anything needs to be different than the other project at creation time.

We had some experiments with JBehave in the past. It allowed us to write tests in English. I didn’t have a good experience with that approach. Aren’t tests in Java near-English :smile: ?

I don’t think that’s very practical/feasible. It could perhaps be done for the steps-to-perform, but a vitally important part of the automated UI tests is validating that we got to the right screen, the right stuff appears there, etc. And that is difficult to do with anything other than a precise language (like Java).
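To illustrate the point that Java tests can read almost like English while staying precise, here is a small fluent, page-object-style sketch. The class and method names are made up for illustration (this is not the actual OpenMRS UI test framework API, which would drive a real browser rather than a simulated page):

```java
public class FluentUiTestSketch {

    // Simulated "current page"; a real UI test would query the browser.
    private String currentPage = "login";

    // Each step returns `this` so the test chains into a readable sentence.
    FluentUiTestSketch logInAs(String user) {
        // A real implementation would fill the login form and submit;
        // here we just simulate the navigation to the home page.
        currentPage = "home";
        return this;
    }

    // The "near-English" step is still a precise, machine-checked assertion.
    FluentUiTestSketch assertOnPage(String expected) {
        if (!expected.equals(currentPage)) {
            throw new AssertionError(
                "Expected page '" + expected + "' but was '" + currentPage + "'");
        }
        return this;
    }

    public static void main(String[] args) {
        new FluentUiTestSketch()
            .logInAs("admin")
            .assertOnPage("home");
        System.out.println("test passed");
        // prints test passed
    }
}
```

The chain reads close to English, yet validating "we got to the right screen" stays an exact check rather than a natural-language interpretation.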



On many levels, this entire thread makes me happy and appreciative. Just wanted to pass that on. :smile:

I agree that RA will be the best project to start with.

I also checked two other projects that use Jira and where we are responsible for the QA. It seems that neither of these projects uses a separate Jira instance; instead, we combined the QA-related stuff directly into the existing workflow.

We can start as soon as we receive access to the new RA-QA project.

Regards, Jakub.

Is this the pattern we would use — i.e., add “QA” to the end of the project key in JIRA for any project/module we want to provide QA? For example, RAQA, TRUNKQA, REPORTQA, METAQA, etc? Each “QA” project would use the QA workflow? Is that the vision?

Well, it could be. Another option is to keep Test Case issue types in the same project (RA in this case). I feel it would be easier to manage versions and releases with everything in one project, and I like it more. There are some benefits to having a separate project for QA, like better visibility and separation, but with the right filters and dashboards you should be able to achieve the same with just one project. RA-QA will be used as a playground for experiments so that things (screens, workflows) can be tested without disrupting RA.

The goal is to have things set up around mid-May and reviewed by the community to decide which approach to take.

Hi Burke,

No, this is only a temporary project to configure and validate the QA process in OpenMRS. After we hammer out the final workflow, we will simply implement it in RA and any other project where we want more formal QA processes enabled. At that point, the RA-QA project will be redundant and will probably be removed.

Regards, Jakub.