@tmueller (Soldevelo employee working full time on OpenMRS) is interested in working as a tester, and @jslawinski has offered to coordinate efforts and build a bigger team around testing OpenMRS. This would include writing test cases for manual testing, conducting manual tests, and writing automated tests. The Soldevelo team comes with experience in the QA field gained on projects for MOTECH.
It would be great to get all interested individuals and implementations to share their experience in testing OpenMRS and discuss the best approach for Soldevelo to take. We could start off the discussion here and/or schedule a call as a follow up.
@jslawinski, do you know of any example project we could look at that has a QA setup we could replicate for OpenMRS? I’m interested to see how we could utilize JIRA, and I’d love to see some test case templates we could use. Please let us know what is needed infrastructure-wise, e.g. do we need any special project in JIRA or other tools?
The user guide at the bottom of that page gives good insight into what the workflow could look like:
A new test case is defined by creating a new TestCase issue in Jira and filling in the required fields such as Summary, Description, and Component. For each release in which the test will be run, a subtask of the issue is created with its Affects Version field set to that release. The priority of the test for that release can also be set here. The assignee should be the person who defined the test.
When a test is run, the results are entered or updated in the subtask, not in the parent Test Case issue. Comments about the test run can also be entered if appropriate.
When a report is created for the current state of a release, we search for all TestCase issues with the Affects Version field set to the required release. The resulting set of issues can then be sorted and counted by number passed, failed, not run etc. Producing historical reports can be done with the Timecharts plugin for Jira, which shows a graph of how the results in a report change over time.
We can also create a new link type to connect test cases to the bugs they uncovered or are verifying. This is the biggest advantage I see in using an issue tracker to track test cases and bugs together. We could also add a field to all bugs to indicate whether someone expects a Test Case for that bug.
Also from yesterday’s dev forum, the consensus was that automated functional testing was highest priority, followed by automated testing against other databases (e.g., MariaDB) and automated testing of deployment sizes (small & big) + upgrades. Performance testing and load testing were felt to be lower priorities for the community.
I’m assuming the JIRA configuration for QA is designed to support manual QA.
@burke, the JIRA config is intended to support both manual and automated testing. At MIFOS they use the Automation field on issues to determine whether an issue is tested manually or covered by an automated test. See these examples:
I think you could experiment with configuring RA for QA purposes. It would probably be best if we copied the project config over to a new RA-QA project for experiments. @michael, do you think it would be possible to create such an experimental project in JIRA and give admin rights to both @jslawinski and @tmueller so they can set it up as they need?
I don’t think that’s very practical/feasible. It could perhaps be done for the steps-to-perform, but a vitally important part of the automated UI tests is validating that we got to the right screen, that the right stuff appears there, etc. And that is difficult to do with anything other than a precise language (like Java).
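To illustrate the point about precise validation, here is a minimal sketch of the kind of post-navigation assertions an automated UI test makes; it works the same way in Java or any other programming language (shown in Python here for brevity). A real test would drive a browser; a static HTML string and a hypothetical page stand in for the rendered screen.

```python
# Sketch: assert we landed on the right screen and the right content appears.
# In a real suite a browser driver would supply page_html; the "Find Patient"
# page used below is hypothetical.
from html.parser import HTMLParser

class TitleFinder(HTMLParser):
    """Collect the text inside the page's <title> element."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
    def handle_data(self, data):
        if self.in_title:
            self.title += data

def assert_on_screen(page_html, expected_title, expected_text):
    """Fail unless the page has the expected title and expected content."""
    finder = TitleFinder()
    finder.feed(page_html)
    assert finder.title == expected_title, "wrong screen: " + finder.title
    assert expected_text in page_html, "missing content: " + expected_text

page = "<html><title>Find Patient</title><body>Search by name or ID</body></html>"
assert_on_screen(page, "Find Patient", "Search by name")
```

Encoding the check as executable assertions is exactly what is hard to express in a spreadsheet of steps-to-perform.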
I agree that RA will be the best project to start with.
I also checked two other projects that use Jira and where we are responsible for the QA. It seems that neither of these projects uses a separate Jira instance; in both, the QA-related work was combined directly into the existing workflow.
We can start as soon as we receive access to the new RA-QA project.
Is this the pattern we would use, i.e., adding “QA” to the end of the project key in JIRA for any project/module we want to provide QA for? For example: RAQA, TRUNKQA, REPORTQA, METAQA, etc.? Each “QA” project would use the QA workflow? Is that the vision?
Well, it could be. Another option is to keep the Test Case issue type in the same project (RA in this case). I feel it would be easier to manage versions and releases with everything in one project, and I like that approach more. There are some benefits to having a separate QA project, like better visibility and separation, but with the right filters and dashboards you should be able to achieve the same with just one project. RA-QA will be used as a playground for experiments so that things (screens, workflows) can be tested without disrupting RA.
The goal is to have things set up around mid-May and reviewed by the community to decide which approach to take.
No, this is only a temporary project to configure and validate the QA process in OpenMRS. After we hammer out the final workflow, we will simply implement it in RA and in any other project where we want a more formal QA process enabled. When we do this, the RA-QA project will be redundant and probably removed.