Hello,
The QA team is working on improving the community Quality Assurance process. One of these activities is defining prioritization criteria for determining which test cases get prioritized for testing and automation in any particular release of any OpenMRS product. Drawing on feedback gathered during the 2019 OpenMRS Implementers’ Conference, the QA team has come up with eight prioritization criteria to consider when deciding what to test. To demonstrate how the criteria would work, they have also been applied to a list of pre-existing test cases developed for the Reference Application.
The QA team would like the following feedback from you:
- Do the criteria comprehensively cover what should be considered when prioritizing testing?
- Do you think these criteria will be useful in your implementations?
- What else should be included in the criteria?
- Any other feedback or recommendations?
Below are the criteria, with a brief explanation of each:
Criterion | Explanation |
---|---|
Availability of documented requirements. | Documented requirements will be a key determinant in selecting or developing a test case as it will be the main reference point. The weight of this criterion is high. |
Part of the current release | In applying this criterion, look at the overall purpose of the project at the time. For example, if the purpose of the release is to change the user interface, then any test case covering the look and feel of the product should be prioritized. The weight of this criterion is high when determining what to prioritize for a specific release; it is low or not applicable when planning general end-to-end testing. |
The complexity of the test case. | In applying this criterion, consider the time taken to develop the test case, the time taken to execute it, and the number of times it must be run. The criterion is applied as a scale: Simple, i.e. the test case takes minimal time and effort to develop and execute; Medium, i.e. the test case takes minimal time to develop and execute, but must be run several times to get reliable results; Hard, i.e. a substantial amount of time is required to develop and execute the test case, and it must be run several times with a complex data set to get viable results. The weight of this criterion is medium. |
Needs to always be tested (fragile feature). | This criterion applies to any features that easily break once a change is made to the product. The weight of this criterion is medium. |
Value add and impact of the feature. | This criterion looks at core features that would directly impact implementations. The weight of this criterion is high. |
Critical functionality with a direct impact on workflow. | This criterion looks at ensuring that clinical workflows are not broken. For example, in a patient registration workflow, both patient search and the registration form should be tested to ensure the workflow is not broken. The weight of this criterion is high. |
Critical functionality with a direct impact on patient safety. | This criterion looks at ensuring that decision-making features are functioning appropriately. For example, high blood pressure is flagged appropriately and displayed in a visible manner. The weight of this criterion is high. |
Information Security Issue. | This criterion looks at ensuring that access to information is properly authorized. It also covers issues revolving around roles and privileges. The weight of this criterion is high. |
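The table assigns each criterion a weight but does not prescribe how the weights combine into an overall priority. As a purely illustrative sketch (not an endorsed QA-team method), one might score a test case by summing the weights of the criteria it satisfies; the numeric weight values, criterion keys, and example test cases below are all hypothetical:

```python
# Illustrative sketch only: assumes a simple additive scoring model.
# Numeric weight values (high=3, medium=2, low=1) are an assumption,
# not something the criteria above specify.
WEIGHTS = {"high": 3, "medium": 2, "low": 1}

# Criterion -> weight, following the table above. "part_of_current_release"
# is treated as high, i.e. the release-specific case.
CRITERIA = {
    "documented_requirements": "high",
    "part_of_current_release": "high",
    "complexity": "medium",
    "fragile_feature": "medium",
    "value_add": "high",
    "workflow_impact": "high",
    "patient_safety": "high",
    "information_security": "high",
}

def priority_score(met_criteria):
    """Sum the weights of the criteria a test case satisfies."""
    return sum(WEIGHTS[CRITERIA[c]] for c in met_criteria)

# Hypothetical Reference Application test cases and the criteria they meet.
cases = {
    "register_patient": ["documented_requirements", "workflow_impact", "patient_safety"],
    "edit_user_avatar": ["complexity"],
}
ranked = sorted(cases, key=lambda c: priority_score(cases[c]), reverse=True)
print(ranked)  # the registration test case ranks first
```

Whether a sum, a maximum, or a manual review per criterion is the right combining rule is exactly the kind of feedback the QA team is asking for.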
Please share your feedback by the end of this week, 24 Jan 2020.