QA Prioritization Criteria for Testing and Automation.

Hello,

The QA team is working on improving the community Quality Assurance process. One activity the team has undertaken is developing prioritization criteria that can be used to determine which test cases get prioritized for testing and automation in any particular release, for any OpenMRS product. Drawing on feedback gathered during the 2019 OpenMRS Implementers’ Conference, the QA team has come up with eight prioritization criteria to consider when deciding what to test. Additionally, the criteria have been applied to a list of pre-existing test cases developed for the Reference Application, as a demonstration of how the criteria would work.

The QA team would like the following feedback from you:

  • Do the criteria comprehensively cover what should be considered when prioritizing testing?
  • Do you think these criteria will be useful in your implementations?
  • What else should be included as part of the criteria?
  • Any other feedback or recommendations?

Below are the criteria with a brief explanation of each:

  • Availability of documented requirements. Documented requirements are a key determinant in selecting or developing a test case, as they serve as the main reference point. The weight of this criterion is high.
  • Part of the current release. In applying this criterion, consider the overall purpose of the project at the time. For example, if the purpose of the project is to change the user interface, then any test case covering the look and feel of the product should be prioritized. The weight of this criterion is high when determining what to prioritize for a specific release; otherwise it is low or not applicable when planning general end-to-end testing.
  • Complexity of the test case. In applying this criterion, take into consideration the time taken to develop the test case, the time taken to execute it, and the number of times it must be run. The criterion is applied on a scale: Simple (minimal time and effort to develop and execute the test case), Medium (development and execution time is minimal, but the test case must be executed a number of times to get positive results), and Hard (a substantial amount of time is required to develop and execute the test case, and it must be run a number of times with a complex data set to get viable results). The weight of this criterion is medium.
  • Needs to always be tested (fragile feature). This criterion applies to any identified features that easily break once a change is made to the product. The weight of this criterion is medium.
  • Value add and impact of the feature. This criterion looks at core features that would directly impact implementations. The weight of this criterion is high.
  • Critical functionality with a direct impact on workflow. This criterion looks at ensuring clinical workflows are not broken. For example, in a patient registration workflow, search and the registration form should be tested to ensure the workflow is not broken. The weight of this criterion is high.
  • Critical functionality with a direct impact on patient safety. This criterion looks at ensuring that decision-making features function appropriately. For example, high blood pressure is flagged appropriately and displayed in a visible manner. The weight of this criterion is high.
  • Information security issue. This criterion looks at ensuring that issues involving authorized access to information are well handled. It also covers roles and privileges. The weight of this criterion is high.
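To make the criteria concrete, here is a minimal sketch of how the weights could be turned into a priority score for ranking test cases. The numeric weight values (high = 3, medium = 2, low = 1), the criterion keys, and the example test cases are all illustrative assumptions, not part of the QA team's proposal:

```python
# Hypothetical scoring sketch for the weighted prioritization criteria.
# Weight values and example test cases are illustrative assumptions.

WEIGHTS = {"high": 3, "medium": 2, "low": 1, "n/a": 0}

# Criterion key -> weight, mirroring the criteria list above.
# "part_of_current_release" is shown at its general-testing weight (low).
CRITERIA = {
    "documented_requirements": "high",
    "part_of_current_release": "low",
    "complexity": "medium",
    "fragile_feature": "medium",
    "value_add_and_impact": "high",
    "critical_workflow": "high",
    "patient_safety": "high",
    "information_security": "high",
}

def priority_score(applies: dict) -> int:
    """Sum the weights of every criterion the test case satisfies."""
    return sum(WEIGHTS[CRITERIA[name]] for name, hit in applies.items() if hit)

# Example: a patient-registration test case vs. a cosmetic UI tweak.
registration = {"documented_requirements": True, "critical_workflow": True,
                "value_add_and_impact": True}
ui_tweak = {"part_of_current_release": True}

ranked = sorted([("registration", priority_score(registration)),
                 ("ui_tweak", priority_score(ui_tweak))],
                key=lambda t: t[1], reverse=True)
print(ranked)  # registration scores higher, so it would be tested first
```

A team could of course tune the numeric scale or add per-release overrides; the point is only that the high/medium/low weights in the table translate directly into a sortable score.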

Please share feedback by the end of this week, 24 Jan 2020.

Everything Seems Great and Moderate to every issue & functionality.

Thank you @saadkhaleeq for your comment. As a follow-up: when you say it is complex, do you mean it is not easy to understand?

Not actually, I mean complex to all issues & functionalities; Moderate. :blush:

Thanks to the QA team for putting together this document! It’s a very nice way to get some insight into what other people are doing! I’ve got a few points of clarification below, but this looks like a solid framework to me (though it should be borne in mind I’m a developer, not an implementer and thus my feedback should probably count for less than actual implementers).

Could you say more about where these documented requirements are drawn from?

So is the criterion here suggesting that Simple tests should take priority over Medium tests, which take priority over Hard tests? If so, I think we need to factor in the value add of the test itself (albeit this may be inherently addressed by some of the other criteria).

Could you give an example of what might be covered by this criterion?

Is this meant to subsume privacy considerations, or should that be added as a separate criterion? For example, do we want this to consider only technical security requirements, i.e., do we have proper authentication and authorization controls, or do we also want it to address potentially sensitive information (HIV status, pregnancy status, etc.)?

Thank you @ibacher for your questions. All contributions are welcome, as this is not limited to any group. Please see my responses below:

> Could you say more about where these documented requirements are drawn from?

The requirements are derived from user stories that form part of reported issues (in the case of bug fixes), or from documentation that spells out what a particular project hopes to achieve (in the case of products). Most of this is already available, so the focus will be on ensuring the requirements are elaborate and clear.

> So is the criterion here suggesting that Simple tests should take priority over Medium tests, which take priority over Hard tests? If so, I think we need to factor in the value add of the test itself (albeit this may be inherently addressed by some of the other criteria).

It is true that this criterion may be perceived as favoring anything tagged as Simple. The other thinking here is to identify complex features that are time-consuming to test, e.g. reports. In some cases, executing manual test cases to determine whether a report produces accurate results may be taxing, so it would be worthwhile to automate these to save time.

> Value add and impact of the feature.

An example of a feature with impact is the registration feature, as it is a key feature used by almost every implementation.

A value add would be a feature that has been requested by a number of implementations and would elevate use of the application if included.

> Is this meant to subsume privacy considerations, or should that be added as a separate criterion? For example, do we want this to consider only technical security requirements, i.e., do we have proper authentication and authorization controls, or do we also want it to address potentially sensitive information (HIV status, pregnancy status, etc.)?

The thought behind this criterion was more about technical security issues. However, it can include privacy considerations.

@christine Thanks for your clarifications on those points. I’m perfectly happy with how things stand.