Manual QA in the O3 RefApp Release Process:
Every two weeks, OpenMRS releases a new version featuring significant contributions from the community. This is followed by a manual Quality Assurance (QA) testing phase that spans approximately one week. During this period, meticulous testing, reporting, and post-release verification are conducted, with a focus on identifying any issues and ensuring that key system components, such as user interfaces and workflows, function as expected.
Why Manual QA Matters:
Manual testing for the O3 RefApp (QA Release) plays a vital role in complementing automated testing by addressing scenarios that automation may not cover, such as UI/UX concerns and edge cases that require human judgment. This hands-on approach offers broader coverage, ensuring a more thorough identification of issues that automated tests might miss, thereby adding a critical layer of quality control.
To maximize the effectiveness of manual QA, testers are encouraged to replicate real-world scenarios and workflows that end-users typically encounter. This ensures that testing is aligned with the practical realities of system usage, delivering insights that automated testing alone may not provide.
Getting Started with Manual QA Testing:
If you’re new to manual QA testing, we’ve got you covered! We’ve developed a comprehensive guide and a range of test case scenarios to help you through the process.
Our guide, A Step-By-Step Manual QA Guide for OpenMRS, will walk you through each step, making it easier for you to contribute effectively.
Testing Environment:
Manual QA testing is performed on the OpenMRS test environment, using the following credentials:
- Username: admin
- Password: Admin123
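Before starting a testing session, it can help to confirm that the credentials above work against the server. The sketch below (a suggestion, not part of the official process) builds an authenticated request for the standard OpenMRS REST `session` endpoint; the base URL is a placeholder you would replace with the actual QA environment address.

```python
import base64
import urllib.request

# Placeholder base URL -- substitute the actual QA environment address.
BASE_URL = "https://example-qa-server/openmrs"

def build_session_request(username: str, password: str) -> urllib.request.Request:
    """Build a GET request for the OpenMRS REST session endpoint with Basic auth."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        f"{BASE_URL}/ws/rest/v1/session",
        headers={"Authorization": f"Basic {token}"},
    )

req = build_session_request("admin", "Admin123")
# To actually send it (requires network access to the QA server):
# with urllib.request.urlopen(req) as resp:
#     print(resp.read())  # the JSON body reports whether you are authenticated
```

If the server responds with `"authenticated": true`, you are ready to begin testing.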
QA Tooling and Process:
Currently, we’re tracking our manual QA efforts using the O3 QA & Clinical Release Checklist, and issues are logged in Projects - OpenMRS Issues. However, we’re actively exploring more efficient test case management tools to streamline the manual testing process and make it easier to track progress and manage tests overall.
- If you know of any tools that could help optimize our manual testing efforts, please feel free to recommend them!
Challenges:
One of our biggest challenges is resource availability. As a community-driven initiative, OpenMRS often depends on volunteers for manual testing, and these resources can sometimes be limited. Additionally, manual testing can be time-intensive, especially during feature-rich releases that require comprehensive testing. We encourage testers to remain patient and persistent during this crucial phase.
How You Can Help:
We invite more community members to participate in the OpenMRS QA release cycles and actively contribute to manual testing for the O3 RefApp. Your involvement is critical to ensuring the stability and reliability of each release.
We also encourage testers to focus more on exploratory testing and user experience—areas where manual QA can add the most value and uncover issues that automated testing might miss.
Improving manual QA in future OpenMRS releases:
If you have ideas, we’d love to hear them! Together, we can refine and elevate the testing process to ensure every release meets the highest standards of quality.