I’m excited to be applying for the Google Summer of Code 2023 program, with a proposal to extend the end-to-end (E2E) automated testing framework for the OpenMRS 3.0 RefApp project. I believe that automated testing is crucial for ensuring the quality and reliability of software, and I’m eager to contribute to OpenMRS’s mission of improving healthcare delivery in low-resource settings.
I believe that these improvements will benefit not only the OpenMRS community but also the wider ecosystem of open-source healthcare systems. To that end, I’m open to collaborating with other GSoC participants, mentors, and contributors who share similar interests and expertise.
To kick off the discussion, I’d love to hear your thoughts and feedback on my proposal. Here are some questions to get us started:
What do you think are the most critical gaps or challenges in the current E2E testing framework for OpenMRS?
Are there any specific use cases or workflows that you would like to see covered by the automated tests?
What tools or frameworks do you recommend for improving the test automation process?
How can we ensure that the test suite remains relevant and up-to-date as the OpenMRS 3.0 RefApp evolves?
Feel free to share your ideas, suggestions, or concerns in this thread. I’m looking forward to hearing from you and working together on this exciting project!
Hello @randila, it’s great to hear that you’re excited about applying for GSoC with a proposal to extend the E2E testing framework for the OpenMRS 3.0 RefApp. I completely agree that automated testing is crucial for ensuring software quality and reliability, especially in healthcare, where accuracy and dependability are paramount.
To answer your questions:
Regarding critical gaps in the current E2E testing framework, I think one of the biggest challenges is covering all the important use cases and scenarios, especially with a system as complex as the 3.0 RefApp. Another is keeping the tests maintained as the codebase evolves so they don’t go stale or start failing for reasons unrelated to real regressions.
Some specific use cases and workflows that would be great to see covered by automated tests could include patient registration, appointment scheduling, medication ordering and administration, and medical record management.
As for tools and frameworks to improve the test automation process, Playwright is a good choice and one that we already use in some of our tests.
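To give a concrete sense of the style, here is a minimal sketch of what a Playwright spec for one of those workflows (patient search) might look like. Note that the login route, labels, and placeholder text below are illustrative assumptions, not the actual RefApp markup, and credentials would normally come from environment variables or a fixture:

```typescript
// Illustrative Playwright spec for a patient-search flow.
// NOTE: the route, labels, and selectors are hypothetical placeholders,
// not the real OpenMRS 3.0 RefApp markup.
import { test, expect } from '@playwright/test';

test.describe('Patient search', () => {
  test.beforeEach(async ({ page }) => {
    // Log in before each test; baseURL is set in playwright.config.ts.
    await page.goto('/openmrs/spa/login');
    await page.getByLabel('Username').fill(process.env.E2E_USER ?? 'admin');
    await page.getByLabel('Password').fill(process.env.E2E_PASSWORD ?? 'Admin123');
    await page.getByRole('button', { name: 'Log in' }).click();
  });

  test('finds an existing patient by name', async ({ page }) => {
    await page.getByRole('button', { name: 'Search patient' }).click();
    await page.getByPlaceholder('Search for a patient').fill('John');
    // Expect at least one matching result to appear.
    await expect(page.getByRole('link', { name: /John/ }).first()).toBeVisible();
  });
});
```

Role- and label-based locators like these tend to survive markup refactors better than CSS selectors, which helps with the maintenance concern raised above.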
To keep the test suite relevant as the 3.0 RefApp evolves, we could establish a process for regularly reviewing and updating the tests alongside codebase changes. In practice, that means working closely with the development team to understand upcoming changes and their likely impact on the suite before they land.
I hope that answers your questions and provides some insight into our current testing framework and potential areas for improvement. I look forward to working with you on this project and helping to improve the quality and reliability of the OpenMRS 3.0 RefApp.