More Automated Tests in our Pipeline

Based on discussions in the QA Team, PM Team, and TAC, some very painful pre-release manual testing, and advice from the Firefox product team… we all want to see this beautiful pyramid come to life at OpenMRS.

Easier said than done.

The layer that is baffling me about where we go next is Integration Tests. Martin Fowler describes these in a couple of ways, but I think @dkayiwa @k.joseph and I are especially interested in things we can implement directly in our CI pipelines (e.g. starting in TRUNK) that would run tests quickly, so that devs get near-immediate feedback.

@dkayiwa @k.joseph what did you have in mind for things we could get started on w.r.t. baking more test automation into our CI pipeline?

CC @burke @christine


Thanks a lot @grace for starting this thread,

Martin’s article makes a good argument.

We have lots of unit tests in our projects, but I think the coverage isn’t enough. We should advocate for TDD and require unit tests before merging changes and new features.

In OpenMRS, integration tests extend BaseModuleWebContextSensitiveTest; samples are available in core and the REST module. In short, these are API-based integration tests, and they probably fall into Martin’s first (narrow) categorisation of integration tests. Do you see a need for, or an alternative approach to, improving these, @dkayiwa and @burke? Would you think it beneficial enough to look through how we can improve them?
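For anyone unfamiliar with that base class, here is a generic, self-contained sketch of the pattern it follows. This is NOT the real OpenMRS class; the class and method names below are made up purely to illustrate the idea of a base class owning expensive shared setup (Spring context plus test database) that every subclass test reuses.

```java
public class ContextSensitiveTestSketch {

    // Stand-in for the application context + test database that the real
    // OpenMRS base class wires up before tests run.
    static class TestContext {
        boolean started = false;
        void start() { started = true; }
    }

    // Hypothetical base class: subclasses inherit an already-started
    // context instead of rebuilding it in every test class.
    static abstract class BaseContextSensitiveTest {
        protected static final TestContext context = new TestContext();
        static { context.start(); }
    }

    // Hypothetical API-level test reusing the shared context.
    static class ConceptServiceTest extends BaseContextSensitiveTest {
        boolean contextIsAvailable() {
            return context.started;
        }
    }

    public static void main(String[] args) {
        System.out.println(new ConceptServiceTest().contextIsAvailable()); // prints "true"
    }
}
```

The real base class does much more (transaction rollback per test, dataset loading), but the inheritance-of-setup shape is the same.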

Martin categorises our current E2E (user interface) tests as broad integration tests. These run on Selenium, and the Cucumber integration in QAFramework aims at covering at least 90% of the Reference Application and forming a foundation for other distributions and tools. This project currently runs in CI daily at 01:00, and within about a minute of every commit to QAFramework, the Reference Application distribution, or the UI Test framework. I am looking through a number of ways to make these most effective and to work with the community to improve coverage.

Any suggestions on improving the current infrastructure are highly welcome.


I would start by automating what our release managers have been routinely begging volunteers to manually test before each release.


Our existing tests run on an in-memory H2 database, which is good because it is simple to set up and the tests run fast.

We can run these exact same tests against MySQL and PostgreSQL (the databases that we currently support). To give a practical example of the value of this: the latest master branch of the platform fails on MySQL during the setup wizard, but succeeds on PostgreSQL. It took me a while to figure this out because our CI is green. If we had such tests in our pipeline, I would not have wasted that time, because CI would have screamed on the commit that introduced the failure. :slight_smile:
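This amounts to making the database a dimension of the CI matrix. A minimal sketch of the idea (the class and the JDBC URLs are hypothetical; only the notion of iterating one existing suite over each supported database comes from this thread):

```java
import java.util.List;

// Hypothetical per-database CI matrix: the same suite runs once per
// supported database, so a MySQL-only failure turns the build red even
// when the H2 run is green.
public class DbMatrix {
    record DbProfile(String name, String jdbcUrl) {}

    static List<DbProfile> profiles() {
        return List.of(
            new DbProfile("h2",         "jdbc:h2:mem:openmrs;DB_CLOSE_DELAY=30"),
            new DbProfile("mysql",      "jdbc:mysql://localhost:3306/openmrs_test"),
            new DbProfile("postgresql", "jdbc:postgresql://localhost:5432/openmrs_test"));
    }

    public static void main(String[] args) {
        for (DbProfile p : profiles()) {
            // In a real plan each iteration would launch the existing test
            // suite against this URL instead of just printing it.
            System.out.println("run suite against " + p.name() + " -> " + p.jdbcUrl());
        }
    }
}
```

In Bamboo or any other CI server this would be three jobs sharing one test suite, differing only in the database they point at.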


Daniel, are you suggesting E2E testing of the installation wizard/setup? That would be a good idea.

:pleading_face: running E2E tests on installation?? That does not seem interesting to the implementer.

@k.joseph I am suggesting simply running the existing tests on MySQL and PostgreSQL.


Which kind of tests?

I think the implementer needs assurance that a new installation or upgrade works successfully, and that’s why I still think we need E2E tests for installation/upgrading.

All tests that currently use the H2 in-memory database.

I completely agree!

@dkayiwa you mean in-memory MySQL/Postgres?

I do not care whether it is in memory or not. What I am looking for is a MySQL or PostgreSQL database to expose failures that H2 would not.

We can trigger the same tests against H2, MySQL, and PostgreSQL.


I think in-memory is more ideal for unit/integration tests.

Adding a MODE=MySQL parameter here enables the H2 database to handle most of the MySQL dialect, turning the URL into something like:

String url = "jdbc:h2:mem:openmrs;MODE=MySQL;DB_CLOSE_DELAY=30;LOCK_TIMEOUT=10000";
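A small self-contained sketch of how that URL could be parameterised (the helper class and method are made up for illustration; note that H2's compatibility mode only approximates the MySQL dialect, so it is a cheap first step rather than a substitute for a real MySQL instance):

```java
// Hypothetical helper: builds the in-memory H2 test URL, optionally with a
// compatibility MODE such as "MySQL" or "PostgreSQL".
public class TestJdbcUrl {
    static String h2Url(String mode) {
        String base = "jdbc:h2:mem:openmrs;DB_CLOSE_DELAY=30;LOCK_TIMEOUT=10000";
        return mode == null ? base : base + ";MODE=" + mode;
    }

    public static void main(String[] args) {
        System.out.println(h2Url("MySQL"));
        // prints "jdbc:h2:mem:openmrs;DB_CLOSE_DELAY=30;LOCK_TIMEOUT=10000;MODE=MySQL"
    }
}
```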

The point I was driving home was not about in-memory vs. out-of-memory. It was simply about running our tests on MySQL and PostgreSQL.

I was not after most, but all. You would not catch the current master branch platform error without running a full MySQL instance.


When Andela was working on the OCL client for OpenMRS, we set up a deploy button on each pull request. Whenever the reviewer clicked this button, it deployed the changes to Heroku and enabled him or her to test them in the web application, by a mere click of a button, before merging the pull request. @hadijah315 do you still remember this?

@k.joseph do you find the conversation on this thread good enough to know where to get started with the community priorities with regard to more automated tests?


@dkayiwa, the Heroku configuration was done by some Andelans who came later. I found its PR: OCLOMRS-593: Setup Heroku Pipeline for Review Apps by Karuhanga · Pull Request #473 · openmrs/openmrs-ocl-client · GitHub

I am currently setting up to resume Reference Application automated testing with just Selenium, by resurrecting the previous framework to build further on it; thereafter I will curate more tasks for the fellowship. I also want to review the existing OCL tests so as to integrate them with CI. I have noticed there is a new OCL project; I am wondering, @burke, whether we plan to retire one of the two existing projects to actively support the other?

@k.joseph how about having a CI plan for our existing platform tests to run against MySQL?

As for the OCL client, I know of only one repository. Which other one are you referring to?


Sure, I plan to have a new plan active by today, running the existing tests against our QA instance.

It’s probably an upgrade of the existing one that I didn’t catch on one of the calls. @burke, how many projects are running simultaneously for the OCL client?