Reusing OpenMRS O3 e2e tests vs maintaining our own in an OpenMRS implementation

We have two possible approaches to adding an e2e test suite to our CI.

  1. Maintain our own e2e suite (more reliable):
    Define our own e2e tests, independent of OpenMRS. This approach would involve a lot of copying and pasting from the OpenMRS community, and it might make the tests harder to maintain, but it would give us direct control over what, how, and when to test in CI, plus an evergreen e2e workflow. See how it would be done here.

  2. Reuse OpenMRS O3 e2e tests (preferable):
    Here, we pull an esm repository from OpenMRS based on the version of the esm we are running. This approach saves us maintenance time but raises validity and autonomy questions in our CI. See how it would be done here. For example:

    1. Some tests pass at the OpenMRS end but fail at the implementation end, even though the same esm versions are used.

    2. New fields added to the patient registration page break e2e tests pulled from OpenMRS.

    3. Some e2e tests assert specific configuration from OpenMRS, yet that configuration can be renamed or removed in OpenMRS implementations; example here.

At the moment, the advantages and disadvantages of reusing existing OpenMRS e2e tests versus maintaining our own e2e test suite are evenly balanced with regard to:

  1. Maintainability: How easy/hard will it be to keep up with updates from OpenMRS while maintaining an evergreen e2e workflow?

  2. Validity: How do we eliminate false positives? Aren’t there situations where e2e tests pass at the OpenMRS level but fail at the MSF level, or vice versa, even though the same esm versions are maintained and the latest image is being used?

  3. Autonomy: How much control do we have over the OpenMRS e2e tests at execution time? Can we limit OpenMRS e2e to run only the tests we need?

  4. Disparity: Are OpenMRS e2e tests configuration agnostic? Aren’t there situations where e2e tests look for specific concepts, drugs, forms, etc. that might be missing in our implementation?

  5. Customizability: Will we be able to test configuration that is unique and specific to our implementation? How simple would it be?

Which approach would you choose for your organisation? cc @PIH @OHRI @Mekom @UCSF @dkayiwa @ibacher @michaelbontyes @jayasanka @pirupius @dkigen @piumal1999


Thanks for bringing this up. My suggestion is to reuse as much as possible to minimize duplication of efforts.

E2E tests are specific to each ESM module. Unless you fork a module, you can check out its repository directly within your test environment and run the tests. For example, see this GitHub Action in the distro: e2e-on-release.yml.
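A minimal sketch of that checkout-and-run pattern follows. The repository name, pinned version, test command, and environment variable are illustrative assumptions; see the actual e2e-on-release.yml workflow for how the distro really does it.

```yaml
# Hypothetical GitHub Actions job: run a community ESM module's own
# e2e tests against an implementation's test environment.
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the community module at the version we ship
        uses: actions/checkout@v4
        with:
          repository: openmrs/openmrs-esm-patient-management  # illustrative
          ref: v5.0.0  # pin to the esm version used in our distro

      - uses: actions/setup-node@v4
        with:
          node-version: 18

      - name: Install dependencies
        run: yarn install --immutable

      - name: Run the module's Playwright e2e tests
        run: yarn playwright test
        env:
          # Point the community tests at our own test server
          E2E_BASE_URL: https://o3.mytestserver.example
```

The key idea is pinning `ref` to the same esm version your distro runs, so the tests you pull always match the code you deploy.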

As a best practice, import the default demo data into your test environment to ensure all assertions work correctly.

If you fork a module, you’ll need to update the pre-written E2E tests to reflect your recent developments and extend or write new tests on your fork. The E2E tests are located in the /e2e directory of each repo.

If you have a new custom module, I recommend using the same E2E infrastructure we use in other modules to maintain consistency easily.

I don’t think this is a scenario that lends itself well to an either-or decision. It’s mostly one that will heavily depend on which customizations your implementation most relies on.

So you’ve identified one clear issue: metadata. The OpenMRS tests are not metadata agnostic. Some of them could be written in more metadata-agnostic ways, but as @jayasanka mentioned, the current tests mostly assume that you have the RefApp’s metadata or something that looks quite a bit like it. Some of this might be easy, e.g., rewriting this test to use process.env.E2E_LOGIN_DEFAULT_LOCATION_UUID; other parts may be very hard, e.g., the generateRandomPatient() functions that most of the patient management tests require.
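As a sketch of what the metadata-agnostic rewrite might look like (the helper function and the fallback value are illustrative, not the actual test code):

```typescript
// Minimal sketch: resolve the login location from an environment
// variable so a test no longer hard-codes RefApp metadata.
function resolveLoginLocationUuid(
  env: Record<string, string | undefined>,
): string {
  // An implementation sets E2E_LOGIN_DEFAULT_LOCATION_UUID in its CI;
  // otherwise fall back to a placeholder RefApp value.
  return env.E2E_LOGIN_DEFAULT_LOCATION_UUID ?? 'refapp-default-location-uuid';
}

// A Playwright test would then select the login location by this UUID
// instead of asserting on a hard-coded RefApp location name.
```

Implementations that rename or remove the RefApp location then only need to export one variable in CI rather than patch the test itself.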

In general, it’s to your advantage to reuse the community e2e tests as much as possible. The reason for this is twofold:

  1. The community will ensure that the e2e tests pass, at least with the RefApp metadata, limiting the amount of work you will need to do.
  2. Errors that are caught by community e2e tests will be fixed before the PRs are merged. However, we cannot make any guarantees about errors caught by custom e2e test suites.

Of course, you always retain full control over what actually runs on your side. This likely requires familiarity both with Playwright’s configuration as well as how our e2e tests use Playwright’s configuration. And, since our tests are designed to be modular, you should be able to implement tests similar to our tests for implementation-specific functionality.
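To make that execution-time control concrete, here is a sketch of a Playwright config an implementation might keep in its fork. The tag names and directory layout are assumptions, not established OpenMRS conventions; `grep` and `grepInvert` are standard Playwright configuration options.

```typescript
// playwright.config.ts — sketch: run only the community tests we care
// about, and skip suites known to depend on RefApp-only metadata.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './e2e/specs',
  // Only run tests tagged as relevant to our implementation.
  grep: /@our-implementation|@smoke/,
  // Explicitly exclude suites that assert RefApp-specific metadata.
  grepInvert: /@refapp-metadata-only/,
  use: {
    // Point the suite at our own test environment.
    baseURL: process.env.E2E_BASE_URL ?? 'http://localhost:8080',
  },
});
```

Because filtering happens at execution time, the community test files themselves stay unmodified and easy to update from upstream.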