Implementing E2E tests for the Login and Registration workflows of RefApp 3.x

After a good deal of trial and error, I've finished writing E2E tests for the login, patient search, and patient registration workflows. :hugs: The goal of the task was to give users confidence that the Login, Search & Registration workflows of RefApp 3.x work as expected.

Here’s the updated feature file for the login flow. I tried to make it as readable as possible; please let me know if it’s hard to understand.

Feature: User Login

  Scenario Outline: User login to the dashboard
    Given user arrives at the login page
    When the user logs in with "<username>" and "<password>" to the "<location>"
    Then the user should be "<ability>" to login

    Examples:
      | username   | password   | location          | ability |
      | admin      | Admin123   | Registration Desk | able    |
      | wrong user | Admin123   | Registration Desk | unable  |
      | admin      | wrong pass | Registration Desk | unable  |

Here’s a demo of the test.

This is the PR:

The next step was to implement the registration flow.

Here’s the feature file:

Feature: Search & Registration

  Background:
    Given the user login to the Registration Desk

  Scenario Outline: Search for a patient
    When the user search for "<patientName>"
    Then the result should be "<result>"

    Examples:
      | patientName | result           |
      | Kevin Jones | Found 1 patient  |
      | 100MQ       | No results found |

  Scenario Outline: Register a patient
    When the user clicks on the add patient icon
    And the user enters "<validity>" details for Andria Faiza
    And the user clicks on the create patient button
    Then the patient registration should be "<status>"
    Examples:
      | validity   | status       |
      | right      | successful   |
      | wrong      | unsuccessful |
      | incomplete | inactive     |

Here’s a demo:

This is the PR, which is still a work in progress. The login-flow PR should be merged before this one.

Also, I have implemented a GitHub Action for each workflow. The status is reflected in the README file as a badge.
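For anyone curious what such a workflow can look like, here is a minimal sketch using the cypress-io GitHub Action. The workflow name, trigger, and spec path are placeholders, not the actual files in my repo:

```yaml
# Illustrative only: workflow name, trigger, and spec path are assumptions.
name: login-workflow
on: [push]
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Installs dependencies and runs the matching Cypress specs
      - uses: cypress-io/github-action@v2
        with:
          spec: cypress/integration/login/**
```

The README badge then points at the workflow file, e.g. `![login](https://github.com/<org>/<repo>/actions/workflows/login.yml/badge.svg)`, with the placeholders filled in.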


Discovered issues

While implementing the tests for the above workflows, I discovered the following issues:

  1. Issues in the registration page

    1. When clicking on the “Create Patient” button,

      1. If the user hasn’t selected a gender:

        The page neither shows a validation error nor creates the patient; it simply remains silent.

      2. If the details are correct:

        A popup message is displayed and the patient is created on the backend, but the page doesn’t redirect anywhere. When the user tries to navigate to another page, a confirmation message is displayed as follows:

    2. The country field is auto-filled with “Uganda”, and it has to be cleared every time the patient’s country is different. However, this isn’t a problem if the default country is configurable.

  2. Different versions

    The UI is not identical when running the app with npx openmrs develop (within the openmrs-esm-template-app repo) versus visiting the demo server directly.

  3. Long page loading time

    Sometimes pages take a long time to load. In some cases a page also appears to have loaded while its components are still loading slowly. Therefore, I had to increase the maximum waiting time in Cypress to 40 seconds to keep the tests from failing.

    Tested on:

      - Browsers: Chrome and Safari
      - Devices: MacBook Pro M1 16GB, MacBook Pro 2015 8GB
      - Connection: ~10Mbps
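For reference, the 40-second limit doesn't have to be set per command; in Cypress (pre-10.x, which used cypress.json) it can be raised globally. The values below are illustrative, not necessarily the exact ones in my PR:

```json
{
  "defaultCommandTimeout": 40000,
  "pageLoadTimeout": 60000
}
```

`defaultCommandTimeout` applies to most `cy` commands (the default is 4 seconds), while `pageLoadTimeout` covers full page-load events.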

Other issues/doubts

The following are some other issues and doubts I have:

  1. Creating an unidentified patient

    Ticket RATEST-150 instructs creating an unidentified patient with only a gender, but the UI doesn’t allow it. I have raised this question on the ticket.

  2. Searching a patient

    It’s impossible to follow the given instructions for searching for a patient as written, especially the search for the non-existent patient “Andria Faiza”: the next scenario creates a patient with that exact name, so the test fails on its second run because “Andria Faiza” then exists in the database. And since the server is deployed in the cloud and the state of its data is unpredictable (anyone can edit the data on the test server), searching for an existing patient might fail as well.

  3. When to use scenarios and scenario outlines

    I have seen that we use scenario outlines in most places, but I’m not clear on when to use scenarios versus scenario outlines. I’d appreciate it if someone could share an online resource to read more.
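Regarding the unpredictable server state in point 2 above, one possible workaround (a sketch of my own, not something in the PR) is to derive a unique patient name per test run, so a re-run never collides with a record created earlier:

```javascript
// Hypothetical helper, not from the actual test suite: appends a
// per-run suffix so repeated runs register distinct patients.
function uniquePatientName(base) {
  // Base-36 timestamp keeps the suffix short but unique per run.
  const suffix = Date.now().toString(36);
  return `${base} ${suffix}`;
}

console.log(uniquePatientName("Andria Faiza"));
```

The search scenario would then look for the generated name before registration (expecting no results) and after it (expecting one patient), instead of relying on fixed data in the shared database.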

cc: @k.joseph @bistenes @grace @kdaud @sharif @christine


Thanks @jayasanka. Does the MFE currently support creating unidentified patients? If it doesn’t, then there is no way to automate that; if it does, then there should be a way, I think.

Here the system should first search for a non-existing patient. When the patient is not in the database, in other words when the search returns nothing, the automation should go ahead and create that patient with the exact name and other attributes provided.

I agree, because I also faced this. To cater for it, perhaps we should write logic that tells the registration page that when the given name already exists while registering a patient, the test should not fail but rather succeed with the same name.

I’m not sure about this; patients are generic, and automatically editing a patient should be restricted, I think.

We use scenario outlines when the same test is performed multiple times with different combinations of values, in other words when you want to provide examples at the end of the scenario outline:

Scenario Outline: Search for a patient
  When the user search for "<patientName>"
  Then the result should be "<result>"

  Examples:
    | patientName | result           |
    | Kevin Jones | Found 1 patient  |
    | 100MQ       | No results found |

A plain scenario is for when you don’t provide examples at the end. Looking forward to hearing from @k.joseph @ibacher @bistenes @kdaud, thanks.


Feature files store user stories, known as scenarios, which are written in a human-readable language. These user stories mimic the user experience of a feature.

AFAICT, the examples in a feature file paint a picture of some user input and the expected result, but these examples are not reflected in the step definitions in any way. So including them is optional. However, if implementers make use of feature files to understand the tests, including examples can be a good idea, though only a few of them take the initiative to look into the feature files.


Wonderful update @jayasanka, this is great, and I’m already planning to review your PR soon. While the microfrontends team responds to your discovered issues, I’d like to note that the workflow scenarios should be kept to the EMR standard, and feel free to include unsupported sections if they are listed by the ticket or use cases. Alternatively, you could apply an ignore mechanism of removing the trigger tag from such scenarios, to be re-enabled once the feature is supported.