Executing manual test cases

Hello, I am interested in testing the OpenMRS application. I want to start by executing manual test cases, as that would help me understand the development and testing process/cycle as well as the application itself. That understanding would help me contribute more to the project in the future.

I tried to follow this page - https://wiki.openmrs.org/display/docs/How+to+execute+a+Test+Case - however, I am not able to view test cycles/test cases. Is the QA team accepting new members? If yes, how do I add myself to the QA Testing Manual?

Yes, of course the QA team is accepting new members, and thanks for showing the desire to join them. :smile: Did you get a chance to look at this? https://wiki.openmrs.org/display/docs/QA+Testing+Manual If you are logged in to the wiki, you will see an "Edit" button at the top which you can click to edit the page. At the bottom of the team members list, type the [ character and then start typing your name to see it get autocompleted. After this, you only need to save to persist the changes.

Tell us how that goes! :slight_smile:

Hello, thank you @dkayiwa. I followed the steps above and added myself to the QA group. However, I am still not able to see the 'Tests' menu in the menu bar or the 'Test Cycles' option in the left column of the dashboard page. I am attaching a screenshot for your reference.

Am I missing something (do I have to install Zephyr on my system?), or do those options only appear after the release of RA 2.6?

Thank you for your time in advance

I believe the problem is that our Zephyr For JIRA add-on is disabled. Can you please ask at help.openmrs.org for this to be looked into?

That was correct :slight_smile:

Zephyr was not renewed, so it was disabled. I believe the problem is solved now.

Yes, the problem is solved. Thank you @cintiadr.

If it was disabled, then that means no manual testing was being done, right? Maybe only automated testing is active.

Also, for RA 2.6, I see only ad hoc testing under test cycles, and it's blank, so if I'm not wrong, the test cases may be outdated too.

So where do we stand now? What am I supposed to do? Where do I start? I am totally confused. :slight_smile:

Please give me a starting point.

Also, which wiki page can I refer to for an overview of the present testing process, both manual and automated?

Good day :slight_smile: Thank you.

@rajesh, thanks for your interest in doing QA! @domenico and @teleivo have been working on QA most recently, so they may be able to say offhand where help is needed.

You are right that we are not doing manual testing anymore, since all manual tests have been automated by @SolDevelo developers and are run for each build of the Reference Application. The most up-to-date state of our UI testing is described here. We need to update the docs on the wiki. Would you like to take care of that?

One area we need to tackle is setting up automated performance testing; see here.

Another thing to look at is slowness of our UI tests.

Looking at https://saucelabs.com/u/openmrs, you can see that some of our tests take a lot of time, e.g. https://saucelabs.com/beta/tests/b6d2898df7944a439d01341e0ebda12d/commands

AddProviderTest takes almost 3 minutes. It seems like the search widget is not handled effectively by our test framework, because it waits 2 minutes for results even though they are fetched after a few seconds.
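The underlying idea of the fix is general enough to sketch in plain Java (no Selenium; all names here are invented for illustration): a bounded polling wait returns as soon as the condition holds, rather than blocking for the full timeout the way the slow tests appear to.

```java
import java.util.function.BooleanSupplier;

// Sketch: poll a condition until it holds or a deadline passes.
// Returns early the moment the condition is true, instead of
// sleeping for the whole timeout.
public class PollingWait {

    static boolean waitFor(BooleanSupplier condition, long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true; // condition met: return immediately
            }
            try {
                Thread.sleep(pollMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false; // timed out
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~200 ms; the wait returns
        // long before the 5-second cap.
        boolean met = waitFor(() -> System.currentTimeMillis() - start > 200, 5000, 50);
        System.out.println("condition met: " + met);
    }
}
```

Running it prints "condition met: true" after roughly 200 ms, not after the full 5 seconds, which is the behavior we would want from the search-widget wait.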

In my view (in order of priority), it would be of great help to:

  1. add tests to openmrs-core. There are lots of files with few or even 0 tests, so you can easily find work here :slight_smile: and become our hero in no time :heart: Just check https://coveralls.io/builds/10563827, or locally run mvn clean package && mvn jacoco:report; the report is then in api/target/site/jacoco/index.html.
  2. establish performance tests https://issues.openmrs.org/browse/TRUNK-5129

Number 1 is super important, since we are adding features and might refactor old code, but if we don't have tests we might introduce bugs.
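To make item 1 concrete, here is a minimal sketch of the kind of small unit test that raises coverage: pick an untested utility method and assert its edge cases. The method and class names below are invented; real openmrs-core tests use JUnit, but plain asserts keep the sketch self-contained.

```java
// Hypothetical example of testing a small, previously untested utility.
public class StringUtilsSketch {

    // stand-in for an untested utility method in openmrs-core
    static boolean isBlank(String s) {
        return s == null || s.trim().isEmpty();
    }

    public static void main(String[] args) {
        // edge cases first: null, empty, whitespace-only, then the happy path
        assert isBlank(null);
        assert isBlank("");
        assert isBlank("   ");
        assert !isBlank("openmrs");
        System.out.println("all checks passed");
    }
}
```

Even tests this small are valuable: they document the intended edge-case behavior and catch regressions when the code is later refactored.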

@raff, yes, let's update the docs on the wiki. However, I need some guidance: what to update, when to update, and is there any template I need to follow while updating?

@teleivo, yes, I am interested in adding tests for those files. I followed the first link and noticed that coverage is 56%. I have also glanced at the list of files with 0 coverage. I tried to open one of the files but got the message 'source not available on github'. How do I move further? Please provide more information, e.g. do I have to sign up with Coveralls?


@rajesh, welcome on board. I'm working on the UI tests; I have only been working on them for a few months, so I still lack a full view of the test activities. However, I'm trying to recover some of the tests marked as ignored, and indeed I have managed to get back some of them. Right now I'm fixing a test that I refactored (I can make it work properly locally, but it breaks when executed on Saucelabs). I can definitely provide you some guidance if you are willing to work on the UI tests.

@raff as soon as I can, I’ll look at those slow tests.

It seems that the tests at the "core level" are unit tests (here a unit is a class), aren't they? Just curiosity.

I don't know yet why that happens (no need to sign up on Coveralls). Just use the link I provided or, better, execute it locally as I described.

@domenico, yes, I'm in. Presently I'm reading the wiki docs about UI tests.

@domenico, I have read some wiki docs and some instructions in the GitHub repositories, but I seem to be getting nowhere. So far I have only been able to install the OpenMRS SDK; I thought of using it to run the application locally so that I can fork and run UI tests locally.

As you are already into it, please guide me on what to do, what resources I need, etc.

Thank you

@rajesh, basically, I followed the steps here. Since they are a bit out of date, I recommend the following instructions (by the way, I've created a new issue regarding the README; as soon as my last committed test works, I'll update it). Preface: if you just want to work on the UI testing side, you don't need to download the SDK, nor do you need to download an OpenMRS package. That said:

  1. Clone the UI test cases repository. If you don't remember the command, you can look at this.
  2. Import the project into Eclipse (I don't have experience with other IDEs, but I think they should work as long as they support Maven). From Eclipse, use File -> Import... -> Existing Maven Projects.
  3. Try to run one of the test cases there with "Run As -> JUnit Test".

At the moment, I'm trying to recover ignored tests (the ones annotated with @Ignore). To avoid overlapping with each other, we should agree on which tests to work on. To this end we can exchange private messages; we don't need to bother the others :wink:

For developing/fixing tests we have to:

  1. open an issue
  2. create a branch named with the issue id

The instructions here should still guide you. Also, take a look at this while coding. Keep in contact!

FYI, I sped up some of our UI tests; see e.g. AddProviderTest at https://github.com/openmrs/openmrs-distro-referenceapplication/commit/ce08105d6a41e07f50c70a8c96feb214eb821399 by getting rid of waitForStalenessOf, which was the culprit. Basically, the webdriver waited for 2 minutes before completing waitForStalenessOf for some reason. We should probably remove that method from the uitestframework library to prevent its use in tests.

Also, we have some tests like AddProviderTest and RetireProviderTest which should really be one test; e.g. RetireProviderTest first adds a provider, basically doing what AddProviderTest does. @domenico, do you have some time to identify such repetitions and merge each group into one test, e.g. a ManageProviderTest replacing all the …ProviderTest test classes?
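A hypothetical sketch of that merge (class, method, and step names are all invented; the real tests would drive Selenium, stubbed out here as log entries): folding the add/edit/retire steps into one ordered scenario means the expensive "add" setup runs once instead of once per test class.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a single lifecycle test replacing AddProviderTest,
// (a hypothetical) EditProviderTest, and RetireProviderTest.
public class ManageProviderSketch {

    static final List<String> log = new ArrayList<>();

    // stand-ins for the UI steps the real Selenium tests would perform
    static void addProvider(String name)    { log.add("added:" + name); }
    static void editProvider(String name)   { log.add("edited:" + name); }
    static void retireProvider(String name) { log.add("retired:" + name); }

    // one test exercising the whole lifecycle, reusing the provider
    // created in the first step rather than re-creating it per class
    static void manageProviderLifecycle() {
        log.clear();
        addProvider("Test Provider");
        editProvider("Test Provider");
        retireProvider("Test Provider");
    }

    public static void main(String[] args) {
        manageProviderLifecycle();
        System.out.println(log);
    }
}
```

The tradeoff, as discussed below, is that a failure in a later step (retire) can be caused by an earlier one (add/edit), so the combined test needs clear step-level reporting to stay diagnosable.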

@raff, I saw your modification. I’ll try to understand if we can definitely get rid of that method from UI test library.

About test redundancy, I have already noticed it; indeed, I've worked on AddEditBlockTest (or something like that), ManageProviderScheduleTest, and DeleteBlockTest (sorry, I don't remember the exact names and I can't check them right now). Anyway, I've already proposed merging some of them, but there is a tradeoff, as you pointed out a post ago: we have to deal with the size of the test itself. Therefore, in my case I kept the original classes, but: a) aggregated the common behavior in the Manage….java class; b) differentiated the behavior of Add… and Delete… so that if Delete fails, you know for sure it is not due to editing an existing appointment, while if only Add fails, it is likely due to the edit operation, because otherwise Delete should fail as well. Despite my effort to separate the behaviors, they still share some similar actions.

You can look at my pull request. That said, I'll look at the classes you mentioned above and discuss possible solutions with you or whoever is interested. Maybe I can do this:

  • examine all the test cases
  • identify which ones overlap
  • agree on a strategy to apply to all test cases (and eventually document it if it turns out to be successful)

Thank you.