Platform 2.0 Beta testing against RefApp on UAT server

We are trying to figure out how to test Platform 2.0 Beta, and based on previous discussions on Talk and PM calls, we have zeroed in on the following two approaches:

  1. Deploy the Reference Application on Platform 2.0 Beta (this setup currently lives on uat-platform.openmrs.org), then run the Automated and Release tests for the RefApp.
  2. Test the REST endpoints of the rest-ws and fhir modules on the same UAT server, either manually or by automating the process.
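For point 2, a quick manual smoke check can be as simple as a GET against the REST session resource with Basic auth. Below is a hedged sketch in plain Java; the base URL and the credentials (admin:Admin123 is the usual demo default) are placeholders to be replaced with the real UAT values:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

/**
 * Hedged sketch of a manual smoke check for the REST module. The base URL and
 * credentials are placeholders; substitute the real UAT values before use.
 */
public class RestSmokeCheck {

    public static void main(String[] args) {
        String base = System.getProperty("uat.url", "https://uat-platform.openmrs.org/openmrs");
        String creds = System.getProperty("uat.creds", "admin:Admin123"); // placeholder

        // REST endpoints use HTTP Basic auth; build the header value.
        String auth = Base64.getEncoder()
                .encodeToString(creds.getBytes(StandardCharsets.UTF_8));

        // The session resource is a cheap way to verify the REST module responds.
        System.out.println("GET " + base + "/ws/rest/v1/session");
        System.out.println("Authorization: Basic " + auth);
        // To actually send the request, pass these to java.net.HttpURLConnection
        // or curl, e.g.: curl -H "Authorization: Basic <token>" <url>
    }
}
```

Running the same check with `-Duat.url=...` pointed at each server makes it easy to compare environments.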

With regard to point 1, we would like to leverage some of the work you did on creating Test Cycles in Jira for the Reference Application JIRA project. Specifically, we wish to clone the Automated Tests and OpenMRS 2.2 release tests and run them (manually or automatically) against the RefApp instance on uat-platform.openmrs.org. Would you have any tips or suggestions on how we should go about cloning and running these tests?

// cc @raff @tmueller

@tmueller is no longer actively involved in testing.

For setting up release tests, please see point 4 of the QA Leaders section at https://wiki.openmrs.org/display/docs/QA+Testing+Manual

Most tests have been automated, so you can find them under the Automated Tests cycle, but only a few of them can be reliably run in a CI environment because of how delays and timeouts are handled. We are in the process of fixing the rest. However, you should still be able to run them all locally (local runs are less prone to delays and timeouts than CI), which I would recommend for the platform release. I’ll add instructions on how to do that to the wiki by tomorrow.
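For context on why CI runs are flaky: UI tests that assert immediately tend to fail when a page loads slowly. A minimal, illustrative sketch (not code from the suite) of the explicit-wait/polling pattern that makes such tests tolerant of delays:

```java
import java.util.function.Supplier;

/**
 * Illustrative sketch of the polling pattern that makes UI tests tolerant of
 * CI slowness: retry a condition until a timeout instead of asserting once.
 * All names here are hypothetical, not from the RefApp test suite.
 */
public class WaitUtil {

    /** Polls until the condition returns true or the timeout elapses. */
    public static boolean waitFor(Supplier<Boolean> condition, long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (Boolean.TRUE.equals(condition.get())) {
                return true;
            }
            Thread.sleep(pollMillis);
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~200 ms, simulating a slow page load.
        boolean ok = waitFor(() -> System.currentTimeMillis() - start > 200, 2000, 50);
        System.out.println(ok ? "condition met" : "timed out");
    }
}
```

Selenium ships an equivalent mechanism (explicit waits), which is what the CI fixes would typically apply.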

Some automated tests may be failing, and they will need to be run manually to verify why. There are a few people listed under the QA team at https://wiki.openmrs.org/display/docs/QA+Testing+Manual whom I’d invite to a testing sprint and ask to run tests manually as described at https://wiki.openmrs.org/display/docs/How+to+execute+a+Test+Case


Thanks for the info, @raff. We have the RefApp running against Platform 2.0 Beta on uat-platform.openmrs.org, so I think we are good to go ahead and create the following test cycles as well as a testing sprint (it would be great to have it this week):

  • Clone the test cycle “OpenMRS 2.2 Release Tests” to “Platform 2.0 Beta Release Tests”
  • Clone the test cycle “OpenMRS 2.2 Functional Tests” to “Platform 2.0 Beta Functional Tests”

I was wondering who should officially be handling these tasks (sorry, I’m new to QA). I would definitely love to help out if we are looking for someone! :smiley:

I just cloned openmrs-distro-referenceapplication and ran mvn clean install to run the tests locally. Only 8 tests ran, of which 2 were skipped. This matches exactly how these tests run on Bamboo, i.e. the same tests ran both locally and on Bamboo. Could you direct me to the updated instructions for running all of the tests locally? :smiley:

That would be great! Also, if we need more manpower here, there are 19 awesome folks from our community who reached out to me offering to help test Platform 2.0 Beta. I’d be happy to connect them with the sprint lead, whoever that officially ends up being.

Please see https://github.com/openmrs/openmrs-distro-referenceapplication#running-ui-tests-locally

@maany, feel empowered to create test cycles and add any failing automated tests to the Platform 2.0 Beta Functional Tests cycle so that they are executed manually to confirm whether something actually broke or a test simply needs to be updated.


@maany,

I took the list of automated tests for the reference application and marked some as high priority here. If folks aren’t able to get through all the tests, I would prioritize these.

https://docs.google.com/spreadsheets/d/1wR_tQlqOfVEemLzQnhnzYguMA9q3tKoXSdVFr09qguw/pubhtml?gid=0&single=true


Thanks @raff and @burke! I’ve incorporated your advice and created the following two test cycles:

  1. Platform 2.0 Release Tests
  2. Platform 2.0 Beta and OpenMRS 2.4-SNAPSHOT Functional Tests

I have also updated the Platform 2.0 Beta testing Wiki page to include accurate instructions on executing the tests listed in the above test cycles.

@wyclif @dkayiwa, could you look into the following failing tests whenever you get a chance this week? :smiley:

  1. https://issues.openmrs.org/browse/RA-674

  2. https://issues.openmrs.org/browse/RA-682

  3. https://issues.openmrs.org/browse/RA-694

  4. https://issues.openmrs.org/browse/RA-747

  5. https://issues.openmrs.org/browse/RA-772

These tickets seem to belong to the Reference Application project; why are they considered blockers for the platform? Do their underlying root causes come from the platform? And why should tests hold back the release if the features are actually working?

Hey @wyclif,

[quote=“wyclif, post:8, topic:6393”]
Are their underlying root causes coming from the platform?
[/quote]

We want to figure this part out. If the root cause is in the platform, we intend to fix it before the release. :slight_smile:

I see. ProviderTest seems to be marked as ignored and doesn’t appear on this page. The instructions on this page say you shouldn’t have to run the tests manually; they should be marked as failed if they fail on CI. When I look at CI, everything is green, so I don’t understand what needs to be fixed.

@wyclif, CI only runs 8 tests, of which 2 are skipped: https://ci.openmrs.org/browse/REFAPP-OMODDISTRO-INTTESTS-4659/test

There are about 80 tests in total. In order to execute the remaining tests on CI, they need to be modified as described here: Automated Testing Guidelines - Documentation - OpenMRS Wiki

That’s why we decided to run the tests manually for now.

In the case of ProviderTest, the failure seems to be caused by step 2 here: https://issues.openmrs.org/secure/enav/#/540?query=issue%3DRA-747&offset=1

I understand the ambiguity surrounding the exact description of the task. It’s not a very rigid task; we just want a dev/5 to take a look at the failing tests. The idea, like you mentioned before, is to figure out the root cause of the failing tests. If fixing the root cause requires changes to platform code, we intend to make those changes now. If the code changes need to be made in the RefApp, a ticket should do! I hope this makes sense :slight_smile:

It doesn’t make much sense to me why UI tests would fail just because you switched to Platform 2.0. The first thing that comes to mind is that the data and metadata on the uat01 server are different from those on the int02 server the tests are scripted against. I will look into this and see if that is the cause.

Something is still not clear, e.g. where does one run the tests from? According to the instructions on one of the wiki pages about these tests, it seems like you can’t do it manually, so how do we expect a developer to debug the tests and fix them? And if they can run them manually, where is the code they are supposed to check out to work on? It surely can’t be the master branch of the distro project.

Anyway, I took some time to look closely at some of the tests and noticed some glaring issues. The key ones are:

  • It seems I guessed right that the metadata differences between test servers make it impossible for the tests to pass against all of them. E.g. LoginTest uses the Unknown Location to authenticate, but on the UAT server the Unknown Location doesn’t have the Login tag, so you can’t log in anyway.

  • Some tests appear to be incorrectly written. E.g. when the web driver instance is started, the code automatically authenticates the user as admin, but some tests, as written, attempt to re-authenticate (possibly as a different user for test purposes). That means they go to the login page while a user is already logged in, which of course redirects them back to the home page, so some of these tests’ assertions fail. Also, some tests expect a specific app count, which can vary based on the modules and module versions installed on a given test server.

  • Some tests don’t actually exist, e.g. the tests mentioned in the descriptions of RA-772 and RA-747; the Delete Visit Note Test mentioned in RA-682 doesn’t exist either.

In summary, I found nothing suggesting the failures are caused by Platform 2.0; the tests are just incorrect. It appears we may need a separate branch for each test server, or to make the metadata uniform across all test servers. We also need to move some hard-coded values out of the tests, e.g. the app count, because it depends on the version of a module or of the RefApp distro installed on the test server you are running the tests against.
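The hard-coding problems described above could be addressed by reading server-specific values from configuration instead of baking them into the tests. A hypothetical sketch (none of these names come from the actual suite):

```java
import java.util.Arrays;
import java.util.List;

/**
 * Illustrative sketch of moving server-specific values out of the tests: the
 * login location and the minimum expected app count are read from system
 * properties rather than hard-coded, so one branch can run against int02,
 * uat01, etc. All names here are hypothetical.
 */
public class ServerSpecificConfig {

    // Hypothetical stand-in for the apps a test finds on the home page.
    static List<String> visibleApps() {
        return Arrays.asList("Registration", "Find Patient", "Active Visits", "Reports");
    }

    public static void main(String[] args) {
        // Defaults are examples; override per server, e.g. -Dlogin.location=...
        String loginLocation = System.getProperty("login.location", "Unknown Location");
        int minApps = Integer.parseInt(System.getProperty("min.app.count", "3"));

        System.out.println("logging in at: " + loginLocation);

        // Assert a lower bound rather than an exact count, so extra modules
        // installed on a given server do not break the test.
        int actual = visibleApps().size();
        System.out.println(actual >= minApps ? "app count ok: " + actual : "too few apps: " + actual);
    }
}
```

The same idea (one configurable test branch instead of per-server branches) avoids the maintenance cost of keeping several branches in sync.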

Hi @maany! I want to fix the logout issue, but I don’t know how you’re building the distro that you’re running on the UAT server. Are you doing it manually?

After committing and ensuring that the corresponding module’s CI plan has run successfully, just manually run this plan: https://ci.openmrs.org/browse/REFAPP-DOM

The server that that plan deploys to is completely different and is not running Platform 2.0 and the updated modules.

Just do the above and you will see your changes reflected on http://uat01.openmrs.org:8080/openmrs

My question is not really how to deploy to the UAT server, but rather where to get the code to set up my dev environment, so that I can reproduce the issue locally and try to fix it.

Oh, I see! Here we go: https://github.com/dkayiwa/openmrs-distro-referenceapplication

Great! This is what I’ve been looking for all along. Thanks, @dkayiwa!