We are trying to figure out how to test Platform 2.0 Beta, and based on previous discussions on Talk and PM calls, we have zeroed in on the following two approaches:
1. Deploy the Reference Application on Platform 2.0 Beta (this setup currently lives on uat-platform.openmrs.org), then run the Automated and Release tests for the RefApp.
2. Test the REST endpoints of the rest-ws and fhir modules on the same UAT server, either manually or by automating the process.
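As a sketch of what automating those REST checks could look like, the helpers below build the request URL and the Basic auth header. The base URL and the admin/Admin123 credentials are illustrative assumptions, not confirmed details of the uat server:

```java
import java.util.Base64;

// Sketch of a REST smoke check for the rest-ws module.
// The base URL and credentials below are assumptions for illustration only.
public class RestSmokeCheck {

    // Build the full REST URL for a given resource, e.g. "session" or "patient".
    static String restUrl(String baseUrl, String resource) {
        return baseUrl + "/ws/rest/v1/" + resource;
    }

    // Build the HTTP Basic authentication header value for the given credentials.
    static String basicAuth(String user, String password) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes());
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // Hypothetical target; replace with the real server when actually running.
        String base = "https://uat-platform.openmrs.org/openmrs";
        System.out.println(restUrl(base, "session"));
        System.out.println(basicAuth("admin", "Admin123"));
        // An actual check would send a GET with this Authorization header
        // (e.g. via java.net.http.HttpClient) and assert on the HTTP status.
    }
}
```

The same URL and header also work for quick manual checks with curl.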
With regard to point 1, we would like to leverage some of the work you did creating test cycles in JIRA for the Reference Application project. Specifically, we wish to clone the Automated Tests and the OpenMRS 2.2 Release Tests cycles and run them (manually or automatically) against the RefApp instance on uat-platform.openmrs.org.
Would you have any tips or suggestions on how we should go about cloning and running these tests?
Most tests have been automated, so you can find them under the Automated Tests cycle, but only a few of them can be run reliably in a CI environment due to how delays and timeouts are handled. We are in the process of fixing the rest. However, you should still be able to run them all locally (which is less prone to delays and timeouts than CI), and that is what I would recommend for the platform release. I’ll add instructions on how to do that to the wiki by tomorrow.
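For anyone digging into those delay and timeout failures: the usual fix is to replace fixed sleeps with an explicit polling wait. A minimal, dependency-free sketch of the pattern (all names here are mine, not from the actual test suite):

```java
import java.util.function.Supplier;

// Sketch of an explicit polling wait, the usual cure for timing-sensitive
// UI tests that fail on slow CI agents. All names are illustrative.
public class PollingWait {

    // Repeatedly evaluate `condition` until it returns true or `timeoutMs` elapses.
    static boolean waitUntil(Supplier<Boolean> condition, long timeoutMs, long intervalMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.get()) {
                return true;
            }
            Thread.sleep(intervalMs);
        }
        return condition.get(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~200 ms, well within the 2 s timeout.
        boolean ok = waitUntil(() -> System.currentTimeMillis() - start > 200, 2000, 50);
        System.out.println(ok);
    }
}
```

Selenium's own `WebDriverWait` does essentially this; the point is that the timeout bounds the worst case while fast runs return as soon as the condition holds.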
Thanks for the info @raff
We have the RefApp running against Platform 2.0 Beta on uat-platform.openmrs.org, so I think we are good to go ahead and create the following test cycles, as well as a testing sprint (it would be great to have this happen this week):
Clone the test cycle “OpenMRS 2.2 Release Tests” to “Platform 2.0 Beta Release Tests”
Clone the test cycle “OpenMRS 2.2 Functional Tests” to “Platform 2.0 Beta Functional Tests”
I was wondering who should officially be handling these tasks (sorry, new to QA). I would definitely love to help out if we are looking for someone!
I just cloned openmrs-distro-referenceapplication and ran mvn clean install to run the tests locally. Only 8 tests ran, of which 2 were skipped. This matches exactly how these tests run on Bamboo, i.e. the same tests ran both locally and on Bamboo. Could you direct me to the updated instructions for running all of the tests locally?
That would be great!! Also, if we need more manpower here, there are 19 awesome folks from our community who have reached out to me to help us test Platform 2.0 Beta. I’d be happy to connect them with the sprint lead, whoever that officially turns out to be.
@maany, feel empowered to create the test cycles and add any failing automated tests to the Platform 2.0 Beta Functional Tests cycle so that they are executed manually, to confirm whether something actually broke or a test simply needs to be updated.
I took the list of automated tests for the reference application and marked some as high priority here. If folks aren’t able to get through all the tests, I would prioritize these.
These tickets seem to belong to the Reference Application project, so why are they considered blockers for the platform? Do their underlying root causes come from the platform? And why should tests hold back the release if the features actually work?
I see. ProviderTest seems to be marked as ignored and doesn’t appear on this page. The instructions on this page say you shouldn’t have to run the tests manually; they should be marked as failed if they fail on CI. When I look at CI, everything is green, so I don’t understand what needs to be fixed.
I understand the ambiguity surrounding the exact description of the task. It’s not a very rigid task; we just want a dev/5 to take a look at the failing tests. The idea, as you mentioned before, is to figure out the root cause of the failures. If fixing the root cause requires changes to platform code, we intend to make those changes right away. If the changes are required in the RefApp, a ticket should do! I hope this makes sense.
It doesn’t make much sense to me that UI tests would fail just because you switched to Platform 2.0. The first thing that comes to mind is that the data and metadata on the uat01 server differ from those on the int02 server the tests are scripted against. I will look into this and see if it is the cause.
Something is still not clear, e.g. where does one run the tests from? According to the instructions on one of the wiki pages about these tests, it seems you can’t do it manually, so how do we expect a developer to debug the tests and fix them? And if they can run them manually, where is the code they are supposed to check out to work on? It surely can’t be the master branch of the distro project.
Anyway, I took some time to look closely at some of the tests and noticed some glaring issues. The key ones are:
It seems I guessed right that the differing metadata on the test servers makes it impossible for the tests to pass against all of them. For example, LoginTest uses the Unknown Location to authenticate, but on the UAT server the Unknown Location doesn’t have the login tag, so you can’t log in at all.
Some tests appear to be incorrectly written. For example, when the web driver instance is started, the code automatically authenticates the user as admin, but some tests are written to re-authenticate, possibly as a different user for test purposes. That means they go to the login page while a user is already logged in, which of course redirects back to the home page and makes some of those tests’ assertions fail. Also, some tests expect a specific app count, which can vary depending on the modules and versions installed on a given test server.
Some tests don’t actually exist, e.g. the tests mentioned in the descriptions of RA-772 and RA-747; the Delete Visit Note Test mentioned in RA-682 doesn’t exist either.
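One way around the hard-coded login location issue would be to pick whichever location actually carries the login tag at runtime instead of assuming Unknown Location has it. A simplified sketch, assuming locations and their tags have already been read from the server (the data model here is made up for illustration; a real test would read it from the login page or REST):

```java
import java.util.Map;
import java.util.Optional;
import java.util.Set;

// Sketch of picking a login location dynamically by tag instead of
// hard-coding "Unknown Location". The data model is illustrative only.
public class LoginLocationPicker {

    // Return the first location carrying the given tag, if any.
    static Optional<String> firstWithTag(Map<String, Set<String>> locationTags, String tag) {
        return locationTags.entrySet().stream()
                .filter(e -> e.getValue().contains(tag))
                .map(Map.Entry::getKey)
                .findFirst();
    }

    public static void main(String[] args) {
        Map<String, Set<String>> locations = Map.of(
                "Unknown Location", Set.of(),                  // no login tag, as on uat
                "Outpatient Clinic", Set.of("Login Location"));
        System.out.println(firstWithTag(locations, "Login Location").orElse("none"));
    }
}
```

A test written this way passes on any server that has at least one tagged login location, regardless of which location it is.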
In summary, I found nothing suggesting that the failures are caused by Platform 2.0; the tests are just incorrect. It appears we may need either a separate branch for each test server, or to make the metadata the same on all test servers. We also need to move some hard-coded values out of the tests, e.g. the app count, because it depends on the versions of the modules or the RefApp distro installed on the test server you are running against.
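Moving the hard-coded app count out of the tests might look like this in practice: read the expected count from a system property with a per-environment override, so the same test works against servers with different module versions. The property name is hypothetical:

```java
// Sketch of externalizing the expected app count so the same test can run
// against servers with different module versions. The property name
// "refapp.expected.app.count" is made up for illustration.
public class ExpectedAppCount {

    // Return the count from the system property if set, otherwise the default,
    // e.g.:  mvn test -Drefapp.expected.app.count=12
    static int expectedAppCount(int defaultCount) {
        return Integer.getInteger("refapp.expected.app.count", defaultCount);
    }

    public static void main(String[] args) {
        System.out.println(expectedAppCount(9));
    }
}
```

The same pattern would work for any other server-dependent constant the tests currently hard-code.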
Hi @maany! I want to fix the logout issue, but I don’t know how you’re building the distro that you’re running on the uat server. Are you doing it manually?
My question is not really how to deploy to the uat server, but rather where to get the code to set up my dev environment, so that I can reproduce the issue locally on my machine and try to fix it.