As shown in OCLOMRS-1042, we have a recurring issue where errors occur when trying to import concepts from OCL into an OpenMRS EMR using the OCL Module.
This is quite a negative experience for implementers: You’ve done a bunch of work either in a CSV or in the Dictionary Manager, then you think you have all your concepts set up and ready to go, and you import them, and then… you discover a whole bunch of errors! We thought we had fixed this a few months ago, then it happened again at a critical moment right before a real-world implementation, which blocked the implementation from using the OCL Module for their site launch.
We should really have some automated test coverage in the OCL Module for this problem (kudos @michaelbontyes for this great point). That way, if something changes that would cause the OCL Module to produce a bunch of errors, we could fix it proactively, rather than only finding out after a user has had a bad experience. We do try to do regular manual testing, but this does not catch everything, and our PM resources are stretched thin.
Thank you to @moshon for working on the current bug. But to prevent this in the future, what can we do next?
Ideally, the automated testing would cover in detail both the ability to populate/update a source in OCL and the ability to import a dictionary from OCL into OpenMRS, to improve stability and reliability.
Based on a shared set of test cases, it would make sure that all values/meta-values/mappings are correctly imported into an OCL source using the new CSV bulk import UI, and that a dictionary populated from that source is correctly imported into OpenMRS using the OCL subscription module. Otherwise, we would have to verify each term individually every time we add new terms or implementations, and again for regression testing in both OCL sources and OpenMRS implementations with each future release.
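To make the idea of shared test cases concrete, here is a minimal sketch of a shape both sides of the pipeline could assert against. The field names and values are illustrative only and do not mirror the exact OCL or OpenMRS schemas:

```typescript
// Hypothetical shape for a shared test case; field names are illustrative
// and should be adapted to the real OCL and OpenMRS schemas.
interface ExpectedConcept {
  externalId: string; // UUID expected to match between OCL and OpenMRS
  conceptClass: string;
  datatype: string;
  names: { name: string; locale: string; nameType: string }[];
  mappings: { source: string; code: string; relationship: string }[];
}

// One shared list, asserted twice: once against the OCL source after the
// CSV bulk import, and once against OpenMRS after the subscription import.
const expectedConcepts: ExpectedConcept[] = [
  {
    externalId: '12345678-aaaa-bbbb-cccc-1234567890ab', // made-up example UUID
    conceptClass: 'Diagnosis',
    datatype: 'N/A',
    names: [{ name: 'Example diagnosis', locale: 'en', nameType: 'FULLY_SPECIFIED' }],
    mappings: [{ source: 'CIEL', code: '12345', relationship: 'SAME-AS' }], // made-up code
  },
];
```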
@ibacher, do you think that such data-layer testing could be useful for content management (like source and dictionary cloning) in the Dictionary Manager as well?
I haven’t studied this in too much detail, but I agree with @ibacher that this doesn’t seem to be something that requires Selenium or front-end testing. I don’t know the nature of the bugs being discussed, but the three steps that @ibacher describes seem like the key things to test.
3. Verify that the correct metadata is set up: find a way to compare the content of the collection and its release with what was imported into OpenMRS (for each field of each concept)
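A rough sketch of what step 3 could look like, assuming made-up base URLs and demo credentials; the OCL endpoint path and the OCL response field names are assumptions to verify against the real API, while the concept resource is the standard OpenMRS REST API:

```typescript
// Illustrative field-by-field comparison for one concept. Base URLs and
// credentials are placeholders; the OCL collection endpoint is an assumption.
const OCL_URL = 'https://api.staging.openconceptlab.org'; // placeholder
const OPENMRS_URL = 'https://qa.openmrs.org/openmrs';     // placeholder
const AUTH = 'Basic ' + Buffer.from('admin:Admin123').toString('base64'); // demo creds

async function compareConcept(conceptId: string, uuid: string): Promise<string[]> {
  const ocl = await (
    await fetch(`${OCL_URL}/orgs/MyOrg/collections/MyDict/concepts/${conceptId}/`)
  ).json();
  const omrs = await (
    await fetch(`${OPENMRS_URL}/ws/rest/v1/concept/${uuid}?v=full`, {
      headers: { Authorization: AUTH },
    })
  ).json();

  // Spot-check two representative fields; a complete test would walk every
  // field of every concept, as described above.
  const mismatches: string[] = [];
  if (ocl.concept_class !== omrs.conceptClass?.display) {
    mismatches.push(`class: ${ocl.concept_class} != ${omrs.conceptClass?.display}`);
  }
  if (ocl.datatype !== omrs.datatype?.display) {
    mismatches.push(`datatype: ${ocl.datatype} != ${omrs.datatype?.display}`);
  }
  return mismatches;
}
```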
@ibacher and @michaelbontyes, thanks for the clear explanation. Which Dictionary Manager server are you running? I’m using my local instance and, as you can see, I can’t see the release versions.
They’d appear in the “Versions” box if any existed. You’d actually need to click the “Create New Version” button to create a new version and mark it as released there.
@ibacher, @dkayiwa I’m stuck on how to proceed with this issue! Between the openmrs-ocl-client and the openmrs/openmrs-distro-oclqa repositories, which one should I use for this ticket? I had already pushed the feature file to openmrs-ocl-client before coming across the openmrs/openmrs-distro-oclqa repository. I wish to put things right.
I thought we established a while ago that our plan is to stick to writing QA automation in the OCL client repo itself (or in the OCL Module repo if the tests affect that module and not the webapp). Did that plan change?
This is a bit of a weird one… In my view, we already have a decent amount of testing in the OCL module itself, but this ticket is really about building testing so that users of the OCL module have confidence that it works correctly (i.e., it’s part of what Michael has referred to as end-to-end data tests). I guess I created openmrs-distro-oclqa with the intention of it being the test environment, so I think these tests belong either there or in the broader QA Framework (making this one test the exception to the general rule).
Thanks @ibacher, I think this can be achieved as follows:
Create all the automated tests in Cypress against the Open Concept Lab instance.
Then, in the last feature file and step definition file, add the logic that picks the OpenMRS QA instance. In other words, we shall be dealing with two instances at a go.
Then, within our tests, we call a URL for the QA instance and do the remaining automation. What I’m sure of is that we won’t separate them, meaning we use two instances at a go. cc @ibacher, how do you see my suggestion?
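If I understand the two-instances-at-a-go idea, a sketch in Cypress could look like the following. cy.request() is not restricted to the configured baseUrl, so one spec can talk to both servers; the env var names and credentials here are my own placeholders:

```typescript
// Sketch of driving two instances from one spec via cy.request().
// Env var names (oclUrl, openmrsUrl) are placeholders, not a convention.
describe('OCL source feeding an OpenMRS QA instance', () => {
  const oclUrl = Cypress.env('oclUrl');
  const openmrsUrl = Cypress.env('openmrsUrl');

  it('sees the dictionary versions on the OCL side', () => {
    cy.request(`${oclUrl}/orgs/MyOrg/collections/MyDict/versions/`)
      .its('status')
      .should('eq', 200);
  });

  it('reaches the OpenMRS QA instance in the same run', () => {
    cy.request({
      url: `${openmrsUrl}/ws/rest/v1/session`,
      auth: { user: 'admin', pass: 'Admin123' }, // demo credentials
    })
      .its('body.authenticated')
      .should('eq', true);
  });
});
```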
So, I think it’s important to bear in mind that these tests are not tests of the Dictionary Manager itself, nor are they primarily tests of the OCL API. They are tests of the functioning of the OCL module in an OpenMRS instance. This is different from both the existing testing we already have for the Dictionary Manager (the Cypress tests) and the existing testing we already have for the OCL module (essentially some unit and integration tests).
So, the decision to spin up or connect to a QA instance is something that’s central to this; all of the tests will be communicating with the OpenMRS server. Frankly, we can even drop the requirement to make a new release on OCL.
The questions we want to answer through these tests are:
When the module is set up to subscribe to OCL, can it connect to OCL and download the concepts?
Are the concepts from OCL loaded into the OpenMRS concept dictionary correctly?
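As a rough illustration of how those two questions could become assertions: the openconceptlab/* endpoint path and its request body below are assumptions from memory of the module’s REST resources and should be verified against the module before use, while the concept resource is the standard OpenMRS REST API:

```typescript
// Rough mapping of the two questions above onto Cypress assertions.
describe('OCL module in an OpenMRS instance', () => {
  const omrs = `${Cypress.env('openmrsUrl')}/ws/rest/v1`; // placeholder env var
  const auth = { user: 'admin', pass: 'Admin123' };       // demo credentials

  it('can subscribe to OCL and download the concepts', () => {
    cy.request({
      method: 'POST',
      url: `${omrs}/openconceptlab/subscription`, // assumed module endpoint
      auth,
      body: { url: 'https://api.openconceptlab.org/orgs/MyOrg/collections/MyDict/' },
    })
      .its('status')
      .should('be.within', 200, 299);
  });

  it('loads the concepts from OCL into the dictionary correctly', () => {
    // Spot-check one expected concept; a full test would iterate over the
    // whole shared test set, field by field.
    cy.request({ url: `${omrs}/concept?q=Example%20diagnosis&v=full`, auth })
      .its('body.results')
      .should('not.be.empty');
  });
});
```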
@sharif I think this somewhat revises how you outlined these tests working. Hopefully that’s clear enough?
You probably need to release a new version. Unfortunately, exports aren’t as reliably available on staging as they are on production, since staging is actively used for testing, etc.