We would like to know your opinion on the best approach to testing reports.
Here is, roughly, what I would like to do:
1. Start from the test data set shipped with the Reporting module.
2. In a context-sensitive test, run our report against that data set.
3. Extend the data in our own ‘reports module’ data set to keep covering edge cases, boundary cases, and so on.
4. Verify that the pre-rendering result set is as expected.
For the last step (I haven’t dug into it yet), I hope the pre-rendering result object is actually usable for assertions? If not, perhaps we could start adding test tooling to the Reporting module so that other modules can perform such checks.
I don’t know if this is current best practice or not, but back in 2014 Mike and I were working on “TestDataManager”, which has a fluent API for setting up test data programmatically.
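As a rough sketch of what that fluent setup looks like (the exact builder method names may differ from the current TestDataManager API, and the @Autowired wiring assumes the testutils artifact is on the test classpath):

```java
import org.junit.Test;
import org.openmrs.Patient;
import org.openmrs.contrib.testdata.TestDataManager;
import org.openmrs.test.BaseModuleContextSensitiveTest;
import org.springframework.beans.factory.annotation.Autowired;

public class ReportTestDataExampleTest extends BaseModuleContextSensitiveTest {

    @Autowired
    private TestDataManager testData; // from openmrs-contrib-testutils

    @Test
    public void shouldSetUpTestDataFluently() {
        // Illustrative only: builder method names are from memory and may
        // not match the current TestDataManager API exactly.
        Patient patient = testData.patient()
                .name("Alice", "Smith")
                .gender("F")
                .birthdate("1980-01-01")
                .save();

        // ... run the report against this patient and assert on the result
    }
}
```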
As @darius says, the TestUtils project could be used, expanded, or moved into OpenMRS. Darius also wrote some custom Hamcrest matchers (which could also be expanded upon), which can be found in org.openmrs.module.reporting.common.ReportingMatchers in the api-tests Maven module.
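For illustration, usage would look something like the snippet below. The matcher name is from memory and may not be exact, so treat it as a placeholder and check ReportingMatchers itself for what is actually available:

```java
import static org.hamcrest.MatcherAssert.assertThat;
// Placeholder matcher name; see ReportingMatchers for the real static factory methods.
import static org.openmrs.module.reporting.common.ReportingMatchers.isCohortWithExactlyIds;

import org.openmrs.Cohort;

public class MatcherUsageExample {

    void assertCohortMembers(Cohort evaluatedCohort) {
        // Asserts that the evaluated cohort contains exactly patients 2, 7 and 21.
        assertThat(evaluatedCohort, isCohortWithExactlyIds(2, 7, 21));
    }
}
```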
In your unit test, you won’t want to queue reports and run them asynchronously as is done via the UI. Rather, you should use the ReportService.runReport(ReportRequest) method. This returns a “Report” object, which has everything you’ll likely want to inspect, including the raw ReportData and the raw byte[] of the rendered report output (if applicable).
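A minimal sketch of that flow in a context-sensitive test is below. It assumes the reporting module’s test context is available; the hypothetical buildMyReportDefinition() helper stands in for however your module constructs its report definition, and the exact getters on Report may differ slightly from what is shown:

```java
import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.openmrs.api.context.Context;
import org.openmrs.module.reporting.evaluation.parameter.Mapped;
import org.openmrs.module.reporting.report.Report;
import org.openmrs.module.reporting.report.ReportData;
import org.openmrs.module.reporting.report.ReportRequest;
import org.openmrs.module.reporting.report.definition.ReportDefinition;
import org.openmrs.module.reporting.report.service.ReportService;
import org.openmrs.test.BaseModuleContextSensitiveTest;

public class MyReportRunTest extends BaseModuleContextSensitiveTest {

    @Test
    public void shouldRunReportSynchronouslyAndExposeReportData() {
        ReportDefinition definition = buildMyReportDefinition();

        ReportRequest request = new ReportRequest();
        request.setReportDefinition(new Mapped<ReportDefinition>(definition, null));

        // Runs the report synchronously, without the queueing used by the UI.
        ReportService reportService = Context.getService(ReportService.class);
        Report report = reportService.runReport(request);

        // The raw, pre-rendering result: one DataSet per data set definition.
        ReportData data = report.getReportData();
        assertNotNull(data);
        assertNotNull(data.getDataSets());

        // If a rendering mode was set on the request, the rendered bytes
        // should also be available, e.g. report.getRenderedOutput().
    }

    private ReportDefinition buildMyReportDefinition() {
        // Hypothetical helper: build and return your module's report definition here.
        return new ReportDefinition();
    }
}
```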