Best practices for unit testing reports?

Hi @mseaton,

We would like to know your opinion about the best approach to test reports. Here is what I would like to do, roughly:

  • Start from the test data set shipped with the Reporting module.
  • In a context-sensitive test: run our report against that data set.
  • Extend the data in our own reports module's data set to keep covering edge cases, boundary cases, etc.
  • Verify that the pre-rendering result set is as expected.

For the last step (and that's without having dug into it yet), I hope that the pre-rendering result object can actually be used for assertions? If not, perhaps we could start adding test tools to the Reporting module so that other modules can do such checks.

Thoughts?

Thank you!

Cc @mksrom

I don’t know if this is current best practice or not, but back in 2014 Mike and I were working on “TestDataManager”, which has a fluent API for setting up test data programmatically.

I guess this is just PIH code, and never made it into the OpenMRS org: https://github.com/PIH/openmrs-contrib-testutils

An example:
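This is from memory, so treat the builder and method names as approximate (the real API is in the repo linked above), but it looks roughly like this:

```java
// Sketch only -- builder/method names are approximate; see openmrs-contrib-testutils for the real API.
// (Imports omitted; TestDataManager and its builders come from the repo linked above.)
public class MyModuleReportTest extends BaseModuleContextSensitiveTest {

    @Autowired
    private TestDataManager data;

    @Test
    public void shouldSetUpTestDataFluently() throws Exception {
        // Build a patient and an encounter programmatically instead of maintaining an XML data set
        Patient patient = data.randomPatient()
                .birthdate("1975-01-01")
                .save();

        Encounter encounter = data.encounter()
                .patient(patient)
                .encounterDatetime(new Date())
                .encounterType(1)
                .save();

        // ...now evaluate the report against this data and assert on the result
    }
}
```

The idea is that each test builds exactly the scenario it needs in code, rather than everyone sharing one big XML data set.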

As @darius says, the TestUtils project could be used / expanded / moved to OpenMRS. Darius also wrote some custom Hamcrest matchers (which could also be expanded upon); they can be found in org.openmrs.module.reporting.common.ReportingMatchers in the api-tests Maven module.
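From memory there is a cohort matcher in there along the lines of the below (the exact matcher names may differ, so check the class):

```java
// Illustrative only -- check ReportingMatchers for the matchers that actually exist;
// the patient ids here are placeholders.
GenderCohortDefinition males = new GenderCohortDefinition();
males.setMaleIncluded(true);
EvaluatedCohort cohort = Context.getService(CohortDefinitionService.class)
        .evaluate(males, new EvaluationContext());
assertThat(cohort, ReportingMatchers.isCohortWithExactlyIds(2, 6));
```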

In your unit test you won’t want to queue reports and run them asynchronously as is done via the UI. Rather, you would want to use the ReportService.runReport(ReportRequest) method. This returns a “Report” object, which has everything you are likely to want to inspect, including the raw ReportData and the raw byte[] of the rendered report output (if applicable).
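Roughly something like this (I’m writing the setter/getter names from memory, so double-check them against the module; the report definition here is just a stand-in for whatever your module ships):

```java
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.openmrs.api.context.Context;
import org.openmrs.module.reporting.dataset.definition.SqlDataSetDefinition;
import org.openmrs.module.reporting.evaluation.parameter.Mapped;
import org.openmrs.module.reporting.report.Report;
import org.openmrs.module.reporting.report.ReportData;
import org.openmrs.module.reporting.report.ReportRequest;
import org.openmrs.module.reporting.report.definition.ReportDefinition;
import org.openmrs.module.reporting.report.renderer.CsvReportRenderer;
import org.openmrs.module.reporting.report.renderer.RenderingMode;
import org.openmrs.module.reporting.report.service.ReportService;
import org.openmrs.test.BaseModuleContextSensitiveTest;

public class MyReportTest extends BaseModuleContextSensitiveTest {

    @Test
    public void shouldRunReportSynchronouslyAndExposeResults() throws Exception {
        // A trivial report definition, standing in for the one your module actually ships
        ReportDefinition reportDefinition = new ReportDefinition();
        reportDefinition.setName("My test report");
        reportDefinition.addDataSetDefinition("patients",
                new SqlDataSetDefinition("patients", null, "select patient_id from patient"), null);

        // Run synchronously, instead of queueing as the UI does
        ReportRequest request = new ReportRequest();
        request.setReportDefinition(new Mapped<ReportDefinition>(reportDefinition, null));
        request.setRenderingMode(new RenderingMode(new CsvReportRenderer(), "CSV", null, 100));

        Report report = Context.getService(ReportService.class).runReport(request);

        // The raw, pre-rendering result set: this is what you assert against
        ReportData reportData = report.getReportData();
        assertNotNull(reportData);
        assertTrue(reportData.getDataSets().containsKey("patients"));

        // The rendered output (here CSV), if you also want to check the final artifact
        byte[] renderedOutput = report.getRenderedOutput();
        assertNotNull(renderedOutput);
    }
}
```

If you only care about the pre-rendering ReportData, evaluating the ReportDefinition directly via ReportDefinitionService is another option, but runReport gives you the rendered output as well.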

Mike

Thanks both, I had actually used TestUtils a while ago when working on an EMR API ticket. I thought it was part of EMR API at the time.

Ok I will try this all out and report back!