Hi,

At one of our implementation sites, there is a requirement to provide a one-shot view of data inconsistencies in the system and an easy way to fix them. We plan to leverage this opportunity to build a data quality dashboard. The design considerations are as follows:
- The rules should be extensible and configurable. Implementers of Bahmni should be able to write Groovy or SQL rules.
- The evaluated inconsistencies should be grouped by patient where possible (instead of by rule). Showing all the inconsistencies for a patient is a better user experience, since data managers can then fix the issues of one patient at a time. The following is the screenshot.
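To make the extensibility idea concrete, a pluggable rule could be modelled as a small contract that every Groovy/SQL/Java rule implements. This is only a sketch; the interface name, method signatures, and the example rule are my assumptions, not an existing Bahmni or OpenMRS API:

```java
import java.util.List;

// Hypothetical rule contract -- the interface name and methods are
// illustrative assumptions, not an existing Bahmni/OpenMRS API.
interface DataQualityRule {
    // Stable identifier stored alongside each flagged patient.
    String getName();

    // Returns the ids of patients that violate this rule.
    List<Integer> evaluate();
}

// Example rule: flag patients whose recorded death date precedes
// their birth date.
class DeathBeforeBirthRule implements DataQualityRule {
    public String getName() {
        return "death_before_birth";
    }

    public List<Integer> evaluate() {
        // A real implementation would query the OpenMRS database;
        // results are hard-coded here so the sketch is self-contained.
        return List.of(42, 97);
    }
}
```

A scheduler task could then simply iterate over all registered `DataQualityRule` implementations, and a per-patient REST trigger could run the same rules filtered to one patient id.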
This poses the question of when the rules should be run. Running all the data quality rules in a single API call obviously will not scale, which suggests a scheduled job running at a regular interval. At the same time, users will expect that fixing the data results in the record being removed from the inconsistency list promptly.
We have explored the Data Integrity module. Its rules are driven mainly through SQL, whereas we are looking for more flexibility in terms of Groovy/Java if possible. Also, there is no provision for storing the actual list of inconsistent patients; the module only indicates whether a rule passed or failed (please correct me if I am wrong here).
We would like inputs from the broader OpenMRS community in this regard. Our initial thoughts are as follows:
- Have an OpenMRS scheduler task run all the Groovy rules and store the results, such as patient_id, rule_name, notes and addnl_info, in a table. This is a temporary table that is cleared before every run.
- The API call behind the DQ dashboard will simply return the contents of this intermediate table.
- Once the user fixes an issue (such as correcting form data or updating drug orders), (s)he can re-run all the rules for that particular patient through a REST call, so that the fix is immediately reflected on the UI.
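For concreteness, the intermediate table described above might look something like this. The table and column names (beyond the fields already listed in the proposal) are illustrative assumptions, not a finalized schema:

```sql
-- Hypothetical intermediate table holding the latest rule-run results.
-- Truncated before every scheduled run; for a per-patient re-run, only
-- that patient's rows would be deleted and re-inserted.
CREATE TABLE dq_inconsistency (
  patient_id   INT NOT NULL,           -- OpenMRS patient being flagged
  rule_name    VARCHAR(255) NOT NULL,  -- name of the Groovy/SQL rule that fired
  notes        VARCHAR(1024),          -- human-readable description of the issue
  addnl_info   TEXT,                   -- extra context for the data manager
  evaluated_on DATETIME NOT NULL       -- when the rule run produced this row
);
```

Grouping by `patient_id` when reading this table would give the patient-centric view described in the design considerations.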