The database used in Bahmni/OpenMRS has an EAV data model and hierarchical data (for example, the obs table). This makes it difficult to plug the database into analytical tools (Tableau, Stata, etc.), as their support for unstructured hierarchical data is limited and it multiplies the existing complexity.
It is very difficult for implementation engineers to extract data from the existing production databases without fully understanding the OpenMRS data model.
For an implementation to generate indicators or reports, a lot of development work has to be done writing SQL to produce the data from which the indicators can be extracted.
One of the solutions we have thought through is to flatten the hierarchical database. This flattened database can be connected to analytical tools (Tableau, Stata, etc.) for further analysis of the data. The generation of the analytical database will be common across all implementations; we rely on the OpenMRS data model to make it as generic as possible.
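To make the idea concrete, here is a minimal sketch of what "flattening" means for obs data. The form, concepts and column names (a hypothetical Vitals form with Height and Weight) are illustrative only, not the actual target schema:

```sql
-- EAV shape: every answer on a form is one row in obs, keyed by the question
-- concept and stored in a type-specific value_* column.
--   obs: obs_id | person_id | encounter_id | concept_id | value_numeric | value_coded | value_text | obs_group_id

-- Flattened shape: one row per encounter of a given form, one column per question.
CREATE TABLE flat_vitals (
    patient_id   INT,
    encounter_id INT,
    obs_datetime DATETIME,
    height_cm    DOUBLE,   -- from obs.value_numeric where the concept is Height
    weight_kg    DOUBLE,   -- from obs.value_numeric where the concept is Weight
    PRIMARY KEY (encounter_id)
);
```

A table like this can be queried directly from Tableau or exported to Stata without any knowledge of the EAV model.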
More details about the approach and implementation are available at
Please go through it and let us know your comments.
Just for context, is this something that your team is planning to build for a client soon, and you'd like to get feedback up front on the design and approach, so that what you build is more generally applicable across Bahmni implementations?
Do you have a vague timeline in which feedback is especially valuable?
Yes Darius. We are building it for a client and would like to get the feedback on the design and approach.
It would be helpful if we could get the feedback within a week.
We have tried this for Kenya by splitting the obs table into multiple sub-tables per encounter type for reporting purposes, using the reporting framework, with no intent of exporting it to statistical packages for analysis. In Mozambique we have used Pentaho for creating specific tables for reporting purposes. It would be great to have a generic way that can be plugged in by anyone with little effort.
@pramidat, thanks for creating this discussion. I am very interested to see where this leads, and agree with those who indicate that this is something that would be great to design out for any "generic" OpenMRS implementation, not just one based on Bahmni.
At PIH, we have started taking a similar approach, though in our current iteration we use the Pentaho Kettle libraries as the basis for our data processing pipelines. These were chosen as they enable a graphical interface for defining jobs and are accessible to a broader range of users than our software development team. Although we have not yet fully implemented a set of flattened tables per form, where each Obs represents a column value, we have started down this path with other parts of the data model, namely flattening much of person/patient/person_name/person_address/patient_identifier/person_attribute into a single "patient" table.
We have a lightweight spring boot application that we have authored that runs our jobs (and could ultimately provide some level of a web application on top of it, though this is not done yet). That code can be seen here. This then runs one or more of the jobs defined in our collection of Pentaho jobs and transforms, which are available here.
The documentation is probably not super up-to-date or comprehensive, but this is all available for sharing. In theory, one should be able to execute the load-from-openmrs job on a generic OpenMRS installation, after configuring the appropriate settings in their config file.
As @darius has mentioned in some of his commentary, we are looking now at how we might evolve into a solution that does more progressive updates over time, as currently this runs nightly (or weekly) on a given database. Whether or not we stick with Pentaho for our framework long-term is up for debate. Like you, we really just needed something more accessible to people for whom the OpenMRS data model was too complicated to query against, and this was an attempt to accomplish that.
I would definitely like to stay involved in learning about and/or brainstorming and/or contributing solutions that can generally solve these kind of problems within OpenMRS.
I understand that this is an effort primarily led by a team working with Bahmni and that the primary concern is therefore to end up with an adequate solution for Bahmni, which we will be keen to benefit from.
However, as I attempted to say over the last PAT call, let's not close the door to having this be used with OpenMRS in general. Let's hear what the senior devs and OpenMRS architects have to say about the database "flattening" that will be undertaken here. It will only cost an hour of a developers forum to hear their thoughts about it.
Let me give you one real world experience that we don't want to endure twice as implementers. We have one non-Bahmni implementation that aims at transitioning to Bahmni... when the appropriate resources will be available to do so, if ever. In the meantime they would like to use appointment scheduling, which is available outside of Bahmni. However we are reluctant to enable this feature as it would not be compatible if/when they transition to Bahmni, leading to a data migration headache bigger than what it already is. This is a "low-resource deadlock" that arises, because when it came down to it, the developers behind the appointment scheduling feature of Bahmni did not really consider recycling/expanding the existing appointment scheduling module.
I want at least to raise the alarm when it comes to important and impactful features; the people involved on this thread will converge on whatever decision they deem fit.
I propose that we discuss this on Monday's OpenMRS Design Forum (4pm UTC, 9:30pm IST, see more) so that we can provide at least some opportunity for OpenMRS folks to give input, while respecting the team's desire to move fast. @pramidat would you be able to join this call? (If not, I still think it's worthwhile to do a group OpenMRS review of the document.)
(@jthomas, @burke FYI. This would preempt the OCL topic that I currently have scheduled to discuss on Monday.)
There are a few approaches we have seen and/or tried to deal with this generic problem of how to report performantly from schemaless data without repeatedly writing long SQL queries which are hard to maintain. There are three approaches, each with its own tradeoffs:
1. ETL to a schemaful model - this has been talked about in this thread, so I am not going into it. Here the complexity resides in the transform part of ETL. DHIS 2 does something like this.
Performance for the user is best with this approach. Operational complexity is high because any change in a form means regeneration of the tables and data affected by it. And if it is an 80-20 scenario, you would start wishing that the forms which hold 80% of the data don't change. Creating the new database could be slow, but it usually works out fine. However, I have also seen systems where such jobs take hours or days when the DB grows large. Usually one can keep going down the rabbit-hole of trying to improve the performance of the ETL by making it smarter and smarter. One may imagine that this is fine because users can use the old database while the new one is cooking. But what is not always understood is that the extract process loads the production database, affecting the main users. Doing the ETL from a replicated database will solve this.
2. Configuration-driven SQL generation - we tried this in Bahmni by introducing a level of indirection via small configuration support (in Bahmni it is in JSON). Basically, this configuration drives how the SQL is generated. So hopefully you write one type of SQL once, in a sort of template, and then generate different SQLs by configuring them. In this case, I would imagine the configuration will be about the form name. I would like to verify the performance of CASE WHEN kind of SQL with the EAV model (see the sketch after this list). I don't remember clearly from Bahmni about this.
3. Per-form database views - we have been trying this on another project, though the data model is not EAV but JSONB (Postgres). Define database views per form. The views are schemaful here. Since the data for each instance of a form is filled into one column, one can use a JSON expression to get the data out. I think this idea could be applied to the OpenMRS EAV model too, but one should check the performance of CASE WHEN first, because the data is in multiple rows. The jury is out on it, as we will know more over time, but I feel confident this idea has enough legs - to be a valid approach at least. I threw this one out here so that we can think of different approaches.
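As a rough illustration of how approaches 2 and 3 could be applied to the OpenMRS EAV model, a per-form "view" can be expressed as a GROUP BY over obs with one CASE WHEN per question. This is only a sketch - the view name, encounter type id and concept ids (roughly the CIEL ids for Height and Weight) are placeholders that a config-driven generator would substitute per form - and it is exactly the kind of query whose performance should be verified first:

```sql
-- Per-form view over the EAV obs table, pivoted with CASE WHEN.
CREATE VIEW view_vitals AS
SELECT e.patient_id,
       o.encounter_id,
       MAX(CASE WHEN o.concept_id = 5090 THEN o.value_numeric END) AS height_cm,
       MAX(CASE WHEN o.concept_id = 5089 THEN o.value_numeric END) AS weight_kg
FROM encounter e
JOIN obs o ON o.encounter_id = e.encounter_id AND o.voided = 0
WHERE e.encounter_type = 1   -- placeholder id of the form's encounter type
  AND e.voided = 0
GROUP BY e.patient_id, o.encounter_id;
```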
In terms of tooling:
I would recommend looking at Metabase. It is much more lightweight, modern, and improving fast.
A few things to watch out for, as they are not immediately obvious:
How do you model observations which have multiple coded answers? Essentially you may need to create tables for these in approach 1. Although I found that the number of observations which have multiple coded answers is quite small, so you can hold your nose and just create a few more tables anyway. It will not explode the number of tables for you. Or you can use comma-separated values etc. too - suboptimal for all scenarios (a rough sketch of both options is below).
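For what it is worth, the two options could look roughly like this (table names, column names and concept ids are placeholders):

```sql
-- Option A: a subsidiary table, one row per selected coded answer.
CREATE TABLE flat_vitals_symptoms (
    encounter_id INT,
    symptom      VARCHAR(255)   -- name of each selected coded answer
);

-- Option B: collapse the multiple answers into one comma-separated column
-- while building the flat table (MySQL syntax).
SELECT o.encounter_id,
       GROUP_CONCAT(cn.name SEPARATOR ', ') AS symptoms
FROM obs o
JOIN concept_name cn ON cn.concept_id = o.value_coded AND cn.locale = 'en'
WHERE o.concept_id = 1234   -- placeholder id of the multi-select question
  AND o.voided = 0
GROUP BY o.encounter_id;
```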
Lastly, if I had a lot of money and time, I would choose 1.
2 & 3 are a bit cheaper, but test for performance first.
Don't make it Bahmni-specific unnecessarily (e.g. don't require CentOS for some reason)
Most things (except obs) could have a common model in the data mart (e.g. patient)
You don't need to define all of these up front, but can add them as you come across them while doing the work
Look at existing work (PIH to share their flattened model, we will request that AMPATH share any flattened examples)
Prioritize extensibility, e.g. anyone should be allowed to add their own job, or disable the standard jobs. (We think this is already included in your design, but just want to make sure)
Prioritize incremental handling (of at least the obs table). It is harder to justify this as a core part of the Bahmni product if it's only suitable for smaller implementations who do analytics infrequently.
Don't make it Bahmni-specific unnecessarily (e.g. don't require CentOS for some reason)
-> We are dockerizing the application so that it can be run anywhere. Even while flattening, we are considering non-Bahmni use cases.
Most things (except obs) could have a common model in the data mart (e.g. patient)
You don't need to define all of these up front, but can add them as you come across them while doing the work
-> Yes, we are joining a few tables to have a common model. Apart from that, we are also trying to come up with some generic views.
Look at existing work (PIH to share their flattened model, we will request that AMPATH share any flattened examples)
-> It would definitely help. Could I get a point of contact?
Prioritize extensibility, e.g. anyone should be allowed to add their own job, or disable the standard jobs. (We think this is already included in your design, but just want to make sure)
-> Yes. This is included in our design
Prioritize incremental handling (of at least the obs table). It is harder to justify this as a core part of the Bahmni product if it's only suitable for smaller implementations who do analytics infrequently.
-> We are looking into several possibilities for incremental updates. Will update once we find something.
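One possibility, purely as an illustrative sketch and not a committed design: keep a watermark of the last successful run and re-flatten only the encounters whose obs were created or voided since then, instead of rebuilding the whole table.

```sql
-- Find encounters touched since the last run (:last_run_time is the watermark),
-- then delete and rebuild only those encounters' rows in the flat tables.
SELECT DISTINCT o.encounter_id
FROM obs o
WHERE o.date_created > :last_run_time
   OR (o.voided = 1 AND o.date_voided > :last_run_time);
```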
@mseaton, @mogoodrich, @toddandersonpih, @jdick, @nkimaina one of the points that came out of this call is that even if we don't all standardize on a toolset for ETL to an analytics db, maybe we can standardize on what a flattened data model looks like for common OpenMRS things.
Can you share the table structure (or nosql equivalent) of some of the tables in your analytics DBs?
For AMPATH, in our initial approach, we first run a process to create a flat_obs table of key=value pairs, where the key is the question concept_id and the value is a stringified version of the value_* column in the obs table. We then use this table as the basis to create other transformations (all in SQL). The problem with this approach is that any information within a nested structure created by using obs groups is lost (not ideal). But this makes it much faster to do additional transformations AND we don't hit the obs table again when transforming (we are using the same database for both openmrs and our etl tables). Some examples of our "calculated" (i.e. transformed) tables can be seen in the calculated directory.
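For readers who have not seen this pattern, a rough sketch of what such a key=value table could look like (not AMPATH's exact DDL):

```sql
-- flat_obs: one row per non-voided obs; the key is the question concept_id and
-- the value is a stringified version of whichever value_* column is populated.
CREATE TABLE flat_obs AS
SELECT o.person_id,
       o.encounter_id,
       o.obs_datetime,
       o.concept_id,
       COALESCE(CAST(o.value_numeric  AS CHAR),
                CAST(o.value_coded    AS CHAR),
                CAST(o.value_datetime AS CHAR),
                o.value_text) AS value
FROM obs o
WHERE o.voided = 0;
```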
A possible place to start for the community as a whole might be a representation of an encounter object which could be used for further transformations. We are currently experimenting with an approach of mysql --> debezium (kafka adapter for mysql) --> kafka --> spark.
We are just at the beginning of coming up with representations for the encounters (@fali, perhaps you could share the basic structure you've been working on).
@darius, I remember you had raised this very question yourself a while back (though I don't remember the thread) and suggested possibly using the REST API representation. We trialed this, but initial tests showed that hitting the REST API made this process a bit slow (probably surmountable), so we've been trying to come up with a function that directly handles the obs data coming in from the binlog.
We have spiked on an approach using Pentaho. Our model is not particularly sophisticated, but the initial aim has been to simplify the OpenMRS data model such that someone coming at the data would be able to make sense of things relatively easily, and get up to speed quickly in order to do their own analyses. Unlike the approach that @jdick describes for Ampath, it was not designed with performance as its initial goal, but rather accessibility and approachability of the data (eg. demystifying OpenMRS).
Our first stage transforms into a generic flattened structure:
The main things this aims to accomplish are:
Flattening person, patient, person_name, person_address, person_attribute, preferred patient_identifier into a single "patient" table (a sketch of this join is shown after this list).
Flattening metadata into varchar fields (eg. encounter.type = 'Adult Initial', rather than a foreign key that has to be joined to encounter_type.name)
Moving Obs Groups into a separate table from Obs
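To make the first of those bullets concrete, the underlying transform is roughly a join like the sketch below (simplified and illustrative; the real pipeline also handles person attributes, voiding edge cases, and multiple identifiers/addresses):

```sql
-- Flattened "patient" table: one row per patient with the preferred name,
-- address and identifier joined in.
SELECT p.patient_id,
       pe.gender,
       pe.birthdate,
       pn.given_name,
       pn.family_name,
       pa.city_village,
       pi.identifier AS preferred_identifier
FROM patient p
JOIN person pe ON pe.person_id = p.patient_id
LEFT JOIN person_name pn
       ON pn.person_id = p.patient_id AND pn.preferred = 1 AND pn.voided = 0
LEFT JOIN person_address pa
       ON pa.person_id = p.patient_id AND pa.preferred = 1 AND pa.voided = 0
LEFT JOIN patient_identifier pi
       ON pi.patient_id = p.patient_id AND pi.preferred = 1 AND pi.voided = 0
WHERE p.voided = 0;
```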
From here, we (optionally) add further pipeline stages that transform data into implementation-specific tables. Some examples of these for our Malawi implementation can be seen here:
@mahitha thanks for sharing this. There are a couple of comments that Angshu and I made by voice last week, that I'll mention here too, since I don't see them incorporated...
The end user should not have to know what is in "person_details" versus "person_information" versus "person_address" versus "patient_identifier". I would expect a single flattened table combining all single-value-per-patient things. (This includes things that you currently have split across person_details, person_address, patient_identifier, patient_allergy_status, and person_information.)
Similarly for other parts of the domain model, I would expect that anything that can be flattened is flattened. E.g. encounter_provider_and_role can probably be combined into patient_encounter_details (unless we're supporting >1 provider for a given role in the encounter).
And still looking there, I see you have a provider table (and encounter_provider_and_role refers to it). I would expect you to flatten the provider id and name (e.g. provider.person.*_name) into the encounter-related table. (For things like this it probably makes sense to have both the provider_id and also the flattened version.)
For "attributes" (person, visit, location, provider), you should treat them like obs on forms. I.e. in most cases they would be single-valued so you can just flatten them into a single column in their parent table; in the multi-valued case you could have a subsidiary table just like is done for obs/forms.
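For example, single-valued person attributes could be pivoted into the flattened patient table with the same CASE WHEN pattern used for obs (the attribute type ids and column names here are placeholders for an implementation's own types):

```sql
-- One column per single-valued person attribute type.
SELECT pa.person_id,
       MAX(CASE WHEN pa.person_attribute_type_id = 8  THEN pa.value END) AS phone_number,
       MAX(CASE WHEN pa.person_attribute_type_id = 27 THEN pa.value END) AS occupation
FROM person_attribute pa
WHERE pa.voided = 0
GROUP BY pa.person_id;
```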