We’d like to start working on how to measure the impact that OpenMRS is making on healthcare and patient lives. Currently, the only way we have ever been able to understand impact “in the field” is through self-reporting from implementers, which has varied widely in accuracy, consistency, and the types of data reported. The summary of the discussion so far is that we need to start with simple metrics, and work on obtaining those accurately and consistently, before adding more complex ways of measuring impact.
Since we agreed to start simple, here are a few proposed ways of looking at impact:
Implementation metrics suggested by @janflowers:
- How many facilities have OpenMRS implemented? Stratified by service (HIV, TB, MCH, etc.)?
  - Do we care which OpenMRS is installed?
  - Do we need to distinguish between community versions and distributions?
- What countries have OpenMRS as a national system? (need to define: what counts as a national system)
  - What services?
  - How many sites?
- How many active patients are being tracked in OpenMRS? (need to define: “active”; maybe we create the query for people to use to give us this info so it is consistent? See the sketch after this list.)
  - Do we care about the number of patients over the entire life of an implementation? That is impact, but maybe not as relevant as the currently active patients being tracked.
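As a strawman for the “active patients” definition, here is a minimal sketch of the kind of query we could hand to implementers so the numbers come back consistently. It assumes a standard OpenMRS MySQL schema and the pymysql driver, and it defines “active” as at least one non-voided encounter in the last 12 months; the connection details are placeholders and the 12-month window is just a starting point for discussion.

```python
# Minimal sketch only: assumes a standard OpenMRS MySQL schema, the pymysql
# driver, and "active" = at least one non-voided encounter in the last
# 12 months. All of these are placeholders for discussion.
import pymysql

# Hypothetical connection details; each site substitutes its own.
conn = pymysql.connect(host="localhost", user="openmrs_user",
                       password="secret", database="openmrs")

ACTIVE_PATIENTS_SQL = """
    SELECT COUNT(DISTINCT p.patient_id) AS active_patients
    FROM patient p
    JOIN encounter e ON e.patient_id = p.patient_id
    WHERE p.voided = 0
      AND e.voided = 0
      AND e.encounter_datetime >= DATE_SUB(CURDATE(), INTERVAL 12 MONTH)
"""

with conn.cursor() as cursor:
    cursor.execute(ACTIVE_PATIENTS_SQL)
    (active_patients,) = cursor.fetchone()
    print("Active patients (last 12 months):", active_patients)

conn.close()
```

If we agree on a definition like this, the same statement could be shipped as-is (or wrapped in a reporting module) so every implementation counts “active” the same way.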
Clinical impact suggested by @jteich:
Does the scope of this discussion extend to clinical/health impact measures? The measures proposed so far are mostly about OpenMRS operations: downloads, implementations, patient records (all very important to track). I would suggest that equally important, especially to public health agencies and funders, would be things like the number of patients with recorded TB or HIV treatment; the number with recorded immunizations, and how many of those are up to date; implementation and use in health crises such as Ebola; and so on.
These are somewhat harder to measure in our environment, which has not enforced standard medication and diagnosis codes; however, measurement is still quite feasible (see the sketch below). These findings could matter a great deal for OpenMRS’s sustainability, as well as for genuinely tracking impact on health.
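To make Jonathan’s feasibility point concrete, here is a minimal sketch of how one such measure (“patients with a recorded TB treatment”) might be computed against a standard OpenMRS MySQL schema. Since we have not enforced standard codes, each site would substitute the concept IDs its own dictionary uses; the concept ID below is a hypothetical placeholder, and the definition itself is only a starting point.

```python
# Minimal sketch: assumes a standard OpenMRS MySQL schema. Each site maps
# TB_TREATMENT_CONCEPT_IDS to whatever its local dictionary uses for
# "TB treatment"; the id below is a hypothetical placeholder.
TB_TREATMENT_CONCEPT_IDS = (1568,)  # placeholder; map locally

concept_list = ", ".join(str(c) for c in TB_TREATMENT_CONCEPT_IDS)

PATIENTS_ON_TB_TREATMENT_SQL = """
    SELECT COUNT(DISTINCT o.person_id) AS patients_with_tb_treatment
    FROM obs o
    WHERE o.voided = 0
      AND o.concept_id IN ({ids})
""".format(ids=concept_list)

# Run with the same connection pattern as the active-patient sketch above.
```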
Other interesting ways to look at impact:
- How many jobs are created by OpenMRS being implemented
- Costing evaluations – ROI
- This one’s a bit tricky, but Starley Shade at UCSF (my PI on the Mozambique work) has done extensive studies in this area in multiple countries with OpenMRS, so it might be useful to bring her into this type of discussion.
Two discussions have been identified so far as needing to happen in order to develop these metrics:
1. Metrics for evaluating the impact that OpenMRS is making on healthcare / patient lives
2. How to gather, and what to gather, for metrics evaluating individual systems that could ultimately contribute to #1
Do we need to go through both of these discussions now, or can we prioritize one over the other for now?
For the second discussion, “how to gather and what to gather at the individual system level”:
@michael Just as a note, we are hoping to deploy a system like the one shown at http://s.bitergia.com/db-fosdem16 for tracking internal community metrics. This system could also host and visualize impact metric data, if it can be collected continually (or probably even if it is collected more periodically). So feel free to take a look at the above demo to whet your appetite and imagination about the types of reports we could use to show long-term trends over time.
@hamish It is a challenge for OpenMRS to have such limited knowledge of what people are doing with OpenMRS beyond the well-known and usually long-standing partners. I have a book on evaluation that says something like: “when you have no evaluation data, even a simple study can give you a much better idea of what is happening”. I found the views and downloads metrics that Michael ran for me a couple of years ago very helpful. They may have helped get me my fellowship :-) “In the last 12 months there were 32,550 downloads of OpenMRS from 177 countries and in the last 30 days (August 2014) there were 31,714 web visits from 184 countries.” Michael, Burke, and others have tried to get people to give us feedback on what they are doing, but responses and response rates seem limited. It probably requires a more active and hands-on approach to surveying sites by email, phone call, or site visit. This is the approach we have taken in Rwanda and Tanzania with EMR users. It will be interesting to see how many sites are functioning well when we install the server, usage, and data quality monitoring software in Rwanda in a month or so.
@lober: This can come in part from “instrumenting” OpenMRS instances; that’s a really good idea. One thing we’ve had in Haiti from early on was a sense of how well the systems were being used. One thing we’ve lacked in Kenya (despite planning for something better than we’d done in Haiti) was the same: direct measurement of performance and usage indicators. We built a module and visualization framework to do that, but it’s always good to move to something someone else is developing and maintaining.
@janflowers Just to add to Bill’s comment about monitoring… @pascal and I worked together mentoring GSoC students over the past 2 years to create a module that measures performance and usage in the Mozambique implementations. We’re just getting ready to pilot that in an upcoming release and upgrade to those implementations. Maybe those types of tools can feed into what Michael is creating? On the flip side, I think the Atlas was intended to do some of that, but the data in the Atlas seems to be pretty inconsistent and unreliable as a concise measurement of implementations. Maybe I’m wrong about the intention of the Atlas tool, though…
@lober: We had a framework of metrics in Haiti. We improved it for Kenya and created a hierarchy of metrics; this is relevant to @jteich’s comment. That hierarchy started with low-level metrics: “can I ping the system?”, “is the system up?” I created 5 levels, as I recall, that went up to the kinds of measures Jonathan mentioned (level 4: health services delivery); I think level 5 was outcomes. I can try to dig that up; it might be in the “PUMP” tools documentation, though I’m sure it’s in a grant proposal. @janflowers has a handle on what tools were actually developed; I think we concentrated on level 1 and level 2 metrics.
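For reference, the level 1 and level 2 checks Bill describes are straightforward to automate. Below is a minimal sketch, assuming Python with the requests library, a hypothetical instance URL, and that the REST module’s /ws/rest/v1/session endpoint is enabled; it is only meant to illustrate the bottom of the hierarchy, not the PUMP tooling itself.

```python
# Minimal sketch of "level 1 / level 2" monitoring checks:
# level 1 = can we reach the server, level 2 = is the OpenMRS app up.
# BASE_URL/HOST/PORT are hypothetical placeholders; adjust per deployment.
import socket
import requests

BASE_URL = "http://emr.example.org:8080/openmrs"  # hypothetical instance
HOST, PORT = "emr.example.org", 8080

def level1_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Level 1: can we open a TCP connection to the server at all?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def level2_app_up(base_url: str, timeout: float = 5.0) -> bool:
    """Level 2: does the OpenMRS web application answer an HTTP request?"""
    try:
        resp = requests.get(base_url + "/ws/rest/v1/session", timeout=timeout)
        return resp.status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    print("reachable:", level1_reachable(HOST, PORT))
    print("app up:   ", level2_app_up(BASE_URL))
```

Higher levels of the hierarchy (usage, data quality, health services delivery, outcomes) would build on data like the queries sketched earlier in this thread rather than on simple probes like these.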
@lober, @terry, @hamish, @jteich, @michael, @paul, @darius - more to add from our discussion so far, or is this a good enough summary to continue the discussion from here?