Hi all! As part of the work to provide great guidance around implementation, we are trying to figure out what types of large-scale implementations exist, what use cases folks have for managing these large deployments, and what kind of tooling would help with that management. Pre-built dashboards and monitoring come to mind.
There are a few types of implementations I’m thinking of:
Those with on-site servers, with connections for updates and to central HIE components (Client Registry, Shared Health Record, Facility Registry, national data repository, etc.). I’m interested to know what kind of tooling would be useful here, and what metrics we’d need, e.g.: server hardware monitoring (disk space, CPU usage, RAM usage), whether OpenMRS is up or down, the last time it successfully sent data, which version of OpenMRS is running, and so on. What are some of the bottlenecks and pain points in managing large deployments of on-site servers?
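To make the metrics list above concrete, here is a minimal sketch of what an on-site health-check payload might look like. This is purely illustrative: the metric names, the `openmrs_version` and `last_sync` parameters, and the idea of reporting as JSON are all assumptions for discussion, not an existing OpenMRS API. It uses only the Python standard library (so no RAM metric here; that would need a platform-specific source such as `/proc/meminfo` or a library like `psutil`).

```python
# Illustrative sketch only: a health-check payload an on-site server might
# report to a central monitoring service. Field names are hypothetical.
import json
import os
import shutil
import socket
from datetime import datetime, timezone

def collect_metrics(openmrs_version=None, last_sync=None):
    """Gather basic server metrics like those described above.

    openmrs_version and last_sync are placeholders; in practice they would
    be read from the running OpenMRS instance and its sync logs.
    """
    total, used, free = shutil.disk_usage("/")
    return {
        "host": socket.gethostname(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "disk_free_gb": round(free / 1e9, 2),
        "disk_used_pct": round(100 * used / total, 1),
        "load_avg_1m": os.getloadavg()[0],  # CPU load; Unix-only
        "openmrs_version": openmrs_version,   # assumed: queried from the server
        "last_successful_sync": last_sync,    # assumed: ISO timestamp from sync logs
    }

if __name__ == "__main__":
    print(json.dumps(collect_metrics(), indent=2))
```

A cron job shipping a payload like this to a central dashboard would cover several of the metrics mentioned; the open question is what transport and central store implementations would actually want.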
Those with multiple sites hosted centrally at a data center of some kind: How many sites are using a centralized server? What sorts of connections to HIEs are supported? What metrics, if any, are unique to these kinds of implementations? Is this feasible now given internet availability, or is it part of a future strategy? What bottlenecks are you facing?
What sorts of tooling and guidance would be helpful? I’m thinking about things like a scalable backup solution, semi-automated deployment/update tooling, monitoring tools, and guidance on secondary data use pipelines for dashboarding and reporting above the facility level.
This is all brainstorming, and I’m hoping you’ll brainstorm with me. We will be setting up more opportunities for contributing your use cases as well.