There is rising interest among organizations in the OpenMRS community & ecosystem in the role of AI and what the explosion in Generative AI’s capabilities means for the EMR world. For example:
IntelliSOFT is prototyping how GenAI could help summarize discharge instructions
Regenstrief & DIGI are excited about the role GenAI could play in Better Chart Search
Madiro is using LLMs to help convert paper forms into O3 Forms
…and more!
Goal: Hear from community organizations & members about the problems/needs they hope to solve, and plan next steps (e.g. AI squad?)
When: March 10, Monday, 3pm UTC / 6pm EAT / 8:30pm IST / 8am PST / 11am EST
@grace another one that @akhilmalhotra advertised for Bahmni is their integration with Medispeak, quote:
Medispeak is a voice-to-text solution designed to enhance clinical workflows by enabling seamless data entry through voice commands. This integration aims to improve accessibility and efficiency for healthcare providers using Bahmni.
Terry Mochire will organize a follow-on general community session in 2-3 weeks.
Detailed presentation and demo of Content & Mappings Automation project coming to community in a few weeks!
What was discussed
The key ideas shared were:
1. Automated Concept Mapping help
@michaelbontyes from @Madiro and @paynejd from @OpenConceptLab shared their project to help implementers and form-builders rapidly match form content with existing Concepts / Terminology codes. This saves days to weeks of content refinement and terminology selection. (A minimal sketch of this kind of concept lookup follows after the next steps below.)
Next steps:
Demo coming in March/April 2025!
Get involved and ask questions in the ocl channel on Slack.
Michael & Vero to work on ticketing a community Epic.
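To make the idea concrete, here is a minimal sketch (not the project’s actual pipeline) of looking up candidate concept matches for a form field label via the public OCL search API. The endpoint and response field names are assumptions based on the public Open Concept Lab API and may differ from what the project uses:

```python
# Minimal sketch (not the project's actual pipeline): look up candidate
# concept matches for a form field label via the public OCL search API.
# Assumes https://api.openconceptlab.org/concepts/?q=... is available;
# response field names may differ between OCL versions.
import requests

def suggest_concepts(form_label: str, limit: int = 5):
    """Return candidate OCL concepts whose names match a form field label."""
    resp = requests.get(
        "https://api.openconceptlab.org/concepts/",
        params={"q": form_label, "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return [
        {"id": c.get("id"), "name": c.get("display_name"), "source": c.get("source")}
        for c in resp.json()
    ]

# Example: find candidate concepts for a form question label
for candidate in suggest_concepts("body temperature"):
    print(candidate)
```

The real project adds the hard parts this sketch skips: ranking candidates, handling synonyms and locales, and confirming mappings with a human reviewer.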
Jing Chang & @bashir from the Google Health / Open Health Stack team shared that they are interested in partnering on an open-weight, locally hosted, FHIR-based search: an “ask a FHIR store questions” approach (see the sketch after the next steps).
Next steps: OHS and Regenstrief to discuss further; IntelliSOFT interested as well.
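As a rough illustration of the “ask a FHIR store questions” idea, the sketch below shows the kind of standard FHIR search an LLM layer might generate and run against a locally hosted FHIR server. The base URL and the hand-written search parameters are assumptions standing in for the LLM-generated query:

```python
# Minimal sketch of the "ask a FHIR store questions" idea: an LLM (not shown)
# would translate a plain-English question into a standard FHIR search, which
# is then run against a locally hosted FHIR server. The base URL and the
# hand-written parameters below stand in for the LLM-generated query.
import requests

FHIR_BASE = "http://localhost:8080/fhir"  # assumed local FHIR server

def count_patients_with_condition(snomed_code: str) -> int:
    """Count Condition resources carrying a given SNOMED CT code."""
    resp = requests.get(
        f"{FHIR_BASE}/Condition",
        params={"code": f"http://snomed.info/sct|{snomed_code}", "_summary": "count"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("total", 0)

# e.g. "How many patients have been diagnosed with hypertension?"
print(count_patients_with_condition("38341003"))
```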
4. Query Support: for Clinic Managers, M&E Officers, or Report Generation
Clinic Query Support: a UI connected to an LLM, where a user (like a clinician or clinic manager) could ask questions of their EMR. In the background, the LLM would convert their plain-English questions into OpenMRS-friendly SQL queries and then query the OpenMRS DB; e.g. “How many patients are sick with 1,2,3…”
Or, we could first focus on helping M&E Officers: an LLM helping data managers craft SQL. Why focus on M&E staff? Because end-user-generated SQL via LLM (beyond very simple examples) sounds like a bridge too far at the moment: knowing the types of questions clinicians ask, and what it takes to answer them properly, usually requires a data manager and skills beyond today’s LLMs. (A minimal NL-to-SQL sketch follows at the end of this item.)
Report Generation: avoid spending a lot of time writing complicated report queries.
@bennyange from @EMR4All shared helpful detail about the practical investigations EMR4All has been pursuing into using LLMs for this:
Natural Language to SQL for Cohort Builder – AI to convert plain English queries into SQL to simplify data retrieval.
Clinical Decision Support – AI to analyze patient records for insights and recommendations.
EMR4All tested Ollama (Smollama2), allowing offline AI execution without relying on OpenAI APIs. They also tested DeepSeek, which was more accurate but slower due to hardware constraints.
Next steps: The EMR4All team will explore agent-based AI for OpenMRS automation, focused on Medical Data Processing & Summarization to automate extraction of key insights from patient notes. They will keep us posted on how this goes.
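For reference, here is a minimal sketch of the NL-to-SQL idea using a locally hosted model via Ollama’s /api/generate endpoint (the same offline approach EMR4All described). The model name, the schema excerpt in the prompt, and the example question are illustrative assumptions, and any generated SQL would need review by a data manager before being run against a real OpenMRS database:

```python
# Minimal sketch of the NL-to-SQL idea using a locally hosted model via
# Ollama's /api/generate endpoint (stream=False returns a single JSON object
# with a "response" field). The model name, the schema excerpt, and the
# example question are illustrative assumptions; generated SQL needs review
# before being run against a real OpenMRS DB.
import requests

SCHEMA_HINT = """
Tables (simplified, illustrative subset of an OpenMRS-style schema):
  patient(patient_id, gender, birthdate)
  obs(obs_id, person_id, concept_id, value_numeric, obs_datetime)
"""

def question_to_sql(question: str, model: str = "llama3") -> str:
    """Ask a local Ollama model to draft a read-only SQL query for a question."""
    prompt = (
        f"{SCHEMA_HINT}\n"
        "Write a single read-only SQL SELECT statement answering this question. "
        "Return only the SQL, no explanation.\n"
        f"Question: {question}"
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

print(question_to_sql("How many female patients are older than 60?"))
```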
5. Writing Clinical Visit Notes
Not discussed in detail, but this was raised as an example use case worth considering.
6. Use as a data source for Drug-Drug Interactions
@muppasanipraneeth19 proposed ideas, shared publicly on GitHub, that would involve using existing LLMs to inform the content needed to help catch drug-drug interactions in Prescription Workflows: GitHub - Muppasanipraneeth/aashayams
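As a rough, hedged sketch of the idea (not the code in the linked repository), an existing LLM could be asked to flag possible interactions for a draft prescription. This reuses the same local Ollama endpoint as the sketch above; the model name and prompt are assumptions, and any output would need clinical and terminology review before feeding a real prescription workflow:

```python
# Minimal sketch (not the GitHub proposal's actual code) of using an existing
# LLM to flag possible drug-drug interactions for a draft prescription.
# Uses a local Ollama endpoint; model name and prompt are assumptions, and
# the output is a starting point for review, not a clinical source of truth.
import requests

def flag_possible_interactions(drugs: list[str], model: str = "llama3") -> str:
    """Ask a local model whether the listed drugs have well-known interactions."""
    prompt = (
        "List any well-known interactions between these drugs, one per line, "
        "or reply 'none known' if there are none: " + ", ".join(drugs)
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

print(flag_possible_interactions(["warfarin", "ibuprofen"]))
```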