đź§  AI and OpenMRS call this Monday: Share your work & Join our next-steps discussion!

:speech_balloon: Let’s talk this week about AI and OpenMRS

Since we last met as a community about AI and OpenMRS in March 2025 (recording and notes here), many teams have been busily trying new things!

Come join us this Monday, August 18, to:

  • share progress and ideas on OpenMRS + AI :light_bulb:
  • demo or explain what your team has been working on :tada:
  • brainstorm how to move forward top-priority use cases :brain:

When: Aug 18, Monday, 2pm UTC / 5pm EAT / 4pm CAT / 3pm WAT / 7:30pm IST / 7am PDT / 10am EDT

Zoom Link: https://om.rs/zoomopenmrs

GCal Invite Link: :date: Google Calendar Invite :date:

:light_bulb: Did You Know About These Examples?

Examples of new/ongoing Community AI interest/discussion since May:

:cry: Can’t make it this Monday? Don’t worry, this call will be recorded. Let me know if you are interested in a subsequent call, and then we will set that up :slight_smile:

8 Likes

I know people are really focused on LLMs, but over the last month, I have taken an interest in small language models and their potential for specialised use cases in resource-constrained environments. I am thinking of models that can be run on a CPU only and still produce good results :thinking: .

5 Likes

We at @EMR4All have done some experiments with SLMs on offline edge devices like Raspberry Pis. The ones we’ve tried are fast but don’t have enough context, so they hallucinate almost 90% of the time **(remember we’re dealing with patient data; any slight mistake can mean a death sentence for the patient)**. That’s why we’re trying these combinations: agents, models + tools, RAG, models + MCP, etc. Better calibration now seems to be the way forward, I guess :grinning_face_with_smiling_eyes:

SLMs run smoothly, and LLMs of a few billion parameters (1–7B) run well too, but both need just about enough context.

But I will be happy to hear of your findings with SLMs :blush:
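To make the grounding idea concrete, here is a minimal, illustrative sketch (plain Java, not tied to any particular toolkit; the class and method names are hypothetical) of the retrieval half of a RAG pipeline: score candidate passages by keyword overlap with the question and hand the best one to the model as context, so a small model answers from supplied text instead of guessing.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Set;

/** Illustrative-only retriever: picks the passage with the most word overlap. */
final class NaiveRetriever {

    /** Returns the passage sharing the most terms with the question. */
    static String bestPassage(String question, List<String> passages) {
        Set<String> queryTerms = new HashSet<>(Arrays.asList(
                question.toLowerCase(Locale.ROOT).split("\\W+")));
        String best = "";
        int bestScore = -1;
        for (String passage : passages) {
            int score = 0;
            for (String term : passage.toLowerCase(Locale.ROOT).split("\\W+")) {
                if (queryTerms.contains(term)) score++;
            }
            if (score > bestScore) {
                bestScore = score;
                best = passage;
            }
        }
        return best;
    }
}
```

A real pipeline would use embeddings rather than keyword overlap, but the principle is the same: the prompt the model sees is built from retrieved text, which is what reduces the hallucination rate.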

3 Likes

@tendomart for the SLMs, it may be a matter of brainstorming on the non-critical use cases where a few hallucinations are acceptable. :slight_smile:

I agree 100%; brainstorming will be a good place to start, giving the models some degree of liberty.

This conflicts with the OHDSI call, but I am sure it will be recorded, right? :slight_smile:

1 Like

@EMR4All We are adapting the existing English-based query features (natural-language questions that return results along with the SQL queries run) into a generic OpenMRS module. The goal is for the existing features to be made available via an OpenMRS module, and for clinicians to be able to query the EMR for population-level health questions (e.g. “What percentage of my patients have diabetes?”) by typing them into a built-in frontend search bar.

The query will be sent through this module to locally deployed LLMs, and the results will be shown along with an explanation of the derived answer. Citations and the SQL used will be included, so that surprising findings can be confirmed.
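Since model-generated SQL would run against live patient data, one safeguard worth sketching is a gate that only lets a single read-only SELECT statement through before execution. This is an illustrative helper, not part of the module’s actual API (the name `SqlGuard` is hypothetical), and a real deployment would also rely on a read-only database user:

```java
import java.util.Locale;
import java.util.Set;

/** Hypothetical guard: allow only a single read-only SELECT statement. */
final class SqlGuard {

    private static final Set<String> FORBIDDEN = Set.of(
            "insert", "update", "delete", "drop", "alter",
            "truncate", "grant", "revoke", "create");

    static boolean isReadOnlySelect(String sql) {
        if (sql == null) return false;
        String trimmed = sql.trim();
        // Allow one trailing semicolon, then reject multi-statement input.
        if (trimmed.endsWith(";")) trimmed = trimmed.substring(0, trimmed.length() - 1);
        if (trimmed.contains(";")) return false;
        String lower = trimmed.toLowerCase(Locale.ROOT);
        if (!lower.startsWith("select")) return false;
        for (String keyword : FORBIDDEN) {
            // Whole-word match, so column names like "updated_at" are not flagged.
            if (lower.matches(".*\\b" + keyword + "\\b.*")) return false;
        }
        return true;
    }
}
```

Keyword filtering alone is not a security boundary, which is why pairing it with database-level permissions matters; the check mainly catches a model that drifts into writing mutating SQL.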

The module uses tools and utilities common to the OpenMRS environment, such as the Spring Framework, Hibernate, Liquibase, Slf4j, JUnit, and Mockito, and includes only LangChain4j, Testcontainers, and Lombok as additional libraries.

There is a particular focus on unit and integration testing, including testing LLM responses to reduce hallucination. For example:

// Inside a JUnit test; `model` is a locally deployed LangChain4j chat model
// and MODEL_NAME is the expected model identifier.
UserMessage userMessage = UserMessage.from("What is the name of the process by which the body breaks down food?");
ChatResponse response = model.chat(userMessage);

// The reply should mention the expected term and request no tool calls.
AiMessage aiMessage = response.aiMessage();
assertThat(aiMessage.text()).contains("digestion");
assertThat(aiMessage.toolExecutionRequests()).isEmpty();

// The metadata should identify the model under test.
ChatResponseMetadata metadata = response.metadata();
assertThat(metadata.modelName()).isEqualTo(MODEL_NAME);

Sample:

If the assertion expected a wrong answer instead (e.g. “Respiration”), the failure would look like this:

[ERROR] Failures: 
[ERROR]   ExpertSystemServiceTest.chat_should_generate_valid_response:62 
Expecting actual:
  " The process by which the body breaks down food is called digestion. It involves several steps that break down complex carbohydrates and proteins into simpler, more usable forms. These simpler substances can then be absorbed by the bloodstream to be used as energy or stored in the form of glycogen for later use."
to contain:
  "Respiration" 

README here.

4 Likes

Hi all, given the rapid advancements, AI is becoming a critical tool for health. In our recent discussions, people have showcased impressive AI work. However, a key question remains: how does this work benefit the wider community? To maximize our collective impact, let’s use this forum for brainstorming and begin designing a core AI module for OpenMRS. Things we can start with:

  1. Identify priority use cases for AI within OpenMRS (e.g., clinical decision support, predictive analytics, data quality, reporting).
  2. Design an AI architecture for starting a module that will be integrated into OpenMRS.
  3. Engage developers to build up the module, working together on this common goal.

I believe that by starting a focused, community-driven effort to design a core AI module, we can develop accessible tools that strengthen the OpenMRS ecosystem.

2 Likes

Recording available yet? :slight_smile:

1 Like

Recording from community AI Discussion on Monday, August 18:

https://iu.mediaspace.kaltura.com/media/t/1_sarlouop/120124211

Whiteboard from that session (which was further updated on the subsequent call, Aug 25): https://openmrs.atlassian.net/wiki/x/BAA_Fw

1 Like


This is what we have been up to for the ESM: we are adapting the existing @EMR4All AI work to OpenMRS. Over the weeks we have talked about our progress during the AI calls. Join the bi-weekly meetings to learn more :slight_smile:

3 Likes