Hey folks!
I’ve been playing around with how we can bring AI into OpenMRS, and built a prototype that generates a structured visit summary using an LLM.
Basically, users can launch the summary-generation workspace from the visits table, which triggers a call to a proxy server.
The proxy server is an Express.js server. Based on the endpoint hit, it uses a system prompt that hands the LLM a set of clinical tools and tells it to fetch the data it needs from OpenMRS: vitals, diagnoses, medications, allergies, presenting complaints, and more.
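To make the tool-handing idea concrete, here's a rough sketch of what the proxy's side of it could look like. The tool names, schema fields, and OpenMRS REST paths below are illustrative assumptions, not the actual implementation:

```typescript
// Sketch: clinical tools the system prompt hands to the LLM (Anthropic
// tool-use format), plus a dispatcher that maps a tool call back to an
// OpenMRS REST endpoint. Tool names and routes are assumptions.

interface ToolDefinition {
  name: string;
  description: string;
  input_schema: {
    type: "object";
    properties: Record<string, unknown>;
    required: string[];
  };
}

const clinicalTools: ToolDefinition[] = [
  {
    name: "get_vitals",
    description: "Fetch recent vitals observations for a patient",
    input_schema: {
      type: "object",
      properties: { patientUuid: { type: "string" } },
      required: ["patientUuid"],
    },
  },
  // ...similar entries for diagnoses, medications, allergies,
  // and presenting complaints.
];

// When the LLM emits a tool call, translate it into the OpenMRS REST
// URL the proxy should fetch. Paths here are illustrative.
function toolToOpenmrsUrl(
  baseUrl: string,
  tool: string,
  patientUuid: string,
): string {
  const routes: Record<string, string> = {
    get_vitals: `/ws/rest/v1/obs?patient=${patientUuid}`,
    get_allergies: `/ws/rest/v1/patient/${patientUuid}/allergy`,
  };
  const path = routes[tool];
  if (!path) throw new Error(`Unknown tool: ${tool}`);
  return baseUrl + path;
}
```

The proxy would run this loop until the LLM stops requesting tools, then return the final summary text to the frontend.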
The LLM produces a structured, readable summary that follows a given template, which the clinician can review or regenerate. The summary can also be printed as a PDF directly from the workspace.
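As a rough illustration of what "structured summary of a given template" might mean, here's one possible shape and a simple renderer. The field names are my assumptions for the sake of the example, not the template the prototype actually uses:

```typescript
// Sketch: one possible structured shape for the visit summary the LLM
// returns, and a renderer that turns it into the readable text shown to
// the clinician. Field names are illustrative assumptions.

interface VisitSummary {
  presentingComplaint: string;
  vitals: string[];
  diagnoses: string[];
  medications: string[];
  allergies: string[];
  flags: string[]; // things the model noticed that may be worth a look
}

function renderSummary(s: VisitSummary): string {
  return [
    `Presenting complaint: ${s.presentingComplaint}`,
    `Vitals: ${s.vitals.join("; ")}`,
    `Diagnoses: ${s.diagnoses.join(", ")}`,
    `Medications: ${s.medications.join(", ")}`,
    `Allergies: ${s.allergies.length ? s.allergies.join(", ") : "None recorded"}`,
    s.flags.length ? `Things to look out for: ${s.flags.join("; ")}` : "",
  ]
    .filter(Boolean)
    .join("\n");
}
```

Keeping the LLM output structured like this (rather than free text) is what makes review, regeneration, and PDF printing straightforward on the frontend side.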
What I found cool was that, when you test this out on the demo patients' data, it can flag things to look out for, even when it wasn't explicitly prompted to do so. Here are some examples:
Link to the server - NethmiRodrigo/openmrs-ai-proxy-server: a lightweight Express proxy that sits between an OpenMRS frontend and LLM provider APIs
Link to the frontend - openmrs-esm-patient-chart/packages/esm-patient-ai-summary-app at main · NethmiRodrigo/openmrs-esm-patient-chart
Please note - the server is meant to support different providers, but so far I've only tested it with Anthropic.
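For anyone curious how multi-provider support could stay thin, here's one way to sketch it: a small registry of provider configs keyed by name. The endpoint URLs and auth headers below are from the public Anthropic and OpenAI APIs; the registry structure itself is just an illustration, not necessarily how the proxy does it:

```typescript
// Sketch: a provider-agnostic config registry a proxy like this could
// use. Only the Anthropic path has been exercised in the prototype;
// the structure and names here are illustrative.

interface ProviderConfig {
  url: string;
  headers: (apiKey: string) => Record<string, string>;
}

const providers: Record<string, ProviderConfig> = {
  anthropic: {
    url: "https://api.anthropic.com/v1/messages",
    headers: (key) => ({
      "x-api-key": key,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    }),
  },
  openai: {
    url: "https://api.openai.com/v1/chat/completions",
    headers: (key) => ({
      Authorization: `Bearer ${key}`,
      "content-type": "application/json",
    }),
  },
};

function getProvider(name: string): ProviderConfig {
  const config = providers[name];
  if (!config) throw new Error(`Unsupported provider: ${name}`);
  return config;
}
```

The request/response shapes differ per provider (especially for tool use), so in practice each entry would also need an adapter for translating tool definitions and tool-call results.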
A link to the demo video -