Reporting with AI: Flattening Obs, Writing SQL Queries

Is anyone using AI to help create your OpenMRS Reports? Has anyone had any success getting AI to understand the OpenMRS data model?

PIH is wondering who they can possibly learn from.

Case Examples:

  • (Main Priority): Quickly generate a report flat file based on a new form. 15 fields on a form → potentially up to 15 obs. We want to generate a flat file with a column for each field in that form, and use SQL to do it (see the sketch below this list).
  • (Later): Create a report based on a plain-English query about key indicators, such as “How many patients had a high viral load within 3 months of HIV program enrollment?”
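
To make the first bullet concrete, here is a rough sketch (not PIH's actual approach) of flattening a form's obs with one SQL pivot: one row per encounter, one column per field, written out as a CSV flat file. The concept IDs and encounter type below are hypothetical placeholders; substitute the ones from your own form metadata.

```python
# Flatten the OpenMRS obs table for one form: pivot each field's concept
# into its own column, one row per encounter, then write a CSV flat file.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:pass@localhost/openmrs")  # adjust credentials

# One MAX(CASE ...) line per field on the form; this boilerplate is exactly
# the kind of thing an LLM can generate from the form's concept list.
FLATTEN_SQL = """
SELECT e.patient_id,
       e.encounter_id,
       e.encounter_datetime,
       MAX(CASE WHEN o.concept_id = 5089 THEN o.value_numeric END) AS weight_kg,
       MAX(CASE WHEN o.concept_id = 5090 THEN o.value_numeric END) AS height_cm,
       MAX(CASE WHEN o.concept_id = 1234 THEN o.value_coded   END) AS coded_answer
FROM encounter e
JOIN obs o ON o.encounter_id = e.encounter_id AND o.voided = 0
WHERE e.encounter_type = 42   -- hypothetical encounter type id for the new form
  AND e.voided = 0
GROUP BY e.patient_id, e.encounter_id, e.encounter_datetime
"""

pd.read_sql(FLATTEN_SQL, engine).to_csv("new_form_flat.csv", index=False)
```

The second bullet maps to a similar pattern. A sketch, assuming CIEL concept 856 for HIV viral load, a "high" threshold of 1000 copies/mL, and a hypothetical program_id of 1 for the HIV program:

```python
VIRAL_LOAD_SQL = """
SELECT COUNT(DISTINCT pp.patient_id) AS patients_high_vl_within_3m
FROM patient_program pp
JOIN obs o ON o.person_id = pp.patient_id
          AND o.concept_id = 856            -- CIEL: HIV viral load
          AND o.voided = 0
          AND o.value_numeric > 1000        -- 'high' threshold, adjust locally
          AND o.obs_datetime BETWEEN pp.date_enrolled
                                 AND DATE_ADD(pp.date_enrolled, INTERVAL 3 MONTH)
WHERE pp.program_id = 1                     -- hypothetical HIV program id
  AND pp.voided = 0
"""
print(pd.read_sql(VIRAL_LOAD_SQL, engine))
```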

The team at @UWash (@reagan, @ibacher) set up something similar using OpenELIS, and the idea is that this same workflow could be used for OpenMRS too.

Others who I believe have been working on AI-generated/supported reports for OpenMRS: @PalladiumKenya @EMR4All @Intellisoft @Madiro

Keen to hear from anyone with thoughts/experience!


And, for those generating SQL with LLMs: How have you made sure the SQL is correct?

PIH is finding that, in general, the SQL suggested by LLMs needs serious QA because it can be objectively wrong.


Thanks for sharing @grace. I completely agree with PIH’s observation. LLMs often require significant contextual guidance to generate accurate SQL queries. It’s not surprising, though: as with any development process, getting things right takes iteration. While the queries LLMs produce from the descriptions they’re given can sometimes be quite close, they can also miss the mark and diverge from the expected results in complex situations. Because of this, they aren’t reliable on their own and still need careful review and validation.

Given this, I think LLMs are quite good at generating general (broad, undirected) summaries even without supervision. But again, this largely depends on the volume and quality of the data the model is given to work with.


Hi @grace, is the team using a RAG technique? In our experience, the higher the quality of the prompt (including context about the database structure, with relevant additional information), the higher the chances of accurate responses.
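
For what it’s worth, here is a minimal illustration of that point: a prompt builder that retrieves the relevant slice of the database schema and prepends it to the question. The retrieval step is a naive table-name match (a real RAG setup would use embeddings), and the DDL snippets are abbreviated assumptions about the OpenMRS schema.

```python
# Schema-aware prompt construction: include only the DDL the question needs.
SCHEMA_DOCS = {
    "obs": "CREATE TABLE obs (obs_id INT, person_id INT, concept_id INT, "
           "encounter_id INT, value_numeric DOUBLE, obs_datetime DATETIME, voided TINYINT)",
    "encounter": "CREATE TABLE encounter (encounter_id INT, patient_id INT, "
                 "encounter_type INT, encounter_datetime DATETIME, voided TINYINT)",
    "patient_program": "CREATE TABLE patient_program (patient_program_id INT, "
                       "patient_id INT, program_id INT, date_enrolled DATETIME, voided TINYINT)",
}

def build_prompt(question: str) -> str:
    q = question.lower()
    # Naive retrieval: keep tables whose name appears in the question,
    # falling back to the full schema when nothing matches.
    relevant = [ddl for name, ddl in SCHEMA_DOCS.items() if name in q]
    context = "\n".join(relevant if relevant else SCHEMA_DOCS.values())
    return (
        "You write MySQL for the OpenMRS data model.\n"
        f"Relevant schema:\n{context}\n\n"
        "Rules: always exclude voided rows; return only SQL.\n"
        f"Question: {question}"
    )

print(build_prompt("How many encounters did each patient have last month?"))
```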


Thanks @grace for starting this. After successfully demonstrating the text-to-SQL agent we developed at @EMR4All a few months ago, next in the pipeline is AI report generation. These are the observations from our experience implementing the text-to-SQL agent:

  1. A good choice of LLM is paramount. We tried out qwen2.5 and deepseek-r1:1.5b; these performed better when doing inference against online-hosted models via smolagents.
  2. They perform better when doing inference on models hosted by third parties like OpenAI, Hugging Face, and other vendors. We experimented with Hugging Face smolagents (see the sketch after this list).
  3. We have also experimented with an offline text-to-SQL agent (i.e., inference on Ollama-powered LLMs, Qwen and DeepSeek, hosted on a PC and a Raspberry Pi). This is for those who care about data privacy and want to keep their transactions completely on-prem. Results were satisfactory, though greater accuracy gains and a lot of refining are still a work in progress.
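
For anyone who wants to try this pattern, here is a minimal text-to-SQL agent sketch with Hugging Face smolagents, in the spirit of what is described above rather than @EMR4All’s actual code. One API hedge: HfApiModel was renamed InferenceClientModel in newer smolagents releases; the connection string and question are placeholders.

```python
from smolagents import CodeAgent, HfApiModel, tool
from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://user:pass@localhost/openmrs")  # adjust credentials

@tool
def sql_engine(query: str) -> str:
    """Runs a read-only SQL query against the OpenMRS database and returns the rows.

    Args:
        query: A valid MySQL SELECT statement.
    """
    with engine.connect() as conn:
        return "\n".join(str(row) for row in conn.execute(text(query)).fetchall())

model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct")
# On-prem variant (point 3 above): route through a local Ollama model instead, e.g.
# from smolagents import LiteLLMModel
# model = LiteLLMModel(model_id="ollama_chat/qwen2.5", api_base="http://localhost:11434")

agent = CodeAgent(tools=[sql_engine], model=model)
print(agent.run("How many non-voided patients are in the patient table?"))
```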

[quote=“grace, post:2, topic:46525”] PIH is finding that, in general, the SQL suggested by LLMs needs serious QA because it can be objectively wrong. [/quote] I strongly agree. We want to standardize QA practices for the various LLMs, frameworks, and agents we’re using, with tools like DeepEval or any others at our disposal.
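
One concrete, framework-agnostic QA pattern (DeepEval could wrap the same assertion) is to execute the LLM-generated SQL alongside a hand-validated “gold” query and require identical result sets. The queries below are placeholders:

```python
from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://user:pass@localhost/openmrs")  # adjust credentials

def rows(sql: str):
    # Sorted tuples so row-ordering differences don't cause false failures.
    with engine.connect() as conn:
        return sorted(tuple(r) for r in conn.execute(text(sql)).fetchall())

llm_sql = "SELECT COUNT(*) FROM patient WHERE voided = 0"        # produced by the agent
gold_sql = "SELECT COUNT(*) FROM patient p WHERE p.voided = 0"   # hand-validated baseline

assert rows(llm_sql) == rows(gold_sql), "LLM SQL diverges from the gold query"
print("LLM-generated SQL matches the validated baseline.")
```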

We’d be happy to share the work we have so far with OpenMRS, and to collaborate with those specifically interested in working on AI-generated reports. Next steps: setting up standards, tools, and best practices for implementing AI with OpenMRS; documentation; security; human-in-the-loop; agents; tools and patterns; etc.

Since we’ve also been looking at other practical use cases, for example the ones you highlighted above, we’d be happy to collaborate with PIH and others to come up with MVPs to demonstrate.

@grace I’d suggest that you organize a brainstorming meeting on how to move this forward. We could especially take advantage of the forthcoming hackathon: Calling for mentors and peer reviewers to join the EMR4All Hackathon.

cc @bennyange

We haven’t started work on it yet, but providing an AI prompt to consume Ozone Analytics flattened data, leveraging Meditron-3, is on Ozone’s roadmap (as of May 2025).

@jesplana has been experimenting with various tools, including Meditron, on top of the OpenMRS data model and could report back.