GSoC 2024: Project Brainstorming

Hi all,

Unfortunately, we have to cancel the Project Brainstorming call that was scheduled for today. We’ve encountered availability issues, which have made it challenging to find a timeslot where everyone can participate. As a result, we’ll be continuing our discussions asynchronously.

Sorry for any inconvenience caused.


Hi @grace , we had the same thing in mind. I created a prototype a while back: Medguide Plus - a Hugging Face Space by jmesplana.

Also, here’s an idea for chatting with data using an LLM. I built a prototype using ChatGPT via Zapier, connected to Google Sheets. The idea is to attach it to any data source (e.g. OpenMRS) so users can chat with the data. Of course, we’d have to figure out a way to filter the data so that each user only accesses records they have permission to see. Here’s the prototype: clinical_data_analyst_walkthrough1

link to Medium post: Harnessing the Power of Integration: ChatGPT Meets Google Sheets via Zapier | by John Mark Esplana | Medium
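The record-level filtering mentioned above could be sketched like this. This is a hypothetical TypeScript sketch: the record shape and the location-based permission model are assumptions for illustration, not actual OpenMRS APIs. The point is that filtering happens before any data reaches the LLM.

```typescript
// Hypothetical sketch: before handing records to an LLM-backed chat,
// filter them down to what the requesting user may see.
// Record shape and permission model are assumptions, not OpenMRS APIs.

interface PatientRecord {
  patientId: string;
  locationId: string;
  data: Record<string, unknown>;
}

interface User {
  id: string;
  allowedLocations: Set<string>;
}

// Only records from locations the user is authorized for are exposed
// to the chat layer; everything else is dropped up front.
function filterAccessibleRecords(
  user: User,
  records: PatientRecord[]
): PatientRecord[] {
  return records.filter((r) => user.allowedLocations.has(r.locationId));
}
```

In a real deployment this check would of course defer to OpenMRS’s own privilege system rather than a hand-rolled location set.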


Hey @jayasanka, for UI form validation we can use Zod and RHF (React Hook Form), which will handle the logic. As you said, writing JavaScript expressions can be challenging for early developers and also leads to errors. I’m interested in this topic and will look into it. Happy to connect with you, @jayasanka.


Update re. the Machine Learning feature: after further discussion with my Machine Learning Engineer contact, he explained it would likely cost >$10,000 in hardware to host the model (Llama2-70B @ 10 tokens/second) for the Robot Resident feature I described above (since no production org site would be able to use OpenAI’s hosted APIs to exchange real patient data). That seems mismatched with the resources organizations have on hand, and with the low (or nonexistent) community need for this Robot Resident feature idea - at least for now.

@jesplana the reporting query idea is super interesting. For now though, the same hardware limitations/costs would apply, because in production, I assume we don’t want the data going through OpenAI’s servers. Does that impact your thinking at all? If it’s okay to send the data over an API call to an OpenAI server, then that’s another story. (Also there’s the Enterprise Licensing Option since OpenAI claims not to train on that data; but of course, it’s another node.)


p.s. - As soon as our Wiki Migration (to the newest version, Confluence Cloud) is completed, we’ll suddenly be able to leverage the many GPT-Enabled Documentation/Support Chatbot tools in the Confluence marketplace :smiley: Atlassian Marketplace


Hi @grace , that’s exactly why we cannot move this to prod (yet). It’s expensive to host internally, policies are not yet drafted/finalized, and AI regulations remain dynamic and will continue to be as countries adapt and apply their own rules. The idea is to use a medical LLM (e.g. GitHub - epfLLM/meditron: Meditron is a suite of open-source medical Large Language Models (LLMs).) and run it internally within our organization’s infrastructure.

Another idea is to process the data inside OpenMRS (provided patients are aware of the type of processing to be conducted with their data and there is a legal basis for it), share the model using GitHub - epfml/disco: Decentralized & federated privacy-preserving ML training, using p2p networking, in JS, and run it as a module within OpenMRS to implement a CDSS.


Hey @grace, for UI-based validation I have an idea: we can use the existing Zod. Currently we use Zod for validation only, not for logical conditions, but with Zod we can also implement the conditional logic and make it reusable. Then, if someone wants to create a form in the future, they won’t struggle to implement Zod validation or conditional logic. I’ll share a sample web app with you by tomorrow for reference.
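A minimal, dependency-free sketch of the conditional-validation idea discussed here. Hand-rolled validators stand in for Zod so the snippet runs on its own; all names (`required`, `onlyWhen`, the pregnancy example) are illustrative assumptions, not part of any existing form builder.

```typescript
// Sketch of "UI-built validation rules with conditional logic".
// In the actual prototype Zod would supply the primitives; tiny
// hand-rolled validators stand in here so the sketch is self-contained.

type FormValues = Record<string, unknown>;
type Validator = (value: unknown, form: FormValues) => string | null;

// A basic rule a UI could offer as a dropdown option.
const required = (field: string): Validator => (v) =>
  v === undefined || v === null || v === "" ? `${field} is required` : null;

// Conditional wrapper: apply `rule` only when `when(form)` holds.
// This is the logic a UI rule builder would emit instead of raw JS.
const onlyWhen =
  (when: (form: FormValues) => boolean, rule: Validator): Validator =>
  (v, form) =>
    when(form) ? rule(v, form) : null;

// Run all rules for one field and collect the error messages.
function validateField(
  value: unknown,
  form: FormValues,
  rules: Validator[]
): string[] {
  return rules
    .map((r) => r(value, form))
    .filter((e): e is string => e !== null);
}
```

The idea is that the UI serializes choices like “lmpDate is required only when pregnant is true” into rule objects, which compile down to composed validators like `onlyWhen(...)` above, so form authors never write expressions by hand.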

Hey @senthilathiban, thanks for showing interest and sharing your thoughts! Our focus is on crafting a user-friendly interface for creating validation expressions in the same format we’re already using. This way, folks comfortable with manual input can still work their magic while enjoying the benefits of a UI. Plus, it’ll work seamlessly with our existing forms.


I am not sure I am totally following, but it is clear that we can’t send patient data to OpenAI’s or anyone else’s servers. One thing that might be interesting is to use AI to craft a query that could then be run on a local server: “Find me patients with this and that using the OpenMRS data model,” etc. That query could then be executed locally against the OpenMRS server.
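The flow described above could look roughly like this. A hedged TypeScript sketch: the LLM call is a stub (no real API is assumed), and the key property is that only the user’s question plus a schema description leave the server, never patient rows; the generated SQL is executed locally.

```typescript
// Sketch of the "AI crafts the query, data stays local" idea.
// Only metadata (question + schema hint) would ever go to the model;
// the returned SQL runs against the local OpenMRS database.

type QueryGenerator = (question: string, schemaHint: string) => string;

// Stub standing in for a real LLM call (an assumption, not a real API).
// A production version would call a locally hosted model instead.
const stubGenerator: QueryGenerator = (question, schemaHint) =>
  `SELECT person_id FROM obs /* schema: ${schemaHint} */ /* intent: ${question} */`;

function buildLocalQuery(question: string, generate: QueryGenerator): string {
  // Illustrative schema hint; a real implementation would describe
  // the relevant slice of the OpenMRS data model.
  const schemaHint = "openmrs.obs(person_id, concept_id, value_numeric)";
  return generate(question, schemaHint);
}
```

A production version would also need to validate the generated SQL (e.g. read-only, allow-listed tables) before executing it, since model output can’t be trusted blindly.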


So, model selection is key here, considering all the constraints of cost, accuracy, efficiency, compliance with evolving regulatory standards, and of course our use case.

As @jesplana said, selecting a model that breaks even, like Meditron or BERT, and is capable of on-premise deployment seems a viable solution.

And obviously responsible AI practices come in handy here, providing the accountability and other ethical safeguards we need to take note of.

Hey @jayasanka , @grace . I’ve developed a simple web app for UI-based form validation (like Tally forms). It’s built with React.js, and Zod is used for both validation and custom logic conditions. I built it as a validation rule builder for the form builder project idea. This way, we can create a UI-based form with custom logic, and newcomers don’t have to worry about validation and conditions: we provide UI-crafted validation that users can select from, based on the requirements of a particular form. Your feedback would be helpful to me. Thanks in advance. One last thing: apologies for the simple UI/UX and the lack of responsiveness; it’s just a sample prototype for your reference.

Hosted URL:

GitHub: GitHub - senthil-k8s/UI-Based-Form-Validation

ScreenRecorder : Loom | Free Screen & Video Recording Software | Loom

Hey @grace, @jayasanka, Looking for your comments.


Hello @jayasanka, I’m Harsh Dewangan, a 3rd-year student from NIT Raipur. I’m highly interested in the GSoC 2024 project on “Animated Loading Visualization”. With experience in SVG animation and React, I’m eager to contribute to this project idea and would love to discuss it.

  1. I have previously worked on my Classlocator project where I used an SVG map animation (Class Locator - Ground Floor)

  2. I have created a basic loader animation on the OpenMRS logo; please check it out and guide me

File : SVG file

I can also implement it using Lottie.


Hello Jose & @frederic.deniger. Can we have a discussion regarding

Building an Offline-Capable Android Application for LMICs Integrated with Enhanced OpenMRS 3.0 FHIR Module

Please help me understand the requirements for this project.

Any details about the Android project?

@frederic.deniger @grace @jwnasambu @jennifer @herbert24

Hi everyone and happy Easter :smile:

I was an OpenMRS GSoC student in 2014 (time flies, already 10y :sweat_smile:), and I’m now working in the field of IoT, wearables, and AI/ML as Head of Research.

I’d like to know if there is any interest in proposing, and helping to mentor, a Wearables and AI-Driven Healthcare Module.

With the huge trend toward wearables and continuous, remote patient monitoring and sensing, I think a module that collects patient data from IoT devices, wearables, or other monitoring tools, and then relies on a series of AI models to perform tasks like predictive healthcare, proactively raising alerts, and forecasting patient outcomes, could benefit the community and AI research for healthcare a lot.

What do you think about that? I’ve been involved in a few industrial/academic research projects in the field so I would be able to give some guidance on the AI/ML side, as well as relevant wearables data and their integration.

My concern, and what I don’t know, is the ethics side: what is the OpenMRS community’s current position on AI and patient data?

I’m looking forward to hearing your thoughts!


time really flies :grinning:

This is very interesting. Collecting patient data from wearables, such as heart rate, activity levels, sleep patterns, and vital signs, really opens up exciting possibilities for personalized medicine and early intervention.

I am not sure about the organizational needs and cost implications this may bring, or the affordability, given the resources orgs have at hand.


Well, I’m not worried about the cost involved with the AI side.

The ML models relevant for these use cases (IoT, wearables, monitoring data, etc.) are usually lightweight, and the inference part can run on a light CPU-only VM, so we can easily embed it directly in the module. That also resolves any privacy concerns.

The training part may need a GPU/TPU to be efficient and not too time-consuming, but we can rely on free tiers of Google Colab, for example, or apply for free TPU credits from Google (available to research groups and open-source projects).