Many modern EMRs include a voice transcription service that lets doctors focus on talking with their patients instead of typing on a keyboard during the visit.
I think doctors using OpenMRS would benefit from this technology, so I am planning to build this feature for the dev team to review and, hopefully, incorporate.
After reviewing the codebase and the current structure of the OpenMRS site, I plan to start by adding an AI transcription service to the Visit Note section that lets the doctor record the conversation and capture its key details.
Down the line it would be nice if this conversation could auto-populate the sections of the visit note that were covered, but for now I will focus on setting up the transcription functionality itself.
If folks have engineering thoughts or questions, I'd be happy to hear them. And if there are any doctor/clinician/healthcare-admin users of OpenMRS reading this, or anyone you could connect me with, I would love to talk to you.
I am planning to use OpenAI's Whisper API for the speech-to-text step, then pass the transcript to the GPT-4o model with a system prompt that turns the conversation into a format that is relevant and usable for the doctor.
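For concreteness, here is a minimal sketch of that two-step pipeline using the official `openai` Python package. The file name, system prompt wording, and function names are my own illustrative choices, not anything from the OpenMRS codebase; the script assumes `OPENAI_API_KEY` is set in the environment.

```python
"""Hypothetical sketch: Whisper transcription -> GPT-4o summarization.

Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
"""

# Illustrative system prompt; the real one would be tuned with clinicians.
SYSTEM_PROMPT = (
    "You are a clinical scribe. Rewrite the following visit transcript "
    "as a concise, structured visit note for the doctor."
)


def build_messages(transcript_text: str) -> list[dict]:
    """Pure helper: wrap the raw transcript in a chat-completion payload."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": transcript_text},
    ]


def transcribe_and_summarize(audio_path: str) -> str:
    """Send audio to Whisper, then the transcript to GPT-4o."""
    from openai import OpenAI  # deferred import so the helper above stays pure

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=f
        )
    completion = client.chat.completions.create(
        model="gpt-4o", messages=build_messages(transcript.text)
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    # "visit_audio.mp3" is a placeholder recording name.
    print(transcribe_and_summarize("visit_audio.mp3"))
```

In a real module the audio would come from the browser's recording API rather than a file on disk, but the server-side flow would look roughly like this.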
Can you talk a bit more about this? Are you planning to develop some kind of OpenMRS ESM module that can be plugged into an O3 Reference Application distribution?
Please keep in mind that for the solution to be used in production settings, patient data must not be sent to an outside service (such as a SaaS offering like ChatGPT), as this violates many countries' health data privacy laws and/or may breach patient consent. There is also the practical limitation of internet access at most OpenMRS sites, so a solution that can run performantly on a local machine (on-prem or a laptop) would be really excellent! Of course, using a web API call to a GPT service just for demo/test purposes is still very interesting; I just wanted to share these two limitations (privacy and internet) for our on-site settings.
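To illustrate the on-prem direction: the Whisper models are open-source and can run entirely locally via the `openai-whisper` package, so no patient audio leaves the machine. This is only a sketch under that assumption; the model-size heuristic and file name are illustrative, and the real choice would depend on each site's hardware.

```python
"""Hypothetical sketch of fully local transcription (no outside service).

Assumes `pip install openai-whisper`, which also requires ffmpeg.
"""


def pick_model_size(has_gpu: bool) -> str:
    """Heuristic only: smaller models keep CPU-only sites responsive,
    while a GPU can afford a more accurate model."""
    return "small" if has_gpu else "base"


def transcribe_locally(audio_path: str, has_gpu: bool = False) -> str:
    """Run Whisper inference on the local machine and return the text."""
    import whisper  # deferred import: pip install openai-whisper

    model = whisper.load_model(pick_model_size(has_gpu))
    result = model.transcribe(audio_path)
    return result["text"]


if __name__ == "__main__":
    # "visit_audio.mp3" is a placeholder recording name.
    print(transcribe_locally("visit_audio.mp3"))
```

A locally hosted instruction-tuned model could similarly replace the GPT-4o summarization step, though summarization quality on modest hardware would need evaluation.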