GSoC 2023: o3-Draw-on-body-diagram app - Project Updates & Discussion

Hello folks :wave:,

I will be working on the o3: Draw-on-body-diagram project for the OpenMRS community. It is a high-priority project for organizations like @ICRC and many more.

The main objective of this project is to improve the diagramming feature in OpenMRS, a medical record system, by allowing the upload of any diagram as an image, annotating areas of the diagram with different shapes, saving and retrieving these diagrams, and downloading them as images with the edited annotations. The project has two parts: essential and desirable objectives (more to come in further implementation). The former provides basic functionality for uploading and editing diagrams, while the latter enhances the user experience by adding more features, such as drawing free shapes and downloading annotated diagrams for easy sharing and printing.

This thread will be used for project updates and discussions.


Primary: @heshan :raised_hands:

Backup : @jayasanka :raised_hands:

Student: @thembo42 :innocent:


Project Details (still being worked on): Link

The ICRC war surgery manual: Link

Istanbul protocol to document torture/ill-treatment: Link

All technical thoughts are welcome!

@grace @jesplana @pauladams @cduffy @mksd @ibacher @ball @johnblack @mogoodrich @burke @jayasanka @kdaud @heshan


FIRST MENTOR MEETING :partying_face: :partying_face:

When: 2023-05-12T12:00:00Z

Where: Google meet

Link: GSoC mentor meeting — Friday, May 12 (Africa/Nairobi time zone), Google Meet. Video call link:


  1. Pro Tips and Guidance

  2. Project Timeline (realistic breakdown)

  3. Technologies Required (update a few things)

  4. UI/UX Designs (moving forward, I would love to hear from the @ICRC team)

@heshan @jayasanka @kdaud @jesplana and all that are interested :partying_face:


Some random thoughts:

  • Avoid building a body-specific solution when an image notation solution that can use a body could also be used for notation of other clinical images
  • Provide methods for defining regions and then answering questions like “what notations are in this region” and “which regions include this notation”
  • Design a portable solution that can be introduced where needed in the app rather than making assumptions about where the diagram tool will be used.
  • Make it really easy to add a simple notation, since the most common use case for diagrams like this is one or two notations (e.g., show where a specific finding is located, with detail provided elsewhere in a clinical note)

Thanks so much @burke, that was insightful.

If I got you right (you have detailed portability, flexibility, and scalability):

  1. Body-specific vs. general image notation: To avoid building a body-specific solution, the diagramming feature should be designed in a way that can be used to annotate any type of clinical image, not just body diagrams. This would require designing a flexible annotation tool that can be applied to different types of images.

Some general questions that could help shape a more reliable solution and build a well-rounded, grounded understanding of the whole project. Feel free to drop your thoughts. :hugs:

  1. How will the diagramming feature be integrated into the existing OpenMRS system? Will it be developed as a standalone micro-frontend or integrated into an existing module? How will it interact with the Java and Spring components of the core OpenMRS system?

  2. What image formats will be supported for uploading and annotating images? Will the diagramming feature be able to handle large images, such as high-resolution medical images? (partly addressed in the proposal)

  3. How will annotations be stored and retrieved from the system? Will they be stored as part of the patient record or in a separate database? How will annotations be associated with specific patients and encounters?

  4. What user roles and permissions will be needed for the diagramming feature? Will different roles have different levels of access or permissions for creating, editing, or viewing annotations?

  5. How will the diagramming feature be tested and validated (both UI/UX and system functionality), both during development and after release? What metrics will be used to measure the success of the feature?

  6. What overall tools and libraries will be used for developing the diagramming feature? Will they be compatible with the existing OpenMRS technology stack?

  7. How will the project be scoped and prioritized, given the constraints of the GSoC timeline and the size of the project? What features or functionality should be considered essential versus desirable, and how will they be implemented and tested?

Some insights were shared in the proposal, but now we can scope it down to the needs of OpenMRS and other orgs. I suppose answering these will get me up to speed for the next phase.

@heshan @jayasanka @ibacher @burke @grace @jesplana


Previously I had come up with an epic that was mainly tailored to refactoring the legacy drawing module; I know refactoring is quite expensive.

I suppose I need guidance on whether to continue with refactoring or start a fresh module (I know @heshan & @jayasanka will guide me here).

This partly answers question one;

Portable solution: To ensure the diagramming feature can be introduced where needed in the app, it should be designed as a portable component that can be integrated into different parts of the OpenMRS system as needed. This might involve using standard APIs or designing a modular architecture that can be easily customized.


On May 12, 2023 at 3:00 PM I had my first mentor meeting. It was insightful, with the agenda covered and a main focus on the workflow of uploading an annotated diagram with the best data structure and API implementation. Out of curiosity, I first looked at the "Attachments" module (GitHub - openmrs/openmrs-module-attachments: UI components and backend web services to upload, view and manage attachments within OpenMRS), which provides functionality for uploading and managing file attachments, including images.

Testing it out, it lacked some APIs (though it has most of the functionality; I will need guidance on how to still use it to avoid re-inventing the wheel @heshan @ibacher @dkayiwa @jayasanka). I know I can integrate the necessary APIs from the Attachments module into this project to enable image upload functionality, but I think we can look at the new implementations and then advise accordingly.


**BACK-END FOCUS**

I wanted to provide an update on the progress I've made so far on the scoped-down solution we discussed with @heshan. This week, the focus has been on allowing users to upload annotated images, storing them in the database, and implementing the necessary APIs for storage, retrieval, and display on the frontend.

Here are the details of the implementation:

  1. Database Implementation:
  • Created the required tables in the database to store image files and annotations.
  • The Images table includes columns for filename, filesize, filetype, metadata, and data (LONGBLOB type).

Screenshot from 2023-05-16 18-01-46

  • The DiagramAnnotations table includes columns for diagram ID, x and y coordinates, and description.

Screenshot from 2023-05-16 18-03-04

  2. API Implementations:
  • Implemented the necessary API endpoints for handling image upload, retrieval, storage, and download.
  • The ImageController class includes methods for handling HTTP requests related to image operations.
  • The endpoints accept the necessary parameters, such as file uploads, and interact with the corresponding service methods.
  3. Service and Repository Implementations:
  • Implemented the ImageService class responsible for handling the business logic of image operations.
  • Implemented the ImageRepository class to interact with the database and perform CRUD operations on images.
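The Java classes themselves aren't shown here, but as a sketch of the shapes the frontend would consume from these endpoints (field names are assumed from the tables above; the coordinate check is just an illustrative helper, not part of the actual implementation):

```typescript
// Hypothetical shapes mirroring the Images and DiagramAnnotations tables above.
interface DiagramImage {
  id: number;
  filename: string;
  filesize: number; // bytes
  filetype: string; // MIME type, e.g. "image/png"
  metadata?: string;
  // the LONGBLOB `data` column would typically be omitted from list
  // responses and fetched separately by id
}

interface DiagramAnnotation {
  id: number;
  diagramId: number; // FK to DiagramImage.id
  x: number;         // pixel coordinates on the source image
  y: number;
  description: string;
}

// Sanity check a frontend might run before persisting an annotation:
// the point must fall inside the image bounds.
function isInsideImage(a: DiagramAnnotation, width: number, height: number): boolean {
  return a.x >= 0 && a.y >= 0 && a.x <= width && a.y <= height;
}
```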

Preparing a demo for this

Please let me know if there are any specific aspects you would like me to focus on or if there are any additional requirements or suggestions you have for the project. I’m looking forward to your feedback.

Thank you

Hi @burke and @thembo42 , please see my comments below:

  • Avoid building a body-specific solution when an image notation solution that can use a body could also be used for notation of other clinical images
    Could we select from a template? For example, select the diagram name from a drop down, then the diagram loads. Once loaded, then we can start drawing or add notation. Additionally, the list from the drop down can also be based on the gender and other variables.

  • Provide methods for defining regions and then answering questions like “what notations are in this region” and “which regions include this notation”
    → Could we then have the region question in the notation pop-up window? Also, could we pre-filter the diagram templates based on the region? For example, the user selects "Lower Limb (LL) - Right" as the region, and the list is filtered to show only matching diagrams. But I'm not sure if this can be saved in the table (i.e. obs_group to store region info, value_text to store the notation, etc.?)


Sure, this makes perfect sense. You shared something similar some time back, @jesplana.

@burke, does this make it possible to have varied clinical images? I suppose the data structure will handle the different diagrams. Did I get this right?

I suppose it's possible to display a prompt or a form field asking the user to define the region. This way, when the notation is saved, it will be associated with the corresponding region information.

I suppose this is also possible. Referring to the DiagramAnnotations table I defined above, I would suggest adding region_id or region_name columns to store the region information associated with each annotation.

It needs quite a bit of modification though. I'm not sure if I got that right!
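To make the region questions concrete, here is a minimal sketch of how the two queries @burke raised ("what notations are in this region" and "which regions include this notation") could be answered once a region_id column exists. All names are hypothetical:

```typescript
// Annotation row extended with the proposed region_id column.
interface RegionAnnotation {
  id: number;
  regionId: string;   // assumed new column, e.g. "LL-R" for Lower Limb - Right
  description: string;
}

// "What notations are in this region?"
function notationsInRegion(annotations: RegionAnnotation[], regionId: string): RegionAnnotation[] {
  return annotations.filter((a) => a.regionId === regionId);
}

// "Which regions include this notation?"
function regionsWithNotation(annotations: RegionAnnotation[], description: string): string[] {
  const regions = annotations
    .filter((a) => a.description === description)
    .map((a) => a.regionId);
  return [...new Set(regions)]; // de-duplicate
}
```

In practice these would be SQL queries or REST filters rather than in-memory filters, but the data relationships are the same.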

It's quite confusing to settle on a sufficient data structure here; the scope seems tricky :face_with_open_eyes_and_hand_over_mouth: @ibacher @heshan @jayasanka, can we scope this out? I'm in a valley between setting up the project with the Attachments module APIs and making modifications (I'm skeptical of the cost), or setting up a fresh structure and APIs (afraid of re-inventing the wheel).


When: 2023-05-19T12:00:00Z

Where: Google meet

Link: o3-Draw-on-body meeting — Friday, May 19, 3:00–4:00 PM (Africa/Nairobi time zone), Google Meet. Video call link:


  1. Updates on the week's tasks
  2. Tips and guidance moving forward
  3. AOB

@heshan @jayasanka @kdaud @jesplana and all that are interested :partying_face:



Quite a few tasks and learning areas have been covered over the last week. The main focus has been:

  1. Organizing the software engineering thought process around building this o3 app. My mentor @heshan took me through a scoped-down solution focusing on dividing the system into manageable pieces.


Looking at the data structure of my o3 application, especially the diagramming and annotating structure, I was challenged to visualize what users will expect to see on the front end and then the structure needed to realize this, i.e. User-Centric Design. So there was a need to update the previously designed database structure to include more design considerations:

Diagram table;

DiagramAnnotations table;


  • Looking at the entire UI/UX and scoping down to:
  1. Visualizing the list of diagrams created

  2. Create-diagram-with-annotation visual workflow

  3. Upload-diagram visual workflow

  • APIs for:
  1. Fetching diagrams from the DB to the front end
  2. An array of diagrams
  3. Creating a diagram
  4. Deleting a diagram
  5. Replicating diagrams

Further testing of the APIs of the Attachments module (GitHub - openmrs/openmrs-module-attachments: UI components and backend web services to upload, view and manage attachments within OpenMRS).

@jayasanka you could guide me on leveraging this REST API

Getting a refresher in Spring.

Thanks @heshan and @jayasanka

cc/ @grace @jesplana @burke @ibacher @reagan @mozzy



A few questions I used to gather requirements that I hope will address this week's tasks (annotated diagram upload):

  • In what situation do they need to draw something on the body in an EMR? During the admission or assessment process, doctors will indicate which part of the body was wounded or needs a specific operation.

    Screenshot from 2023-05-26 10-19-19

  • What is the number one thing you want to do on the screen, especially the editing/annotating screen?

  1. First, the user needs to be able to fill in a form related to the admission or the assessment being conducted,

    Screenshot from 2023-05-26 10-26-10

  2. then when necessary, choose a template to document the injury or body parts involved for the intervention.

    Screenshot from 2023-05-26 10-27-45

  3. draw on the image

    Screenshot from 2023-05-26 10-28-33

  4. image is saved as part of the form

  5. Be able to view all of the images/attachments for the patient Screenshot from 2023-05-26 10-29-51

    Screenshot from 2023-05-26 10-32-17

  6. Do a side-by-side comparison (but this can be done by viewing in a new window)

Screenshot from 2023-05-26 10-34-25

Screenshot from 2023-05-26 10-38-30

Request: GET /diagrams


    "id": 1,
    "filename": "diagram1.png",
    "filesize": 1024,
    "filetype": "image/png",
    "description": "Pain measurement",
    "created_at": "2023-05-17T10:30:00Z",
    "updated_at": "2023-05-17T10:30:00Z",
    "annotations": [
        "id": 1,
        "diagram_id": 1,
        "x_coordinate": 100,
        "y_coordinate": 200,
        "description": "saures",
        "created_at": "2023-05-26:45:00Z",
        "updated_at": "2023-05-26:45:00Z"
        "id": 2,
        "diagram_id": 1,
        "x_coordinate": 300,
        "y_coordinate": 400,
        "description": "stars",
        "created_at": "2023-05-26:00:00Z",
        "updated_at": "2023-05-26:00:00Z"
    "id": 2,
    "filename": "diagram2.png",
    "filesize": 2048,
    "filetype": "image/png",
    "description": "RTI scan",
    "created_at": "2023-05-26:15:00Z",
    "updated_at": "2023-05-26:15:00Z",
    "annotations": [
        "id": 3,
        "diagram_id": 2,
        "x_coordinate": 150,
        "y_coordinate": 250,
        "description": "circles",
        "created_at": "2023-05-17T11:30:00Z",
        "updated_at": "2023-05-17T11:30:00Z"

More of the API implementation will be seen in week one, which is next week. I welcome suggestions and adjustments to the designs as we prepare for the week one coding period.

Thank you

@heshan @jayasanka @cduffy @pauladams @jesplana



Yesterday I had a fruitful meeting with @ICRC's own @jesplana and my mentor @heshan.

We had a comprehensive discussion of the o3-Draw-on-body-diagram feature, covering where and how it should be implemented.

  • The agenda for the meeting was discussed and introductions were made.

  • @heshan requested feedback on project requirements and usage.

  • @jesplana said;

    • The company needs a specific feature to be implemented

    • Need to regionalize wound data for analysis.

    • The app needs to be able to annotate wounds and attach X-rays.

    • The system needs to allow for annotating diagrams and attaching multiple images.

    • we can schedule calls with clinicians to validate mock-ups

    • Develop a configurable template for documenting injuries and a way to upload images.

    • Implement a simple gallery for clinicians to annotate diagrams.

    • Create a gallery for template loading and attach it to a form

    • Implement documentation and image annotation for wound classification

    • Develop standalone module and integrate into form builder for patient workflows

    • Develop a mechanism to attach diagrams to patients and forms.

    • Simplify foundation, copy basic features, expand on new/existing features.

    • Revamp table of project tasks and create timeline.

    • Make the MVP available in the demo to get more hits and build on it.

    Discussion about the attachment module and form builder for product development

    • The module should be aimed for OpenMRS demo where it’s live.

Workflow of drawing and annotation;

Screenshot from 2023-05-31 23-39-10

@jayasanka @burke @ibacher @michaelbontyes @ball @mksd



With an aim to scope down the feature and get a:

  1. Quickly deployable
  2. Unique
  3. Exciting feature

From the discussions, I suppose this feature will be foundational for other orgs to pick up and enhance, given the 3-month timeline I have. So my aim now is to put everything into implementing three main functional features:

  1. Template loading
  2. Gallery of diagrams/images
  3. Attachment to form entry

So this will be the project task list;

Any suggestions for improvement are welcome (thoughts about the breakdown).

| Week | Tasks | Deliverables |
|------|-------|--------------|
| 1 | Requirement gathering and analysis | Finalized requirements and prioritized feature list |
|   | Design and architecture | System architecture design and data model |
|   | Backend development | Initial backend implementation |
| 2 | Backend development | Completed backend APIs and services |
|   | Frontend development | Initial frontend interface |
| 3 | Frontend development | Refined frontend interface |
|   | Integration and testing | Integration with OpenMRS O3 and Attachment module |
| 4 | Integration and testing | Thoroughly tested and bug-fixed application |
|   | Refinement and documentation | Updated documentation and user guides |
| 5 | Refinement and documentation | Finalized documentation and addressed usability feedback |
|   | Deployment and demo | Prepared application for deployment to OpenMRS demo |
| 6 | Deployment and demo | Functional MVP deployed on OpenMRS demo |
|   | Stakeholder feedback and feature prioritization | Prioritized feature list based on feedback |
| 7 | Backend development | Implemented regionalized wound data functionality (I suppose this is specific to @icrc) |
|   | Frontend development | Implemented diagram annotation and image attachment |
| 8 | Backend development | Completed template configuration and upload functionality |
|   | Frontend development | Implemented gallery for template loading |
| 9 | Integration and testing | Tested integration with Form Builder module |
|   | Refinement and bug fixing | Addressed bugs and performance issues |
| 10 | Refinement and bug fixing | Refined user interface based on feedback |
|    | Documentation and finalization | Completed documentation and user guides |
| 11 | Deployment and demo | Conducted final testing and bug fixing |
|    | Stakeholder presentation and feedback collection | Gathered feedback on MVP and future enhancements |
| 12 | Final refinement and bug fixing | Addressed any remaining issues and polished the solution |
|    | Final documentation and project conclusion | Completed documentation and concluded the project |

Just to get me started with week 1 task;

How does the existing OpenMRS data model align with the requirements of the wound data regionalization, annotation, and image attachment features?

I want to ensure that my implementation aligns with the OpenMRS data model and allows for smooth integration and interoperability within the OpenMRS ecosystem.

I need guidance here :innocent:

@ibacher @burke @mksd @michaelbontyes @heshan @jayasanka @jesplana @dkayiwa

Dedicating more time to pilot work

I had a few forks in the road last week;

Then I jumped on a call with OpenMRS's own @jayasanka and @ICRC's own @jesplana for a live demo of the 2.0 drawing module instance:

Users (patients, clinicians, providers) workflow

2.0 @ICRC instance of the drawing module

A few notes;

  1. o3 does not have a drawing module, which is a requirement for orgs

  2. Develop an MVP with drawing, annotations, saving, and attaching to a form

  3. Implement the image attachment feature as simply as possible

  4. The forms need to be stored (they need to be hard-coded for now, or can be configured)

  5. Use obs grouping for multiple images

  6. Develop a UI to annotate a predefined image and save it
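Point 5 (obs grouping for multiple images) could look roughly like this on the REST side. This is a hedged sketch: the exact concept UUIDs and the complex-obs value handling still need to be confirmed, so all identifiers below are placeholders:

```typescript
// Sketch of an obs-group payload in the shape the OpenMRS REST API expects:
// a parent obs whose groupMembers each carry one image reference.
interface ObsPayload {
  concept: string;           // concept UUID (placeholder values in this sketch)
  person: string;            // patient UUID
  obsDatetime: string;       // ISO timestamp
  value?: string;            // e.g. a complex-obs value reference
  groupMembers?: ObsPayload[];
}

function buildImageObsGroup(
  patientUuid: string,
  groupingConceptUuid: string,
  imageConceptUuid: string,
  imageRefs: string[],       // one entry per annotated image
  when: string,
): ObsPayload {
  return {
    concept: groupingConceptUuid,
    person: patientUuid,
    obsDatetime: when,
    groupMembers: imageRefs.map((ref) => ({
      concept: imageConceptUuid,
      person: patientUuid,
      obsDatetime: when,
      value: ref,
    })),
  };
}
```

The grouping concept would be a construct-type concept; each member obs is a complex obs pointing at one stored image.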

Check out the implementation tools here



Week 02 is here:

Some thought-provoking questions from my mentor @heshan:

What the architecture should look like is actually your decision to make. The logical model of the system is something we expect you to come up with. Consider these few options when you're coming up with a solution.


  1. Does the current OpenMRS backend already have the features that supported the 2.0 version of this?
  2. Where exactly should this solution fit into the current O3 system?
  3. What should the implementation look like for the end user, and what views, APIs, and database structures do we need to facilitate them?

Go through the current O3 system and the previous 2.0 drawing module thoroughly in order to answer those questions.

Having read a bit of the documentation, I responded as follows:


We already have the backend Java attachments module and a micro-frontend attachments app to handle images. So yes, some of the features already exist; there will just need to be a bit of custom control using AMPATH forms to support annotating images (and saving the annotated images as complex obs within encounters) for now, without going into the whole complexity of measuring the coordinates of the annotation(s), which we can improve later.
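As a sketch of how the widget might hand an annotated image to the Attachments module, here is the request shape I have in mind. The endpoint path and field names (patient, fileCaption, file) are assumptions to verify against the module's REST documentation:

```typescript
// Hedged sketch: describes the POST that would persist an annotated image
// through the Attachments module. Endpoint and field names are assumptions.
interface AttachmentRequest {
  url: string;
  method: "POST";
  fields: { patient: string; fileCaption: string; file: string };
}

function buildAttachmentRequest(
  patientUuid: string,
  caption: string,
  filename: string, // in the browser this would be the annotated-image File/Blob
): AttachmentRequest {
  return {
    url: "/openmrs/ws/rest/v1/attachment",
    method: "POST",
    fields: { patient: patientUuid, fileCaption: caption, file: filename },
  };
}
```

In the browser, the fields would be appended to a FormData and sent with fetch(url, { method, body }) using the session's auth headers.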


It should become a module, a component, or a micro front-end app within the o3 framework. Specifically, I suppose having a fresh openmrs-esm-drawing-app built from openmrs-esm-template and then added to openmrs-esm-patient-chart as a dependency? Just thinking it could live within the esm-patient-chart app's package directory, just as the esm-attachment-app does. Does this make sense, or is there a more seamless way?


For now; Views:

  1. UI for selecting predefined body diagrams
  2. Drawing and annotating on the diagram (if there isn't much UI/UX research, can we use a powerful frontend drawing lib like reactannotate or the OHIF viewer?)
  3. Attaching images, reviewing/editing annotations

APIs: I suppose we can leverage existing APIs and enhance them along the way.

Data structure: I suppose we can consider the complex obs feature for managing the data model, unless advised otherwise. I'm not yet familiar with all these nuances, but I hope to learn along the way.

Looking at the implementation tools at Seeking Clarity and Guidance for Implementing the Unique o3-Draw-on-body-diagram app in OpenMRS 3.x - #10 by thembo42 especially thoughts shared by @mksd and @mozzy

Does this seem like a possible solution strategy?

@samuel34 @ibacher @dennis @vasharma05 @heshan @dkayiwa


It's been a while since I made an update; I have not been idle though.

I have engaged stakeholders, the likes of @jesplana @dennis @ibacher @achachiez @mksd @samuel34 @mozzy @burke @anjisvj, and can't forget my mentors @heshan and @jayasanka …among others, to validate the work in progress. Thanks all!

Just a summary of the work I have been doing or discussing:


We are going to encapsulate our widget into an "Ampath Forms" control; this control will eventually be available for use within forms. This means people will directly consume this within forms, and it will persist attachments as complex obs.

More community thoughts here;


At the moment we don’t support custom-components yet on the React version.

However, there are work arounds forexample two reslotions were reached to in a discussion;

  1. Native Incorporation, or
  2. Extension

Native Incorporation

Imagine having "annotation tools" native to the workspace.

This involved natively incorporating support for the widget within the form engine itself. It would mean integrating the drawing widget as a core feature of the OHRI form engine, allowing seamless usage and tight integration with other form-related functionalities.


Extension

This explored the option of utilizing an extension to integrate the drawing widget with the existing form engine. In this case, the drawing widget would be developed as a separate extension module that can be added to the form engine as an optional feature.

Sharing State between Workspace Items and Tools:

Strategies for sharing state were discussed, including the use of observables. I suppose the order basket is a workspace tool.

It's all about sharing state between items on the workspace and some natively existing workspace tools.
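As an illustration of the observable idea: O3 would typically use RxJS, but this tiny stand-in shows the subscribe/next contract for sharing state between the drawing widget and other workspace tools (all names are illustrative):

```typescript
// Minimal observable sketch for sharing state between workspace items.
type Listener<T> = (value: T) => void;

class SimpleObservable<T> {
  private listeners: Listener<T>[] = [];
  constructor(private value: T) {}

  // Subscribe a listener; it immediately receives the current value,
  // like an RxJS BehaviorSubject. Returns an unsubscribe function.
  subscribe(fn: Listener<T>): () => void {
    this.listeners.push(fn);
    fn(this.value);
    return () => {
      this.listeners = this.listeners.filter((l) => l !== fn);
    };
  }

  // Publish a new value to all current subscribers.
  next(value: T): void {
    this.value = value;
    this.listeners.forEach((l) => l(value));
  }
}

// e.g. the drawing widget publishes the active annotation; other tools react.
const activeAnnotation = new SimpleObservable<string | null>(null);
```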

File Picker Development:

The file picker component, which plays a crucial role in the functionality of the drawing widget, was identified as a work in progress. The need for further development and enhancement of the file picker component to meet the specific requirements of the project was acknowledged. The file picker serves as a user interface element that enables users to select and manipulate files, such as images, which, once added, can be used for annotation within the drawing widget.

If the file picker/rendered file type is to be improved, then this could be the process workflow between the file picker and the drawing widget:

  1. User Interaction : The user initiates the process by interacting with the file picker component. This can be done by clicking on a button or a specific area within the form interface that triggers the file picker functionality.

  2. File Selection: The file picker component provides a user interface for selecting a file from the local file system or other available sources. The user can navigate through directories, choose a file, and confirm the selection.

  3. File Metadata and Data Transfer: Once the user selects a file, the file picker component retrieves the metadata associated with the selected file, such as file name, file type, and file size. The file picker component then transfers the file data to the drawing widget for further processing and display.

  4. Drawing Widget Integration: The drawing widget receives the file data from the file picker component. It processes the file data and renders the image within its interface. The user can now interact with the drawing widget to annotate or edit the displayed image.

  5. Annotating and Editing: The user utilizes the drawing widget’s editing tools and functionality to annotate the image. This can include drawing shapes, adding text, highlighting areas, or applying various editing effects. The user’s annotations and edits are reflected in real-time within the drawing widget.

  6. Saving or Submitting: After the user completes the desired annotations and edits, they can choose to save or submit the annotated image. The drawing widget captures the annotated image along with any associated metadata or additional data required for form submission.

  7. Data Handling: The drawing widget processes the annotated image and prepares it for further handling within the form engine. This may involve compressing the image, converting it to a suitable format, or applying any necessary transformations or adjustments.

  8. Integration with Form Engine: The annotated image, along with any other form data, is integrated with the form engine. It becomes part of the complex observation or data entry within the form, allowing for comprehensive patient charting or data recording.
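The steps above can be sketched as pure data transformations; every name here is illustrative, not an actual O3 API:

```typescript
// Step 2-3: what the file picker hands over (metadata + data URL).
interface PickedFile { name: string; type: string; size: number; dataUrl: string; }
// Step 5: one annotation drawn on the image.
interface Shape { kind: "circle" | "arrow" | "text"; x: number; y: number; label?: string; }
interface AnnotatedImage { source: PickedFile; shapes: Shape[]; }

// Step 4: the drawing widget receives the file and starts with no annotations.
function loadIntoWidget(file: PickedFile): AnnotatedImage {
  return { source: file, shapes: [] };
}

// Step 5: each edit returns a new state (real-time rendering happens elsewhere).
function annotate(img: AnnotatedImage, shape: Shape): AnnotatedImage {
  return { ...img, shapes: [...img.shapes, shape] };
}

// Steps 6-8: capture the result in a form-engine-friendly payload
// (in practice this would become a complex obs within the encounter).
function toFormPayload(img: AnnotatedImage): { filename: string; annotationCount: number } {
  return { filename: img.source.name, annotationCount: img.shapes.length };
}
```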


Does this scale, and does it make sense?

With this, all engines can invoke this widget with information about the image, edit it and save it back.

Where does the OpenMRS form come in? The functionality to annotate an image should not be tied to the form engine, but the end goal here is to have the ability to annotate diagrams in forms.

@dkibet @ibacher @samuel34 @dennis @mksd any thoughts here??


The esm-draw-app

Created a fresh new React front-end module from the esm-template-app for the implementation.

Made some initial project setup and configuration, and implemented the drawing widget; @dennis was kind enough to make a PR and bump more configs.

Merged at:

(fix) Various fixes to get this frontend module working by denniskigen · Pull Request #1 · jona42-ui/openmrs-esm-draw · GitHub

community discussion: Slack

The annotation tool

@heshan @jayasanka @grace