I will be working on the O3: Draw-on-body-diagram project for the OpenMRS community, a high-priority feature for orgs like @ICRC and many more.
The main objective of this project is to improve the diagramming feature in OpenMRS, a medical record system, by allowing any diagram to be uploaded as an image, annotating areas of the diagram with different shapes, saving and retrieving these diagrams, and downloading them as images with the edited annotations. The project has two parts: essential and desirable objectives (more to come as implementation proceeds). The former provides basic functionality for uploading and editing diagrams, while the latter enhances the user experience with features such as drawing free shapes and downloading annotated diagrams for easy sharing and printing.
This thread will be used for project updates and discussions.
Avoid building a body-specific solution when an image notation solution that can use a body could also be used for notation of other clinical images
Provide methods for defining regions and then answering questions like "what notations are in this region" and "which regions include this notation"
Design a portable solution that can be introduced where needed in the app rather than making assumptions about where the diagram tool will be used.
Make it really easy to add a simple notation, since the most common use case for diagrams like this is one or two notations (e.g., show where a specific finding is located, with detail provided elsewhere in a clinical note)
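To make the two region questions above concrete, here is a minimal TypeScript sketch, assuming regions can be approximated as rectangles and notations as points (all names are illustrative, not an agreed API):

```typescript
// Sketch: regions as rectangles on the diagram, notations as points.
// All names here are illustrative, not an agreed API.

interface Region {
  id: string;
  name: string;          // e.g. "Lower Limb (LL) - Right"
  x: number; y: number;  // top-left corner, in image pixel coordinates
  width: number; height: number;
}

interface Notation {
  id: string;
  x: number; y: number;  // point on the diagram
  description: string;
}

// Is a notation inside a region?
function contains(r: Region, n: Notation): boolean {
  return n.x >= r.x && n.x <= r.x + r.width &&
         n.y >= r.y && n.y <= r.y + r.height;
}

// "What notations are in this region?"
function notationsInRegion(r: Region, ns: Notation[]): Notation[] {
  return ns.filter((n) => contains(r, n));
}

// "Which regions include this notation?"
function regionsContaining(n: Notation, rs: Region[]): Region[] {
  return rs.filter((r) => contains(r, n));
}
```

Real body regions would likely need polygons rather than rectangles, but the query shape stays the same.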
If I understood you correctly, the points you have detailed boil down to portability, flexibility, and scalability.
Body-specific vs. general image notation: To avoid building a body-specific solution, the diagramming feature should be designed in a way that can be used to annotate any type of clinical image, not just body diagrams. This would require designing a flexible annotation tool that can be applied to different types of images.
Some general questions that should be answered to build a more reliable solution and gain a well-rounded, grounded understanding of the whole project. Feel free to drop your thoughts.
How will the diagramming feature be integrated into the existing OpenMRS system? Will it be developed as a standalone micro-frontend or integrated into an existing module? How will it interact with the Java and Spring components of the core OpenMRS system?
What image formats will be supported for uploading and annotating images? Will the diagramming feature be able to handle large images, such as high-resolution medical images? (Addressed somewhat in the proposal.)
How will annotations be stored and retrieved from the system? Will they be stored as part of the patient record or in a separate database? How will annotations be associated with specific patients and encounters?
What user roles and permissions will be needed for the diagramming feature? Will different roles have different levels of access or permissions for creating, editing, or viewing annotations?
How will the diagramming feature be tested and validated (both UI/UX and system functionality), during development and after release? What metrics will be used to measure the success of the feature?
What overall tools and libraries will be used for developing the diagramming feature? Will they be compatible with the existing OpenMRS technology stack?
How will the project be scoped and prioritized, given the constraints of the GSoC timeline and the size of the project? What features or functionality should be considered essential versus desirable, and how will they be implemented and tested?
Some insights were shared in the proposal, but now we can scope it down to the needs of OpenMRS and other orgs.
I suppose answering these will get me up to speed for the next phase.
Portable solution: To ensure the diagramming feature can be introduced where needed in the app, it should be designed as a portable component that can be integrated into different parts of the OpenMRS system as needed. This might involve using standard APIs or designing a modular architecture that can be easily customized.
Testing it out, it lacked some APIs (though it has most of the functionality; I will need guidance on how to still use it to avoid re-inventing the wheel @heshan @ibacher @dkayiwa @jayasanka).
I know I can integrate the necessary APIs from the Attachments module into this project to enable image upload functionality, but I think we can look at the new implementations and then advise accordingly.
I wanted to provide an update on the progress I've made so far on the scoped-down solution we discussed with @heshan. This week, the focus has been on allowing users to upload annotated images, storing them in the database, and implementing the necessary APIs for storage, retrieval, and display on the frontend.
Here are the details of the implementation:
Database Implementation:
Created the required tables in the database to store image files and annotations.
The Images table includes columns for filename, filesize, filetype, metadata, and data (LONGBLOB type).
The DiagramAnnotations table includes columns for diagram ID, x and y coordinates, and description.
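For illustration, the two tables could be mirrored as record types like this (field names follow the columns listed above; the actual DDL lives in the backend, so this is only a sketch):

```typescript
// Illustrative TypeScript mirror of the two tables described above.
// Column names follow the post; the real schema is defined in the backend.

interface ImageRow {
  id: number;
  filename: string;
  filesize: number;     // bytes
  filetype: string;     // MIME type, e.g. "image/png"
  metadata: string;     // free-form metadata, e.g. a JSON string
  data: Uint8Array;     // maps to the LONGBLOB column
}

interface DiagramAnnotationRow {
  id: number;
  diagramId: number;    // foreign key to ImageRow.id
  x: number;            // annotation coordinates on the diagram
  y: number;
  description: string;
}
```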
API Implementations:
Implemented the necessary API endpoints for handling image upload, retrieval, storage, and download.
The ImageController class includes methods for handling HTTP requests related to image operations.
The endpoints accept the necessary parameters, such as file uploads, and interact with the corresponding service methods.
Service and Repository Implementations:
Implemented the ImageService class responsible for handling the business logic of image operations.
Implemented the ImageRepository class to interact with the database and perform CRUD operations on images.
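Since the actual classes are Java/Spring, here is only a language-neutral sketch in TypeScript of how the service and repository layers might fit together, with an in-memory map standing in for the database (all names beyond ImageService/ImageRepository are illustrative):

```typescript
// In-memory stand-in for ImageRepository + ImageService.
// The real classes are Java/Spring; method names here are illustrative.

interface StoredImage {
  id: number;
  filename: string;
  filetype: string;
  data: Uint8Array;
}

class ImageRepository {
  private rows = new Map<number, StoredImage>();
  private nextId = 1;

  save(image: Omit<StoredImage, "id">): StoredImage {
    const row = { id: this.nextId++, ...image };
    this.rows.set(row.id, row);
    return row;
  }
  findById(id: number): StoredImage | undefined {
    return this.rows.get(id);
  }
  deleteById(id: number): boolean {
    return this.rows.delete(id);
  }
}

class ImageService {
  constructor(private repo: ImageRepository) {}

  // Example business rule: reject empty uploads.
  upload(filename: string, filetype: string, data: Uint8Array): StoredImage {
    if (data.length === 0) throw new Error("empty file");
    return this.repo.save({ filename, filetype, data });
  }
  get(id: number): StoredImage | undefined {
    return this.repo.findById(id);
  }
}
```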
I am preparing a demo of this.
Please let me know if there are any specific aspects you would like me to focus on or if there are any additional requirements or suggestions you have for the project. I'm looking forward to your feedback.
Avoid building a body-specific solution when an image notation solution that can use a body could also be used for notation of other clinical images
→ Could we select from a template? For example, select the diagram name from a drop-down, then the diagram loads. Once loaded, we can start drawing or adding notation. Additionally, the drop-down list could also be filtered by gender and other variables.
Provide methods for defining regions and then answering questions like "what notations are in this region" and "which regions include this notation" → Could we then have the region question in the notation pop-up window? Also, could we pre-filter the diagram templates based on the region? For example, the user selects "Lower Limb (LL) - Right" as the region, and only matching diagrams are shown. But I'm not sure if this can be saved in the table (i.e. obs_group to store region info, value_text to store the notation, etc.?)
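As a sketch of the pre-filtering idea, assuming templates carry a region and an optional gender attribute (the type and function names here are hypothetical):

```typescript
// Hypothetical template metadata used to pre-filter the drop-down list.

interface DiagramTemplate {
  name: string;
  region: string;               // e.g. "Lower Limb (LL) - Right"
  gender?: "M" | "F";           // undefined = applies to any gender
}

// Filter the template list by the selected region (and, optionally, gender).
function filterTemplates(
  templates: DiagramTemplate[],
  region: string,
  gender?: "M" | "F",
): DiagramTemplate[] {
  return templates.filter(
    (t) => t.region === region && (t.gender === undefined || t.gender === gender),
  );
}
```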
@burke does this make it possible to have varied clinical images? I suppose the data structure will handle the different diagrams. Did I get this right?
I suppose it's possible to display a prompt or a form field asking the user to define the region. This way, when the notation is saved, it will be associated with the corresponding region information.
I suppose this is also possible. Referring to the DiagramAnnotations table I defined above, I would suggest adding a region_id or region_name column to store the region information associated with each annotation.
Needs quite a bit of modification though.
I'm not sure if I got that right!
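To sketch the column-based approach suggested above (a region_name column on DiagramAnnotations), the region questions reduce to simple filters; field names here are my suggestion, not final:

```typescript
// Sketch of the DiagramAnnotations row extended with the suggested
// region column. Field names are a suggestion, not final.

interface DiagramAnnotation {
  diagramId: number;
  x: number;
  y: number;
  description: string;
  regionName: string; // maps to the proposed region_id/region_name column
}

// "What notations are in this region?" becomes a simple column filter.
function annotationsInRegion(
  annotations: DiagramAnnotation[],
  regionName: string,
): DiagramAnnotation[] {
  return annotations.filter((a) => a.regionName === regionName);
}

// Group all annotations on a diagram by their stored region.
function groupByRegion(
  annotations: DiagramAnnotation[],
): Map<string, DiagramAnnotation[]> {
  const groups = new Map<string, DiagramAnnotation[]>();
  for (const a of annotations) {
    const bucket = groups.get(a.regionName) ?? [];
    bucket.push(a);
    groups.set(a.regionName, bucket);
  }
  return groups;
}
```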
It's quite confusing to settle on a data structure sufficient here; the scope seems tricky. @ibacher @heshan @jayasanka, can we scope this out? I'm torn between setting up the project with the Attachments module APIs and making modifications (I'm skeptical of the cost) or setting up a fresh structure and APIs (I'm afraid of re-inventing the wheel).
Link: O3 Draw-on-body meetings
Friday, May 19 · 3:00 - 4:00 pm
Time zone: Africa/Nairobi
Google Meet joining info
Video call link: https://meet.google.com/qne-ewgm-vpv
Quite a few tasks and learning areas have been covered over the last week. The main focus has been:
Organizing the software engineering thought process around building this o3 app.
My mentor @heshan took me through a scoped-down solution focusing on dividing the system into manageable pieces.
Firstly:
Looking at the data structure of my O3 application, especially the diagramming and annotating structure, I was challenged to visualize what users will expect to see on the front end, and then the structure needed to realize that, i.e. user-centric design. So there was a need to update the previously designed database structure to account for these design considerations.
A few questions I used to gather requirements, which I hope will address this week's task (annotated diagram upload):
In what situations do they need to draw something on the body in an EMR?
During the admission or assessment process, doctors indicate which part of the body was wounded or needs a specific operation.
What is the number one thing you want to do on the screen, especially the editing/annotating screen?
First, the user needs to be able to fill in a form related to the admission or the assessment being conducted,
then when necessary, choose a template to document the injury or body parts involved for the intervention.
draw on the image
image is saved as part of the form
Be able to view all of the images/attachments for the patient
Do a side-by-side comparison (but this can be done by viewing in a new window)
More of the API implementation will come in week one, that is, next week.
I want to welcome suggestions and adjustments to the designs as we prepare for week one coding period.
From our discussions, I suppose this feature will be foundational for other orgs to pick up and enhance, given the three-month timeline I have. So my aim now is to put everything into implementing three main functional features:
Template loading
Gallery of diagrams/images
Attachment to form entry
So this will be the project task list:
Any suggestions for improvement are welcome (thoughts about the breakdown).
| Week | Tasks | Deliverables |
| --- | --- | --- |
| 1 | Requirement gathering and analysis | Finalized requirements and prioritized feature list |
|   | Design and architecture | System architecture design and data model |
|   | Backend development | Initial backend implementation |
| 2 | Backend development | Completed backend APIs and services |
|   | Frontend development | Initial frontend interface |
| 3 | Frontend development | Refined frontend interface |
|   | Integration and testing | Integration with OpenMRS O3 and Attachment module |
| 4 | Integration and testing | Thoroughly tested and bug-fixed application |
|   | Refinement and documentation | Updated documentation and user guides |
| 5 | Refinement and documentation | Finalized documentation and addressed usability feedback |
|   | Deployment and demo | Prepared application for deployment to OpenMRS demo |
| 6 | Deployment and demo | Functional MVP deployed on OpenMRS demo |
|   | Stakeholder feedback and feature prioritization | Prioritized feature list based on feedback |
| 7 | Backend development | Implemented regionalized wound data functionality (I suppose this is specific to @icrc) |
|   | Frontend development | Implemented diagram annotation and image attachment |
| 8 | Backend development | Completed template configuration and upload functionality |
|   | Frontend development | Implemented gallery for template loading |
| 9 | Integration and testing | Tested integration with Form Builder module |
|   | Refinement and bug fixing | Addressed bugs and performance issues |
| 10 | Refinement and bug fixing | Refined user interface based on feedback |
|   | Documentation and finalization | Completed documentation and user guides |
| 11 | Deployment and demo | Conducted final testing and bug fixing |
|   | Stakeholder presentation and feedback collection | Gathered feedback on MVP and future enhancements |
| 12 | Final refinement and bug fixing | Addressed any remaining issues and polished the solution |
|   | Final documentation and project conclusion | Completed documentation and concluded the project |
Just to get me started on the week 1 tasks:
How does the existing OpenMRS data model align with the requirements of the wound data regionalization, annotation, and image attachment features?
I want to ensure that my implementation aligns with the OpenMRS data model and allows for smooth integration and interoperability within the OpenMRS ecosystem.
Some thought-provoking questions from my mentor @heshan:
What the architecture should look like is actually your decision to make. The logical model of the system is something we expect you to come up with. Consider these few options when you're coming up with a solution.
Questions:
Does the current OpenMRS backend already have the features that supported the 2.0 version of this?
Where exactly should this solution fit into the current O3 system?
What should the implementation look like for the end user, and what views, APIs, and database structures do we need to facilitate them?
Go through the current O3 system and the previous 2.0 drawing module thoroughly in order to answer those questions.
Having read a bit of the documentation, I responded as follows:
Question 1:
We already have the backend Java Attachments module and a micro frontend attachments app to handle images.
So yes, some of the features exist; we will just need a bit of custom control using AMPATH forms to support annotating images (and save the annotated images as complex obs within encounters), for now without going into the whole complexity of measuring the coordinates of the annotation(s), which we can improve later.
Question 2:
It should become a module, a component, or a micro frontend app within the O3 framework. Specifically,
I suppose we could have a fresh openmrs-esm-drawing-app built from openmrs-esm-template, and then add it to openmrs-esm-patient-chart as a dependency? Just thinking it could live within the esm-patient-chart app's package directory, just as the esm-attachment-app does. Does this make sense, or is there a more seamless way?
Question 3:
For now:
Views:
UI for selecting predefined body diagrams
Drawing and annotating on the diagram (if there isn't much UI/UX research, can we use a powerful frontend drawing lib like reactannotate or the OHIF viewer?)
Attaching images, reviewing/editing annotations.
APIs:
I suppose we can leverage existing APIs and enhance them along the way.
Data Structure:
I suppose we can consider the complex obs feature for managing the data model, unless advised otherwise.
Though I'm apparently not familiar with all these nuances, I hope to learn along the way.
Just a summary of the work I have been doing and discussing:
The AMPATH Forms
We are going to encapsulate our widget into an "AMPATH Forms" control; this control will eventually be available for use within forms. This means people will consume it directly within forms, and it will persist attachments as complex obs.
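To illustrate what consuming such a control within a form might look like, here is a hypothetical AMPATH Forms question definition; the "drawing" rendering type and the concept value are placeholders I made up, not existing engine features:

```typescript
// Hypothetical AMPATH Forms question for the drawing control.
// "drawing" is NOT an existing rendering type in the engine; it only
// marks where the widget would plug in. The concept value is a placeholder.

const drawingQuestion = {
  label: "Annotated body diagram",
  type: "obs",                                   // persists as a (complex) obs
  id: "bodyDiagram",
  questionOptions: {
    rendering: "drawing",                        // hypothetical rendering type
    concept: "placeholder-complex-obs-concept",  // placeholder concept reference
  },
};
```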
At the moment, we don't yet support custom components in the React version.
However, there are workarounds; for example, two resolutions were reached in a discussion:
Native incorporation, or
Extension:
Native Incorporation
Imagine having "annotation tools" native to the workspace.
This involves natively incorporating support for the widget within the form engine itself. It would mean integrating the drawing widget as a core feature of the OHRI form engine, allowing seamless usage and tight integration with other form-related functionalities.
Extension
This explores the option of using an extension to integrate the drawing widget with the existing form engine. In this case, the drawing widget would be developed as a separate extension module that can be added to the form engine as an optional feature.
Sharing State between Workspace Items and Tools:
Strategies for sharing state were discussed, including the use of observables.
I suppose the order basket is a workspace tool.
It's all about sharing state between items on the workspace and some natively existing workspace tools.
File Picker Development:
The file picker component, which plays a crucial role in the functionality of the drawing widget, was identified as a work in progress. The need for further development and enhancement of the file picker component to meet the specific requirements of the project was acknowledged. The file picker serves as a user interface element that enables users to select and manipulate files, such as images, which can then be used for annotation within the drawing widget.
If the file picker / file-type rendering is to be improved, then this could be the process workflow between the file picker and the drawing widget:
User Interaction : The user initiates the process by interacting with the file picker component. This can be done by clicking on a button or a specific area within the form interface that triggers the file picker functionality.
File Selection: The file picker component provides a user interface for selecting a file from the local file system or other available sources. The user can navigate through directories, choose a file, and confirm the selection.
File Metadata and Data Transfer: Once the user selects a file, the file picker component retrieves the metadata associated with the selected file, such as file name, file type, and file size. The file picker component then transfers the file data to the drawing widget for further processing and display.
Drawing Widget Integration: The drawing widget receives the file data from the file picker component. It processes the file data and renders the image within its interface. The user can now interact with the drawing widget to annotate or edit the displayed image.
Annotating and Editing: The user utilizes the drawing widgetâs editing tools and functionality to annotate the image. This can include drawing shapes, adding text, highlighting areas, or applying various editing effects. The userâs annotations and edits are reflected in real-time within the drawing widget.
Saving or Submitting: After the user completes the desired annotations and edits, they can choose to save or submit the annotated image. The drawing widget captures the annotated image along with any associated metadata or additional data required for form submission.
Data Handling: The drawing widget processes the annotated image and prepares it for further handling within the form engine. This may involve compressing the image, converting it to a suitable format, or applying any necessary transformations or adjustments.
Integration with Form Engine: The annotated image, along with any other form data, is integrated with the form engine. It becomes part of the complex observation or data entry within the form, allowing for comprehensive patient charting or data recording.
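The steps above can be sketched end to end as a tiny pipeline (all types and function names here are illustrative, not the real widget API):

```typescript
// Illustrative end-to-end pipeline: file picker -> drawing widget -> form engine.

interface PickedFile {
  name: string;
  type: string;        // MIME type
  size: number;        // bytes
  data: Uint8Array;
}

interface AnnotatedImage {
  file: PickedFile;
  annotations: { x: number; y: number; text: string }[];
}

// Steps 1-3: the picker hands the selected file (with metadata) to the widget.
function pickFile(data: Uint8Array, name: string, type: string): PickedFile {
  return { name, type, size: data.length, data };
}

// Steps 4-6: the widget collects annotations on the displayed image.
function annotate(
  file: PickedFile,
  annotations: AnnotatedImage["annotations"],
): AnnotatedImage {
  return { file, annotations };
}

// Steps 7-8: package the result as a complex-obs-style payload for the form engine.
function toComplexObsPayload(img: AnnotatedImage) {
  return {
    filename: img.file.name,
    filetype: img.file.type,
    annotationCount: img.annotations.length,
    data: img.file.data,
  };
}
```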
With this, all engines can invoke this widget with information about the image, edit it and save it back.
Where does the OpenMRS form come in? The functionality to annotate an image should not be tied to the form engine, but the end goal here is to have the ability to annotate diagrams in forms.