My Fellowship Journey: Cliff Gita

I am Cliff Gita, from Kampala, Uganda, and I have been involved with the OpenMRS community since January 2019 as a developer. I am currently volunteering on the Platform 2.4.0 release.

I am very excited and happy to join the first OpenMRS Fellowship 2020/2021 as a fellow in the area of development, focusing on the PLIR, FHIR, and Analytics Engine projects for OpenMRS. I will be working directly with @mozzy as my fellowship mentor.

In the first month (November) of the fellowship:

  • Learned and read about the project, getting more familiar with the various frameworks used in the PLIR and analytics work.

  • Contributed to the analytics engine, FHIR, and PLIR repos.

  • Attended calls with my mentor and fellow partner to discuss the best way to pull off the proof of concept for PLIR.

  • Attended the weekly FHIR/PLIR and analytics calls.


Hello everyone

In the past few weeks as a fellow, I continued getting more familiar with the analytics engine streaming modes plus the OpenHIM framework. I am currently working on shifting the Debezium-specific settings into a JSON config file.
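To illustrate the idea of moving Debezium settings out of code, such a JSON config file might be shaped like the sketch below. The property names loosely follow Debezium's standard MySQL connector options, but the exact keys and file layout here are assumptions for illustration, not the analytics engine's actual format:

```json
{
  "databaseHostName": "localhost",
  "databasePort": "3306",
  "databaseUser": "openmrs",
  "databasePassword": "changeme",
  "databaseServerName": "openmrs-server",
  "databaseIncludeList": "openmrs",
  "snapshotMode": "initial"
}
```

Keeping these in one file means the pipeline can be pointed at a different database without recompiling anything.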

I also worked on the HAPI FHIR JPA server to enable it to support basic authentication for any client apps making requests. I have been engaged in weekly calls and holding sessions with colleagues; thanks to @akimaina.
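For context, HTTP Basic Authentication just means the client sends a base64-encoded `username:password` pair in the `Authorization` header. A minimal sketch of what a client app would attach to each request (the credentials are placeholders):

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """Build the Authorization header a client sends for HTTP Basic auth."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

# Example with placeholder credentials:
headers = basic_auth_header("user", "pass")
print(headers["Authorization"])  # Basic dXNlcjpwYXNz
```

On the server side, the interceptor decodes this header and compares the pair against the configured credentials before letting the request through.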

I also wrote a blog post on the modes used in the analytics engine.

@mozzy @k.joseph @jennifer @ayesh


Hello community

In the past three weeks, I have been working mainly on the analytics engine, i.e.:

  1. Fixing the Debezium-events-to-FHIR mapping where the uuid is missing for specific tables
  2. Adding code formatting to the analytics XML files
  3. Finishing up moving the Debezium configs to a JSON config file
  4. Adding the necessary Javadoc to util methods

I have also been holding personal calls with my mentor (@mozzy) and @akimaina.

The next tasks are:

  • Start working on tasks related to contributing to the development of the OpenHIM mediator for integration with OpenCR

  • Getting up to speed and helping to define FHIR Measure resources and to integrate QA into the analytics work

@k.joseph @jennifer @grace


Hello community

In the past few weeks I have been finishing up tasks in the analytics engine related to the PLIR work, and focusing more on the $collect-data operation within the HAPI FHIR server, which can process the Measure resource to extract the data necessary for calculating the TX_PVLS indicator.

I am currently working on the integration tests for the recently implemented collect-data operation, testing different behaviors and making sure the respective data is returned from the server as per the defined measure expression.
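For reference, `$collect-data` is a standard FHIR Measure operation. A request against it typically looks like the sketch below; the measure id and reporting period here are made up:

```
GET [base]/Measure/TX-PVLS/$collect-data?periodStart=2020-01-01&periodEnd=2020-12-31
```

Per the FHIR spec, the response is a `Parameters` resource carrying a `MeasureReport` plus the resources that would be involved in the calculation, which is exactly what the integration tests can assert on.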

I am also working on reading hard-coded Basic Auth credentials from a properties file in HAPI FHIR, and having mentorship sessions with my mentors @mozzy and @k.joseph.

The next tasks will be working on items emerging from possible CQL integrations into PLIR from other teams like IntraHealth, plus spiking on e2e testing for the Debezium binlog streaming in the analytics engine.

@jennifer @grace


Hello community,

In the previous days I finished up some tasks in the analytics engine and the HAPI FHIR JPA server, i.e. the integration tests for the collect-data operation, and debugged an issue related to reading hard-coded Basic Auth credentials from a YAML file.

I started work in the analytics engine on the end-to-end integration testing for the Debezium binlog streaming, and am also finalizing other tasks related to the PLIR work there.

I also continued attending the weekly PLIR and FHIR squad calls and having fellowship sessions with my mentor @mozzy.


great work @gcliff


In the past two weeks:

I started working on the end-to-end testing of the stream-mode pipeline of the analytics engine, and am also finishing up other issues in the analytics engine needed for the AMPATH deployment.

I am also continuing to have weekly fellowship sessions with @mozzy & @k.joseph, plus the analytics & FHIR/PLIR weekly calls.


great work @gcliff


Hello everyone

In the past two weeks I have been finishing up some issues in the analytics engine, particularly for the e2e testing of the binlog pipeline.

I also started on some tasks in the fhir2 module and am holding sessions with my mentor @mozzy.

For the next week, I'll focus on creating a QA framework for the PLIR integration setup.


Hello everyone,

In the last two weeks of March I have been developing a bash script for the end-to-end test of the stream pipeline of the analytics engine. The task also involved refactoring the end-to-end batch script to factor out the pieces common to both scripts, i.e. setting up the test environment (spinning up the OpenMRS and FHIR Docker containers). This has given me my first real exposure to shell scripting, taking a deep dive into scripts and becoming familiar with how they work.

I have also been doing some work in the fhir2 module, basically implementing the _has search parameter on the ServiceRequest resource referring to Observation. This issue has pushed me to dig into the fhir2 module's DAO code to learn how its Java patterns are used to generate the SQL queries that store and retrieve data from the OpenMRS DB (thanks to @ibacher for the mentoring). During this process I came to appreciate the beauty of the Hibernate framework and the Criteria API in mapping Java classes to database tables and Java data types to SQL data types. I also learned more about the Java 8 Stream and Optional APIs plus lambda expressions, which provide a clear and concise way to represent a single-method interface as an expression when working with the collections library.
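As an aside, FHIR reverse chaining with `_has` follows the pattern `_has:{sourceType}:{referenceParam}:{param}`. A hypothetical query for ServiceRequests that some finalized Observation points at through its `based-on` reference would look like:

```
GET [base]/ServiceRequest?_has:Observation:based-on:status=final
```

The DAO work is essentially about translating that reverse reference into the right join between the underlying OpenMRS tables.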

I have also been doing more personal study of the analytics batch-mode code to better understand how the segmentation between batch FHIR and batch JDBC is done using the search segment descriptor. I also looked into some related concepts, i.e. Parquet files and the beauty they bring by storing data in a columnar format for easy storage and querying, plus the flattening of tables in DBs.
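The columnar idea can be illustrated without Parquet itself: storing each field as its own array means an aggregation only has to read the columns it needs. A toy sketch in plain Python (the records are made up):

```python
# Row-oriented layout: each record keeps all of its fields together.
rows = [
    {"patient_id": 1, "gender": "F", "viral_load": 850},
    {"patient_id": 2, "gender": "M", "viral_load": 120},
    {"patient_id": 3, "gender": "F", "viral_load": 40},
]

# Column-oriented layout (what Parquet does on disk): one array per field.
columns = {
    "patient_id": [1, 2, 3],
    "gender": ["F", "M", "F"],
    "viral_load": [850, 120, 40],
}

# A query like "average viral load" scans a single contiguous column
# instead of touching every field of every row.
avg = sum(columns["viral_load"]) / len(columns["viral_load"])
print(avg)  # ~336.67
```

Columnar files also compress better, since values of one type and domain sit next to each other, which is part of why Parquet suits the analytics warehouse.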

I am having sessions with my mentor @mozzy and weekly check-ins with @k.joseph & @jennifer.

I am planning to continue the work in analytics and fhir2, plus dockerizing the PLIR widget, focusing on QA within the FHIR/PLIR project.


Hello folks

For the past 17 days, I have been working on setting up a remote instance of OpenMRS with the Debezium pipeline running, since this is needed for a full remote setup of the PLIR project so we can run tests remotely with ease; currently we only have OpenHIM and HAPI FHIR remotely, as seen here… This has helped me learn more about Docker containerization and how to configure Docker Compose files, i.e. mapping volumes and setting up a dependency hierarchy for executing different services from a single file. For example, for this use case I had to come up with a single Compose file containing the MySQL, OpenMRS reference application, and binlog-streaming services. I also got insights into deploying the Docker Compose files used to set up the OpenMRS infrastructure, for example how the proxy servers are used, plus the configuration of various DNS names for different applications and how the proxy server restricts all of this to port 443 for HTTPS client requests and port 80 for HTTP client requests.
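A Compose file with those three services and a dependency hierarchy might be shaped like the sketch below; the service names, images, and volumes are illustrative assumptions, not the actual project files:

```yaml
version: "3.7"
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: openmrs
    volumes:
      - mysql-data:/var/lib/mysql   # persist the DB between runs

  openmrs:
    image: openmrs/openmrs-reference-application-distro:latest
    depends_on:
      - mysql                       # wait for the database service

  streaming-binlog:
    build: .                        # the Debezium streaming pipeline
    depends_on:
      - openmrs                     # needs OpenMRS (and its binlog) up

volumes:
  mysql-data:
```

`depends_on` only orders container startup; in practice the streaming service still needs retry logic, since "started" is not the same as "ready to accept connections".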

I continued work in the analytics engine, i.e. getting more familiar with how the indicator calculation is done on the streamed data in the data warehouse, which in our case is data in the form of Parquet files. This has helped me learn more about the Spark API and its fundamental cluster architecture, plus the Python API for Spark and pyspark.sql, which are the tools being used for report and PEPFAR indicator calculation to generate reports for analysis as part of the MVP for the analytics work.

Next tasks:

Continuing work in analytics, mainly around the indicator definition library, and learning about tools and frameworks used in big-data processing.

Start working on issues related to e2e testing for the PLIR work; we are looking into the possibility of using the HIE automation testing framework.


Hello everyone,

In the past three weeks I have been working on the reporting side of the analytics engine to introduce time-based aggregation of data in the indicator library, since currently we only have age-based and gender aggregation. This has helped me learn more about the PySpark API and the pyspark.sql module, doing pivots and flattening of data in Parquet files for better aggregations. I also had sessions with @akimaina from AMPATH about the best structural approach to implement the time-based aggregation in the sample TX_PVLS indicator that was implemented in the analytics engine.
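The pivoting logic behind such a time-based disaggregation can be sketched in plain Python; PySpark's `groupBy(...).pivot(...)` does the same thing at scale. The sample observations and quarter buckets below are made up for illustration:

```python
from collections import defaultdict
from datetime import date

# Toy viral-load observations: (gender, observation date, suppressed?)
records = [
    ("F", date(2021, 1, 15), True),
    ("M", date(2021, 2, 3), False),
    ("F", date(2021, 4, 20), True),
    ("M", date(2021, 5, 1), True),
]

def quarter(d: date) -> str:
    """Bucket a date into a reporting quarter like '2021-Q1'."""
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

# Pivot: rows keyed by gender, columns keyed by quarter, values = counts.
pivot = defaultdict(lambda: defaultdict(int))
for gender, d, suppressed in records:
    if suppressed:                      # count only suppressed results
        pivot[gender][quarter(d)] += 1

print(dict(pivot["F"]))  # {'2021-Q1': 1, '2021-Q2': 1}
```

Adding the time dimension on top of the existing gender and age buckets is then just one more key in the grouping, which is why deciding the structure up front with @akimaina mattered.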

I am also learning more about the fhir2 module and its DAO-level workings: how the joins between various tables generate the necessary pseudo-SQL to extract the needed data from OpenMRS using the Hibernate Criteria API. I am looking into taking up more tasks around the HIE automation testing framework for the PLIR work.

I finished work on the end-to-end test for stream mode and the refactoring of the batch-mode script (pending review), and also implemented the time-based aggregation in the library (pending some review). Regarding the infrastructure for the remote OpenMRS and Debezium pipeline for our e2e testing for PLIR, we are still seeing if we can leverage the Amazon credits allocated to OpenMRS to have these instances up and running. cc @jennifer & @k.joseph

I am having weekly sessions with my mentor @mozzy to continue the work in FHIR and the analytics engine.


Hello everyone

I have been reading more and getting familiar with CQL, which we are going to use to build TX_CURR (number of adults and children currently receiving antiretroviral therapy) as an extra POC CQL-based indicator calculation in addition to TX_PVLS. Using CQL we shall define the CQL logic and all the CQL-related resources for the TX_CURR calculation in a typical HIE setup.

I have also learnt more about the data model domains and how to query the data in different tables using HQL and the Hibernate Criteria API while implementing the basedOn and _has search parameter tasks in FHIR. I also continued work in the analytics engine; regarding the time-based disaggregation task I was implementing on top of the age-based disaggregation, I was advised by Bashir to halt work on it until the ETL reporting module is more mature and we know exactly how we want to use it for PEPFAR reporting.

In addition to my main fellowship tasks and goals, I have been doing deep personal extra study in other areas, around concepts, how they work, and the concept domain.

I have also had fellowship evaluations with my mentor @mozzy and am continuing to have more sessions and weekly fellowship check-ins.

Next task:

This coming month I am going to be focusing on the epic sprint for building the TX_CURR indicator and all the related sub-tasks around it as my final work for the PLIR project.



Hello folks,

As we are winding down the fellowship journey for the PLIR work, I am working on a mini epic sprint for the TX_CURR indicator, through which I have implemented the CQL logic for calculating the TX_CURR indicator as a second POC for PLIR, in addition to TX_PVLS. This has helped me learn more about the CQL framework and how it can be used to develop logic for various indicators.

Through the sprint I have implemented the TX_CURR Measure resource with the expression targeting the coded concepts CIEL:160119 (currently taking ARV) and CIEL:1065 (yes). This Measure resource references the needed TX_CURR Library resource, which holds the embedded CQL logic, encoded with base64, for calculating the indicator; the Library resource also depends on CIEL and FHIRHelpers to do the calculation.
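The base64 embedding mentioned above can be sketched as follows: the CQL text goes into the Library resource's `content` attachment, whose `data` field is base64-encoded per the FHIR Attachment datatype. The CQL fragment and resource id below are made up for illustration:

```python
import base64
import json

# A made-up fragment of CQL logic, for illustration only.
cql_logic = "library TXCURR version '1.0.0'\nusing FHIR version '4.0.1'\n"

library = {
    "resourceType": "Library",
    "id": "TX-CURR",
    "content": [{
        "contentType": "text/cql",
        # FHIR attachments carry their payload base64-encoded.
        "data": base64.b64encode(cql_logic.encode("utf-8")).decode("ascii"),
    }],
}

# Decoding the attachment recovers the original CQL text.
decoded = base64.b64decode(library["content"][0]["data"]).decode("utf-8")
assert decoded == cql_logic
print(json.dumps(library)[:40])
```

The Measure resource then just points at this Library by reference, and the evaluation engine decodes and compiles the CQL at run time.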

I am currently working on automation tests for the TX_CURR indicator using the OpenMRS HIE automation framework, adding the TX_CURR Measure and Library resources to the PLIR dockerized setup, and also finishing up pending tasks in the analytics engine (pending review) plus fhir2.

I am continuing to have sessions with my mentor @mozzy and am also doing final evaluations.

I have added the wiki page setup for the new indicator (TX_CURR).

The next task:

   Integrate a CI build pipeline for the PLIR Automation Tests

Hello community

Final Reflections

As we wrap up the PLIR fellowship project at the end of this month, I have completed all the pending tasks, which were to finish the TX_CURR indicator calculation sprint, add the automation testing for it using the HIE automation framework, and integrate a CI build pipeline for the PLIR automation tests as the second proof of concept.

Special thanks to my mentors @mozzy and @k.joseph for their wonderful technical guidance. Allow me to extend sincere thanks to @jennifer and @grace for their incredible PLIR project management.

I have created the slide deck and video presentation below, summarizing all my work during my fellowship journey.

Slide Deck

Final Video Presentation

Take care, Cliff


Thanks a lot @gcliff for this update. Could you please share the slide deck so the public can access it?


Thanks @k.joseph,

should be clear now