My Fellowship Journey: Cliff Gita

Hello everyone,

For the past 3 weeks I have been working on the reporting side of the analytics engine, introducing time-based aggregation into the indicator library, since currently we only have age- and gender-based aggregation. This has helped me learn more about the PySpark API and the pyspark.sql module, including doing pivots and flattening data in Parquet files for better aggregations. I also had sessions with @akimaina from AMPATH about the best structural approach for implementing time-based aggregation in the sample TX-PVLS indicator that was implemented in the analytics engine. A rough sketch of the pivot/flatten work is shown below.
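Here is a minimal sketch of the kind of pivot and flattening work in pyspark.sql. The Parquet path and column names (obs_date, viral_load, patient.gender, patient.age_group) are hypothetical placeholders, not the actual analytics engine schema:

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("indicator-aggregation").getOrCreate()

# Read observations exported to Parquet (hypothetical path).
obs = spark.read.parquet("/data/parquet/observations")

# Flatten the nested patient struct into top-level columns.
flat = obs.select(
    "obs_date",
    "viral_load",
    F.col("patient.gender").alias("gender"),
    F.col("patient.age_group").alias("age_group"),
)

# Pivot on gender so each age group becomes one row with per-gender counts.
pivoted = (
    flat.groupBy("age_group")
        .pivot("gender", ["M", "F"])
        .agg(F.count("viral_load"))
)
pivoted.show()
```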

I have also been learning more about the fhir2 module and how its DAO layer performs joins between the various tables, using the Hibernate Criteria API to generate the necessary pseudo-SQL and extract the needed data from OpenMRS. I am looking into taking up more tasks around the HIE automated testing framework for the PLIR work.

I finished the end-to-end work for streaming mode and the refactoring of the batch-mode script (pending review), and implemented the time-based aggregation in the library (also pending some review); a sketch of the aggregation approach is below. Regarding the infrastructure for the remote OpenMRS and Debezium pipeline for our PLIR e2e testing, we are still seeing if we can leverage the Amazon credits allocated to OpenMRS to have these instances up and running. cc @jennifer & @k.joseph
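For the time-based piece, here is a rough sketch of how monthly bucketing could look in PySpark. The DataFrame and its columns are again hypothetical, and the suppression cutoff of <1000 copies/ml is only the commonly cited TX-PVLS threshold, not necessarily what the library uses:

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("time-based-aggregation").getOrCreate()
flat = spark.read.parquet("/data/parquet/observations_flat")  # hypothetical path

# Truncate each observation date to the start of its month, then aggregate per
# month so a TX-PVLS-style indicator can be reported over time instead of only
# by age/gender.
monthly = (
    flat.withColumn("period", F.date_trunc("month", F.col("obs_date")))
        .groupBy("period")
        .agg(
            F.count("viral_load").alias("tested"),
            F.sum(F.when(F.col("viral_load") < 1000, 1).otherwise(0)).alias("suppressed"),
        )
)
monthly.orderBy("period").show()
```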

I am having weekly sessions with my mentor @mozzy to continue the work on FHIR and the analytics engine.
