The promise of OpenMRS ETL: Advanced Reporting/Analytics for OpenMRS

Yes, potentially a more lightweight solution and also easier to deploy, e.g., you just add another module and configure it.

If it were my infrastructure, I would rather go through the pain to Dockerise my MySQL instances progressively and ensure that bin logging is turned on… rather than trying to install the Atom Feed module (and whatever else will be needed with it) 100+ times on instances that might still run ancient versions of OpenMRS.

I like the non-intrusive pattern enforced by Debezium; for me this is a big winner.

2 Likes

Thanks @mseaton for bringing this up! When I started working with Debezium I had the same concerns.

Practically speaking, deploying the Debezium CDC is quite easy; it would take about the same effort as installing a new module. It entails adding two lines to the MySQL config and then powering up the required container stack. We could write a bash script to automate this process. The config options of interest are:

log_bin           = mysql-bin
binlog_format     = row

Switching the binlog format is the easiest part; it can be done at runtime by running this query:

SET GLOBAL binlog_format = 'ROW';

However, the format cannot be switched while a replication channel thread is running, so we will need to make the switch after 5pm, when no one is performing data entry.

We should encourage the community to use the ROW-based binlog format instead of the statement-based one.

1 Like

@wyclif, thanks for the write-up.

Do I understand correctly that you hit a performance limit that makes you consider the switch from DB triggers to Debezium? What do you use for queuing right now? Could you please share some stats? We'd need more data on the setup to recommend a path forward.

Debezium may indeed be overkill for smaller setups. Can you leave a gate open to switch between DB triggers and Debezium depending on the scale of operation?

If we use this approach, it will be essential to upgrade atomfeed or create a new module that caters for the CDC processing guarantees downstream in the pipeline. Most importantly, can we guarantee data consistency between OpenMRS (the monolith) and the intermediate store of interest (lake/warehouse)? Remember, it is very easy to lose updates downstream in the analytical pipeline, something which Debezium is designed to prevent.

Operationally, Debezium/Kafka shields us from missing updates downstream in the analytical pipeline, even beyond the 'E' of ELT, and even in the case of power failure or internal server/application faults. Unlike Debezium/Kafka, most CDC systems fail to guarantee exactly-once processing semantics.

The question is: do we want to go down this path, or instead use a modern, out-of-the-box solution for our CDC requirement?

Hi @raff, @wyclif can provide further details, but the short answer is no. We are just testing tech stacks that enforce best and reusable practices. It looked like Debezium was doing off the shelf a lot of what we needed and could cover the whole 'E' process for multiple use cases: ETL for data analytics, pure low-level sync, data-out streams for EIP (with Camel for instance), etc.

It looks like @wyclif's approach with DB triggers is working perfectly fine. However, those triggers require maintenance and a setup that is more invasive than Debezium. They also imply keeping sync record journal tables within the OpenMRS database.

Again, @wyclif can confirm, but it appears that Debezium reduces the maintenance overhead.

2 Likes

I see. So you are still exploring your options and you haven't rolled out a solution yet.

In my opinion Debezium is the way to go in such a case. It comes at the price of needing beefier machines to run the whole stack, though that could be mitigated by using the embedded version, as @wyclif mentioned. Have a look at https://camel.apache.org/components/latest/debezium-mysql-component.html which nicely couples the Debezium embedded engine with Apache Camel. It may be a great fit, since you are already using Camel.
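
To make that concrete, here is a minimal, untested sketch of what such a route could look like with the embedded engine. The hostnames, credentials, file paths and the log endpoint are placeholders, and the exact option names differ slightly between Camel/Debezium versions, so the component documentation should be the reference:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

// Sketch only: a standalone Camel application that tails the OpenMRS MySQL binlog
// through the camel-debezium-mysql component (Debezium embedded engine, no Kafka).
public class OpenmrsDebeziumRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("debezium-mysql:openmrs-connector"
                + "?databaseHostname=localhost"
                + "&databasePort=3306"
                + "&databaseUser=debezium"
                + "&databasePassword=secret"
                + "&databaseServerId=85744"
                + "&databaseServerName=openmrs"
                // Offsets and schema history are persisted to disk so the route can
                // resume from the last processed binlog position after a restart.
                + "&offsetStorageFileName=/var/lib/debezium/offsets.dat"
                + "&databaseHistoryFileFilename=/var/lib/debezium/dbhistory.dat")
            // Each exchange is one change event: headers carry the operation and
            // source metadata, the body holds the row state.
            .log("Change from ${headers.CamelDebeziumSourceMetadata}: ${body}")
            .to("log:openmrs-cdc");
    }

    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.configure().addRoutesBuilder(new OpenmrsDebeziumRoute());
        main.run(args);
    }
}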

3 Likes

Thanks @raff for sharing this. This will solve most of the concerns we have been having. Apart from eliminating Kafka and ZooKeeper, I'm glad this component provides the ability to recover from faults. This will definitely work for small deployments as well as large-scale ones.

1 Like

We have options with all that we have tried so far (a basic management DB based on the last sync date, then more recently the DB triggers evolution), so at worst we have a plan B. But Debezium is starting to look like plan A :slight_smile:

Thanks for your input again @raff.

1 Like

@akimaina, well, large-scale is pretty vague, but it definitely gives you a fairly easy upgrade path to Kafka and clustering to handle truly large scale if you start hitting performance limits.

1 Like

I am afraid I do not fully understand the point here. I understand that in a truly distributed environment exactly-once semantics is a difficult problem, but I thought we established earlier that that is not the case here. Sure, OpenMRS is a monolith, but so can be the change stream processor that updates the warehouse, no? For example, in the atomfeed case, the state of both the feed producer and the consumer is tracked in the DB, and it is not clear to me how they could miss an update or process it twice (the ordering issue may be another story, though).

BTW, I mentioned atomfeed mostly as an example of another potential approach; relying on Debezium sounds like a great option too, as you mentioned. What I am not sure about is whether complicating the infrastructure with something like Kafka, which is mainly designed for a truly distributed system with many producers/consumers, is warranted for this problem.

One of the advantages that I can see of Debezium over our approaches that use the Hibernate interceptor is the ability to also capture database changes that happen outside of the OpenMRS application, for instance when one directly changes data in the database or does so via some other application.

@raff I think DB triggers work fine if you want to monitor a few tables and don't plan to keep adding new tables or changing the trigger logic; to be honest, triggers do the job pretty well. The primary reason why we are considering Debezium is that it addresses three things that had to be implemented manually in the code base, i.e. monitoring DB events and publishing them, while Apache Camel already has built-in components to bring the glue together. The main issue we're foreseeing with Debezium is how to avoid going in circles, i.e. replaying a change back to its origin: you don't want to update a patient on server A, have the event consumed by server B, and then during sync have the change written to server B's binlog and sent back to server A. This is not an issue with a one-way sync but would need to be addressed for a two-way sync.
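
For what it's worth, one way to break that loop is to track the changes that the sync consumer itself writes and drop the corresponding binlog events before they are published back. A rough, untested sketch (this is not the dbsync module's actual design; the in-memory registry and the uuid-based key are illustrative assumptions):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.camel.builder.RouteBuilder;
import org.apache.kafka.connect.data.Struct;

// Sketch only: filter out change events that are merely the local echo of a change
// the sync consumer just applied on behalf of the remote server.
public class ReplayLoopFilterRoute extends RouteBuilder {

    // uuids of rows the local sync consumer has just written for the remote server;
    // a real implementation would use a persistent store shared with the consumer.
    private final Set<String> appliedRemoteChanges = ConcurrentHashMap.newKeySet();

    @Override
    public void configure() {
        from("debezium-mysql:openmrs-connector") // connector options omitted for brevity
            .filter(exchange -> {
                // The body of a camel-debezium message is the row's after-state
                // (null for deletes, whose before-state would need separate handling).
                Struct after = exchange.getMessage().getBody(Struct.class);
                String uuid = after != null ? after.getString("uuid") : null;
                // Keep the event unless it matches a change we applied ourselves.
                return uuid == null || !appliedRemoteChanges.remove(uuid);
            })
            .to("direct:publish-to-remote");
    }
}

Whether that registry lives in memory, in a DB table, or as a marker column on the synced rows is a design decision in its own right.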

@bashir, as @dkayiwa pointed out, Sync 2.0, just like any other tool based on the Hibernate interceptors, has some holes, the most obvious being that you miss any change that takes place outside of a Hibernate transaction. The other big flaw with Sync 2.0 is that it uses the REST API to extract and load sync entities; the REST API is heavily based on the Java API with all the entity relationships, and it also excludes a lot of required fields that a REST client doesn't need to know about but that nevertheless need to be synced. I think REST is great for integration but not for DB sync.

Regards,

Wyclif

1 Like

That is a good point, but is this kind of DB change (from outside OpenMRS) common, beyond restoring snapshots? If so, this might be a deal breaker for some of the design options I mentioned above.

Thanks for pointing this out. I have not looked at the REST API much. For atomfeed/Sync 2.0, I have mostly been looking at using the FHIR API, which is provided alongside the REST API, AFAICT.

Sorry for not making myself clear. Exactly-once processing/delivery applies to any CDC component, including non-distributed systems. In this context, I'm referring to the propagation of changes between the OpenMRS CDC component and the warehouse, especially during the T of EL. If we are going to create a custom CDC, we will need to make this propagation process tolerant to disconnections/faults such as server restarts, e.g., by keeping track of offsets.

On a lighter note, if we need to avoid Kafka entirely, we can go for the Debezium-Camel option suggested by @raff and @mksd, which guarantees exactly-once semantics during normal operation. However, there is some possibility of duplicate delivery after an ungraceful shutdown.
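
If that duplicate delivery is a concern, the loading side can be made idempotent, e.g. by deduplicating on the binlog coordinates that Debezium puts in the source metadata. A small, untested sketch; the in-memory repository is for illustration only (a real deployment would want a persistent one), and the header/field names should be checked against the component docs for your version:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.support.processor.idempotent.MemoryIdempotentRepository;
import org.apache.kafka.connect.data.Struct;

// Sketch only: guard the warehouse loader against duplicate deliveries by skipping
// events whose binlog position has already been processed.
public class DedupedLoaderRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("direct:openmrs-change-events") // assumed to be fed by the Debezium route
            // Build a stable id for each event from its MySQL binlog coordinates.
            .process(exchange -> {
                Struct source = exchange.getMessage()
                        .getHeader("CamelDebeziumSourceMetadata", Struct.class);
                exchange.getMessage().setHeader("changeEventId",
                        source.getString("file") + ":" + source.getInt64("pos"));
            })
            // Drop the event if this id has been seen before.
            .idempotentConsumer(header("changeEventId"),
                    MemoryIdempotentRepository.memoryIdempotentRepository(10_000))
            .to("direct:load-into-warehouse");
    }
}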

I just want to throw my endorsement and full support behind this thread. It is very exciting for me to see this conversation happening like this, and especially to see so many people articulating the problems and pros/cons of various solutions that I have also felt and expressed but have not articulated as clearly as this thread is doing. I'm really encouraged by where this is heading!

5 Likes

We just had a great OpenMRS ETL/ELT Stand-Up! Over 20 participants from the OpenMRS community joined together to share their work in the space of ETL/ELT solutions.

  • VecnaCares presented their data warehouse work
  • AMPATH presented their ETL architecture as well as ideas for the next generation architecture
  • @bashir (from Google) presented a portion of his ideas & questions for the community

We definitely need to continue this conversation. There were several more people who had ideas & experience to share and we had little to no time for discussion. So, stay tuned for round 2!

For those who couldn't make today's call, you can review it here:

2020-05-20 OpenMRS ETL/ELT Stand-Up

6 Likes

This whole conversation is one of the highlights of my month!

And yes, we still have ideas and experiences to share - and much to discuss.

We can consider holding Round 2 at the same time on Wednesday (4pm Nairobi | 1pm UTC | 9am EDT | 6am PDT), so that option is included in the poll. Please respond by 3pm UTC on Friday so I can announce the next round by the weekend.

@bashir, the issues with using REST for loading/extracting entities in the context of doing DB sync are the same even if you used FHIR; there is a lot of OpenMRS internal logic that is hidden from the REST/FHIR APIs and that impacts values stored in the database, so you would miss all those values during a sync.

@raff, sorry I didn't respond to one of your questions. No, we didn't hit any dead end with DB triggers, and we haven't run any stats against them; we just didn't want to re-invent the wheel if the binlog is already doing something similar. The sync record lives in a table within each OpenMRS DB whose columns are the affected entity's identifier (usually a uuid), the operation (insert/update/delete), the table name, and a timestamp of when it happened; the after insert/update/delete triggers populate this table when these events occur.