GSoC 2020: Expose System Metrics For Monitoring

Hi @ibacher @mseaton @judeniroshan

I found a slot that works for all of us: a call today, 18:30–19:30 CET (Central European Time). I hope all of you can join. But I'm not quite sure whether it will be possible for us to connect via Slack?


Time differences might be the constraint here, but it sounds cool. :slightly_smiling_face:

@joachimjunior the original Doodle is set to CET time :slightly_smiling_face:

Oh :smile: got you @ayesh

I can set up a Zoom for this if that would be more efficient. I’ve never done a multi-person call via Slack.

Sure, that would be great @ibacher

Here are the details for the Zoom meeting:

Ian Bacher is inviting you to a scheduled Zoom meeting.

Topic: Expose System Metrics

Time: Mar 27, 2020 01:30 PM Eastern Time (US and Canada)

Join Zoom Meeting

Meeting ID: 383 583 838



Hi @ibacher @judeniroshan, will you be joining the call?

@mseaton @ibacher @cintiadr @judeniroshan

What do you think about the two tools that @cintiadr mentioned? I think both are general-purpose metric instrumentation libraries.

I can see that Dropwizard can expose metric reports via JMX, or as CSV and JSON objects. These export options are already enough for most of the monitoring tools currently in use. But it seems Micrometer has built-in support for more monitoring backends, which is a plus point for it?

  1. Dropwizard :-
  2. Micrometer :-

Also, Micrometer does seem to be being adopted by Spring (in the form of Spring Boot), which might be a good argument for it. I’m pretty agnostic about the framework chosen, though; I just want to make sure we aren’t reinventing the wheel.
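To make the comparison concrete, here is a minimal, stdlib-only sketch of the core idea both libraries provide: a named, thread-safe counter registry. This is purely illustrative (the class and metric names are mine, not from either library); Micrometer's real API is similar in spirit, e.g. `registry.counter("patients.created").increment()` against a `MeterRegistry`.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Conceptual sketch of a metrics counter registry, the core abstraction
// that both Dropwizard Metrics and Micrometer build on.
public class MetricRegistrySketch {
    private final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

    // Fetch or create a counter by name, then bump it.
    public void increment(String name) {
        counters.computeIfAbsent(name, k -> new LongAdder()).increment();
    }

    // Read the current count; unknown metrics read as zero.
    public long get(String name) {
        LongAdder c = counters.get(name);
        return c == null ? 0 : c.sum();
    }

    public static void main(String[] args) {
        MetricRegistrySketch registry = new MetricRegistrySketch();
        registry.increment("patients.created");
        registry.increment("patients.created");
        System.out.println(registry.get("patients.created")); // 2
    }
}
```

What the real libraries add on top of this is the export side: Dropwizard's reporters (JMX, CSV, JSON via its servlet) and Micrometer's backend-specific registries.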


+1 to what @ian said. I’d probably lean towards Micrometer for the reasons above, but I haven’t done any actual analysis of either.


Thanks all for the feedback :slightly_smiling_face:

Hi @ibacher @mseaton @mozzy @judeniroshan

I was looking at the atom feed module to understand how it captures events related to object types.

I actually found a Hibernate interceptor which captures the DB events and serves them for the atomfeed module, which surprised me a little, because I thought the events were served through the Events module's ActiveMQ message broker.

A few questions I have:

  1. Is the Events module actually used by the atomfeed module? If so, where?
  2. I think we can reuse the same flow for our module up to the point of saving the event to the DB, because it has a mechanism to queue the events and perform the DB save transactions one by one. That is a good advantage for us, since a module that captures data points must make sure not to lose any of them at any point. What do you think — are we going to use the same flow up to saving the event details in the DB? The events table looks like below.

The data points that we are going to add, beyond what atomfeed already captures, can be added easily with the way they have developed it:

  1. Include the OpenMRS class of the object type in the atomfeed config.
  2. If there is no sub-resource builder, make sure to add one.

Then it will start capturing events for the new class we included.

But one thing I noticed: it's using transaction classes from the ICT4H commons module. I am not sure whether that is fine or whether we would have to migrate them to the OpenMRS organization.

  1. What are the data points we can capture other than what atomfeed does?

Here is the list of classes that are included for now.

I see some of them marked as enabled: false — any particular reason?

  1. Why do we have the api, api-1.9, and api-2.0 separations?

Hi @ayesh!

Thanks for your digging on this.

Yes, the Wiki says that atomfeed runs on the Events module, but it looks like that was never actually implemented. The Hibernate interceptor is basically identical across the two, except that Events uses the afterTransactionCompletion() event and checks that the transaction was completed before firing notifications, whereas Atom Feed uses the beforeTransactionCompletion() event and seems to generate events regardless of whether the transaction was committed or not (the change was made in this commit). This seems to have been related to some funkiness with accidentally creating a sub-transaction to store data in the feed, but there’s a lot more in this Talk post. It appears that the future state Darius alluded to was never implemented. That might be worth reviving.
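The difference between the two callbacks can be shown with a toy transaction (this is a self-contained sketch of the ordering, not the actual Hibernate interceptor; all class and method names are mine, chosen to mirror Hibernate's callback names):

```java
import java.util.ArrayList;
import java.util.List;

// Toy transaction mimicking the Hibernate callback ordering discussed above:
// beforeTransactionCompletion() runs before the outcome is known, while
// afterTransactionCompletion(success) runs once it is known.
public class TxCallbackSketch {
    public final List<String> log = new ArrayList<>();

    public void run(boolean commit) {
        // Fires whether or not the transaction will commit — so firing
        // notifications here can publish events for rolled-back work.
        log.add("beforeTransactionCompletion");

        log.add(commit ? "commit" : "rollback");

        // Only at this point can a listener check the real outcome.
        log.add("afterTransactionCompletion(success=" + commit + ")");
    }

    public static void main(String[] args) {
        TxCallbackSketch tx = new TxCallbackSketch();
        tx.run(false);
        // Note "beforeTransactionCompletion" appears even on rollback.
        System.out.println(tx.log);
    }
}
```

This is why the Events module's check-after-completion behavior is the safer default when listeners must only see committed data.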


  1. Apparently, no. I still think having a single solution here is worthwhile, but it seems the Events module could use some attention.

  2. One of the advantages of using the events module and ActiveMQ is that we don’t have to implement custom queuing. I would think either migrating ActiveMQ in the events module to use the database for storage or building off the AtomFeed module are the right ways to address this.

  3. AtomFeed’s main use-case (as I understand it) is for sharing data among the various components of Bahmni and as part of Sync 2.0. The classes excluded are probably not commonly shared via those interfaces (admittedly Order is a bit of a weird one).

  4. Backwards compatibility? api-1.9 is only applicable when run in versions < 2.0.0; api-2.0 is applicable in versions > 2.0.0.


Hi @ibacher @mseaton

Yeah, it seems in atomfeed we have beforeTransactionCompletion() firing the events.

But when it comes to serving from ActiveMQ we have a small problem again: beforeTransactionCompletion() will fire when an event happens, and it will be synchronously delivered to subscribers. And I saw Darius mentioned we need a callback to make sure the transaction actually committed. I think that is the callback we need.

@mseaton mentioned in this ticket: "If there is a compelling reason to make the callbacks after transaction completion, then I'd think the workaround would be to fix the service method that saves the feeds to be annotated with @Transactional with the propagation attribute set to REQUIRES_NEW, and let Spring deal with the transaction management as opposed to doing it manually like the current code."

But if we move forward with the Events module and ActiveMQ, will it still be possible to depend on the @Transactional annotation? That is, in a module which receives events (similar to atomfeed) but depends on the Events module and ActiveMQ, will this make sure only committed transactions are saved to the feed?
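The after-commit callback being discussed can be sketched in plain Java (this is an illustration of the pattern only — all names here are mine; in real Spring code the equivalent hook is `TransactionSynchronization#afterCommit`, registered via `TransactionSynchronizationManager`):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of an "after commit" hook: listeners register work that runs only
// if the transaction actually commits; on rollback the work is discarded.
public class AfterCommitSketch {
    private final List<Runnable> afterCommit = new ArrayList<>();
    public final List<String> published = new ArrayList<>();

    // Queue a callback to run once (and only if) the transaction commits.
    public void registerAfterCommit(Runnable callback) {
        afterCommit.add(callback);
    }

    // Commit path: run the queued callbacks; rollback path: drop them.
    public void complete(boolean commit) {
        if (commit) {
            afterCommit.forEach(Runnable::run);
        }
        afterCommit.clear();
    }

    public static void main(String[] args) {
        AfterCommitSketch tx = new AfterCommitSketch();
        tx.registerAfterCommit(() -> tx.published.add("patient-created"));
        tx.complete(false); // rollback: nothing is published
        tx.registerAfterCommit(() -> tx.published.add("patient-created"));
        tx.complete(true);  // commit: the event is published once
        System.out.println(tx.published); // [patient-created]
    }
}
```

With a hook like this, only committed transactions ever reach the message broker or the feed table, which is the guarantee being asked about here.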

Hi All

Any ideas about this?

cc :- @ibacher @mseaton @judeniroshan

@ayesh I think it depends on our use cases, the way we are handling the events, and what we are doing with them. My advocacy in the past for moving event firing into beforeTransactionCompletion was due to use cases I saw the Events module being used for that involved synchronous listening for events and performing additional operations within the OpenMRS system (e.g. module A saves an object, causing an event to fire, which module B listens for to save additional data related to that object). In this case we want all of these save operations to succeed or fail as a unit. In other cases, asynchronous handling is preferred.

I think the question is more around what we are trying to monitor, and how will that data be consumed. Based on that, we can determine if we need logging to be done within the transaction, outside it, or both.

My guess would be that for monitoring purposes, we mostly care about data after the transaction is committed and any data related to monitoring would likely want to be stored separately from the data (i.e., I would lean towards failing to log the monitoring data rather than failing the main transaction).

@ibacher I actually agree with you; I think we should capture only committed transactions.

By the way, as @mseaton mentioned, can we define the set of data points that need to be monitored?

I see most of the data points are already being saved by the atomfeed module to the events table.

For example: the number of patients registered to the platform within a given period (can be taken with a query against the events table). But the problem will be performance again, since we are dealing with SQL queries.

@ibacher @mseaton @judeniroshan

Hi all

Since we need completed transactions, we can't move forward with the atomfeed module, so I thought of having a look at the EMR-Monitor module. I built it locally; it had a lot of bean creation issues, so I unwatched the module from the server I am currently running.

But the weird thing is, after I unwatched it and started the server again from the SDK, it's still copying the .omod of the emr-monitor module.

It affects my SDK as well. When I try to create a new server, the first thing the SDK does is report that it found the OpenMRS module and copy it from the local Maven repository to the new server, so the error keeps popping up on new servers too.

Finally, I deleted the repository I had downloaded, which removed the emrmonitor dependency from sdk:setup.

Hmm — with all other modules, once we unwatch them there is no further interaction with the module, even after restarting the server. How can we fix this issue @mseaton?