Monolith or Microservice mindset?

Looking at the effort invested into migrating encounter diagnoses and conditions from the emrapi module into core, I sit back and reflect, asking myself, “Is it worth it?” This code had reached a level of stability while running from the emrapi module. Now that we have moved it, we have to deprecate it in the module and migrate existing data into new tables/formats, and there is a likelihood of introducing some kind of instability, despite our best efforts to avoid it. And in all this, there seems to be no value added from the end user’s perspective. If we continue with this approach, I can see core expanding into a huge monolith in years to come, with features or functionality that not every implementation requires, and no way of taking them out when they are not used.

One of the greatest challenges that I have seen implementations face is upgrading the core platform. These same implementations are most of the time running the latest releases of the various modules, confirming how much easier it is to deal with functionality separated out as modules.

The cohort builder bug has given us a bit of a headache because the fix in core would not be easily available to implementations which may not be ready to upgrade. If this feature lived in some sort of module, it would be a different story.

We all know how easy it is to fix bugs or add new features in modules and release almost immediately, without the hassle that comes with changes to core.

I am starting to think that our strategy should change: instead of moving services or groups of functionality into core, we should keep them as modules to reduce coupling and allow more flexible and faster release cycles. Though this is not building microservices in their true sense, the tendency is in the same direction and addresses some of the driving forces towards a microservices-based architecture.

Of course there are pros and cons to each approach. It could be that I am just excited and need to hear it from another angle to get it. :smile:

How to break a Monolith into Microservices


Thanks @dkayiwa for bringing this up; I totally agree with you. OpenMRS is already set up as a kernel (core) with modules as services. The release cycles also reflect this: core is annual, and the Reference Application (a stable distribution of select modules) is released twice a year.

So we should actually be taking more stuff out of core into modules that can evolve quickly.

I am thinking that should be the approach for OpenMRS Platform 3.0:

  • Core with patients + encounters + visits + observations + concepts
  • User management, providers, and cohorts in modules

On another tangent, for context: I have been working in the PHP world for a long time, and Symfony, the best-known framework there, took a similar approach in its latest release: http://fabien.potencier.org/symfony4-monolith-vs-micro.html. The latest release is pure joy to work with, since a basic setup requires only about 5 packages.

As a follow-up, I am thinking about an approach to get there: the first module to be implemented could be openmrs-module-legacyplatform, which provides the legacy bridge.

Another random thought: could the OpenMRS platform (kernel + web services) provide a ready-to-run jar file that can be leveraged as a backend for a system running on a Raspberry Pi? No UI needed, with an Android front end.
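To make the idea concrete, a headless deployment could boil down to something like the sketch below. Everything here is hypothetical: no such `openmrs-platform.jar` artifact or flags exist today; this is only what the developer experience might look like.

```shell
# Hypothetical: a self-contained platform jar (kernel + web services, no UI)
# run headless on a Raspberry Pi against a local MySQL/MariaDB instance.
# The jar name and the flags below are illustrative, not an existing artifact.
java -Xmx512m -jar openmrs-platform.jar \
  --db-url=jdbc:mysql://localhost:3306/openmrs \
  --port=8080

# An Android client on the same network would then talk to the REST API, e.g.:
#   http://<pi-address>:8080/openmrs/ws/rest/v1/patient
```

The appeal is the same as Symfony 4’s or Spring Boot’s: one artifact, minimal moving parts, and optional features pulled in only when needed.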

I tend to agree with you, with a caveat or two.

Personally, when I saw the project to migrate Conditions to core I sighed a bit because, as @dkayiwa said, it provides no new functionality but risks breaking our existing implementations (and I don’t think the ticket to migrate existing encounters has been completed yet, so until that is done it actually blocks us from upgrading to 2.2).

The downside is that we risk ending up with multiple implementations of domain concepts (like Conditions).

My gut feeling is that if we have functionality we believe belongs in core, we shouldn’t shy away from adding it to core from the start, even if we aren’t sure we have it 100% right. If an implementation then can’t wait for the latest version of core to be released, they can pull it out into a module to allow it to work with earlier versions of OpenMRS, with the migration path to remove it once the implementation upgrades built in from the start, instead of trying to harvest it years later.

Take care, Mark

I agree with both @dkayiwa and @mogoodrich. I think what we’re doing with this migration is what we have always envisioned; maybe we’re just not executing it properly. It’s good for new features to start out in modules, with the long-term goal of eventually moving them into core; this allows a feature to evolve and mature at a much faster rate, independent of core. Once it’s mature, we move it into core in a way that allows implementations already using it to seamlessly upgrade to the version of core where it was introduced. Where I think we went wrong with this migration was trying to do the modifications and the migration at the same time: we should have first implemented the modifications in the module, released a new version, and then moved it into core, which would pretty much have been copying and pasting.

@wyclif yes, but at the same time I would like to play devil’s advocate and radically say we need to re-think what is core to OpenMRS as an EMR.

IMHO, it’s only patients, encounters/visits, and observations… I’m thinking hard about global properties, but I can leave them in there. Locations/providers, probably…

Everything else, including user management, security, access control, routing, module support, etc., is actually an add-on module. Some may be required, but surely this means that core should not change that much and could probably remain stable across an annual release.

The core modules are what can evolve more rapidly, and they also allow trying out new things, like replacing security models, RBAC, etc.

I guess what I am saying is that when a feature is mature enough that we rarely make changes to it, it should always be moved to core, as long as it’s widely adopted by several implementations. On the other hand, if a feature is widely deemed optional and used by few implementations, I agree that it can always stay in a module.

@dkayiwa @mogoodrich Infrastructure is slowly moving to the cloud. We also need a proper road map for a cloud-specific architecture that ensures a highly available and reliable system to support the future needs of hospital management systems. I think we have reached a certain level of architectural decay in our core module. We could convert core OpenMRS to a service-based architecture and then slowly move toward a microservice-based architecture. That would make sure existing implementations have the flexibility to upgrade the platform, and it would help move us toward a more future-proof platform.

@prapakaran would you like to join our Technical Action Committee? https://wiki.openmrs.org/display/RES/Technical+Action+Committee


@dkayiwa Thanks for the invite and happy to join the meeting.


Thanks @prapakaran for joining today’s TAC and pushing on the important evolution toward a cloud-friendly, service-based platform. Can you provide some additional detail to your ideas of how you think we could move forward? Are there specific domains that you have in mind?

Two key activities I have been tracking that could provide some near term steps toward this goal:

  1. Our reporting & analytics vision includes having a dockerized data warehouse that runs in parallel with OpenMRS and could serve as our first “service”, a step toward a service-based architecture. This is especially the case because my expectation is not a one-way export of data from OpenMRS to a data warehouse, but a bi-directional relationship: data don’t just flow from OpenMRS to the data warehouse, but analytics derived in the data warehouse are available for use in OpenMRS (e.g., allowing a provider in OpenMRS to choose from a list of patients at risk of being lost to follow-up, where the risk is determined by calculations in the data warehouse).

  2. Our goal to come up with a standard/best practice approach to deploying OpenMRS using Docker. @mseaton has brought this up on prior TAC calls. This would not only promote best practices of containerization, but could greatly simplify deployments including cloud-based deployments.
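As a rough illustration of what such a standardized Docker deployment could look like, here is a minimal docker-compose sketch under some stated assumptions: the image name `openmrs/openmrs-reference-application-distro` and the `DB_*` environment variable names are taken to be those of the community Docker image, but any given distribution may differ, so treat the specifics as placeholders to check against the image’s documentation.

```yaml
# Minimal sketch of an OpenMRS + MySQL deployment as two containers.
# Image tags, environment variable names, and ports are assumptions.
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: openmrs
      MYSQL_USER: openmrs
      MYSQL_PASSWORD: openmrs
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - db-data:/var/lib/mysql   # persist data across container restarts
  openmrs:
    image: openmrs/openmrs-reference-application-distro:latest
    depends_on:
      - db
    ports:
      - "8080:8080"              # web app and REST API
    environment:
      DB_HOST: db                # container name doubles as hostname
      DB_DATABASE: openmrs
      DB_USERNAME: openmrs
      DB_PASSWORD: openmrs
volumes:
  db-data:
```

The point of standardizing something like this is that the same compose file (or a derived Kubernetes manifest) would run unchanged on a laptop, an on-premise server, or a cloud VM, which is exactly the portability argument for containerized deployments.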


If I could double-like this post I would. This makes so much sense. :+1: :+1: @prapakaran glad you could join today - we’re certainly interested to hear your further thoughts.