Dockerizing development and deployment of OpenMRS


I’ve just discovered that this has already been addressed by @ibacher. So the plan is triggered whenever changes to O3 components are made, and everything is deployed. Thanks @ibacher!


I know you’ve seen this, @raff, but I want to ensure this Talk thread gets linked to a related discussion on Talk:


Hello @raff, the Bahmni team is using openmrs/openmrs-distro-platform:2.5.7 as the base image, and we have enabled multi-architecture builds in GitHub Actions. The builds succeed and the Docker images now include both architectures, but OpenMRS still does not come up properly on Mac M1 machines. We also tried the openmrs/openmrs-distro-platform:2.5.7 image directly on an M1, but that didn’t start either. Do you have any suggestions or experience running OpenMRS containers on Mac M1? Would you suggest trying any other base image on M1?
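A multi-arch build of the kind described above can be sketched with Docker Buildx; the image name below is a placeholder, not the actual Bahmni repository, and the command is printed first so it can be inspected before running:

```shell
# Hedged sketch: building one image for both Intel and Apple Silicon (M1).
# IMAGE is a hypothetical name; replace it with your real repository/tag.
IMAGE="example/openmrs-distro:latest"
PLATFORMS="linux/amd64,linux/arm64"

# Compose the buildx invocation; echoing it keeps the sketch inspectable
# without requiring a Docker daemon or registry credentials.
buildx_cmd() {
  echo "docker buildx build --platform $PLATFORMS -t $IMAGE --push ."
}

buildx_cmd
# To actually run it (requires Docker with buildx and a registry login):
#   eval "$(buildx_cmd)"
```

Note that a multi-arch manifest only guarantees the right binary is pulled; it does not by itself fix runtime issues on M1, which is consistent with what is reported above.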

cc. @gsluthra @angshuonline @binduak @shobanadevi @n0man @mradul.jain

Thanks.


@mohant Could you please share logs? Have you seen O3: M1 machine support - #8 by dkayiwa?

Basically, try increasing the available memory, and make sure not to hit any REST endpoints before the app has fully started up, due to an issue in the backend.

I’m able to run it with 8 GB of memory, 4 vCPUs, 1 GB of swap, and a 160 GB disk image.


@raff I have 7.90 GB memory, 5 CPUs, 1 GB swap, and a 59.6 GB disk image. Earlier I was able to run OpenMRS even with less memory, CPU, and disk than this. I will try your suggested settings and report back.

@raff I tried with these settings (8 GB memory, 6 CPUs, 1 GB swap, 160 GB disk image size) and OpenMRS is still not up. I have shared the logs: openmrs.txt (21.0 KB). Could you please check?

@shobanadevi from the logs it appears you have started the plain platform and it is up and running, with no unexpected issues. The two errors are bugs in webservices.rest / owa, but they do not affect the system. Did you start it from GitHub - openmrs/openmrs-distro-platform: This project is used to package the core OpenMRS war file with bundled modules? If yes, then you should be able to access it via http://localhost:8080/openmrs. Otherwise, please share the steps to reproduce your issue and how you concluded it is not working.


Could you say something about that bug? I’ve actually been using, e.g., http://localhost:8080/openmrs/ws/rest/v1/session to check whether the app is up or not…
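Given the startup bug mentioned above, a readiness probe could poll a plain page instead of a REST endpoint. A minimal sketch, where the login page URL, retry budget, and function names are all assumptions rather than anything OpenMRS ships:

```shell
# Hedged sketch of a readiness probe that avoids REST endpoints until the app
# has fully started. URL and retry count are assumptions.
wait_for_openmrs() {
  # $1 = URL to poll, $2 = max attempts, $3 = probe command (injectable for tests)
  local url="$1" retries="${2:-30}" probe="${3:-probe_http}" i=0
  while [ "$i" -lt "$retries" ]; do
    if "$probe" "$url"; then
      echo "up"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "timed out"
  return 1
}

# Default probe: succeed only on HTTP 200 from a plain page, not /ws/rest/...
probe_http() {
  [ "$(curl -s -o /dev/null -w '%{http_code}' "$1")" = "200" ]
}

# Example (requires a running instance):
#   wait_for_openmrs http://localhost:8080/openmrs/login.htm
```

The probe command is passed as a parameter purely so the loop can be exercised without a live server.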

Here you go: [RA-1769] - OpenMRS Issues

@raff Thank you. Yes, it is working now when accessed via http://localhost:8080/openmrs.

Hi @raff,

I saw you in the core team video at the OpenMRS 23 implementers conference, so you seem to be the right person for my question: are there concrete next steps or a Confluence page for the backend topic “Horizontal Scaling”?

According to the O3 Roadmap, it appears to still be in the planning state, listed under Horizontal Scaling Bottlenecks with the tags Seeking and Open: Product Dashboard: OpenMRS Product Vision, Strategy, & Roadmap - Projects - OpenMRS Wiki

I mainly found these links from previous investigations:

  - Playing around with OpenMRS in the cloud - Development - OpenMRS Talk
  - Support for Clustering - Documentation - OpenMRS Wiki
  - [TRUNK-314] Investigate about support for Servlet Container Clustering - OpenMRS Issues

But I am curious whether there is a more recent or up-to-date list of next steps, or whether it is simply on the plan to be developed sometime.

Many thanks, Johannes


Hi @johnny94,

It’s on the plan to be developed sometime. Are you personally interested in scaling OpenMRS? Is it affecting your implementation? Is it for performance or high-availability?

I could definitely provide directions on how to approach this, but so far there has not been enough interest from implementations.

OpenMRS workloads are mostly DB-intensive, so if it is about performance, the first thing to do would be to scale up the DB instance or introduce DB clustering. I would assess this as a medium-sized task.

On the other hand, if we want to achieve high availability, then we would also need to be able to run at least 2 instances of OpenMRS behind a load balancer. That requires changes in openmrs-core, as we would need to move away from local data stores for caching and the search index to a distributed cache and search index. I would assess this as a complex task.


Many thanks for your input already - especially the hint to focus on scaling up the database for a performance quick win!

I am currently more personally interested, but we also use OpenMRS for project implementations in my company - though it is unclear how much work time I can spend on this. Currently it is not affecting an implementation, but I have already been asked about it several times. Performance and high availability are important, but the application core must support them as well.

Everything outside OpenMRS is “basically clear” to me and needs separate dedicated time - but I would be happy if you could give me some directions on the internals of openmrs-core before I dig in further.

I have already started some brainstorming, with space for comments or extensions from your side:

Many thanks, Johannes


If we would like to run multiple instances of OpenMRS core, we would need:

  1. A load balancer in front with sticky sessions, e.g. nginx (if self-hosting) or whatever the cloud provider offers.
  2. Hibernate Search configured to use Elasticsearch instead of the built-in Lucene.
  3. Ehcache (used for the Hibernate cache and Spring cache) configured for replication between nodes.
  4. Migration of in-class caching used in openmrs-core and modules (e.g. HashMaps) to Spring cache.
  5. Distributed storage for files stored by modules in the OpenMRS Application Data directory.
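For point 2, a Hibernate Search 6-style configuration might look like the fragment below; the exact property names depend on the Hibernate Search version actually bundled with openmrs-core, and the hosts are placeholders:

```properties
# Hedged sketch: switch the Hibernate Search backend from embedded Lucene
# to Elasticsearch (Hibernate Search 6 property names; hosts are examples).
hibernate.search.backend.type=elasticsearch
hibernate.search.backend.hosts=es-node1:9200,es-node2:9200
```

With an external index, every OpenMRS instance behind the load balancer would search and write against the same cluster instead of a per-node Lucene directory.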

Points 1-4 are to be done in openmrs-core, with a reference Docker setup provided via docker-compose.

Point 5 depends on the deployment environment. Clusters like Kubernetes, ECS, etc. provide shared volumes. Even docker-compose can re-use the same volume across different containers.
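Point 5 can be sketched with plain Docker by mounting one named volume into two containers; volume, mount path, and image names below are all hypothetical, and the commands are printed rather than executed so the sketch needs no Docker daemon:

```shell
# Hedged sketch of point 5: re-using one named volume for the OpenMRS
# Application Data directory across two app containers.
VOLUME="openmrs-data"
MOUNT="/openmrs/data"          # assumed data directory inside the image
IMAGE="example/openmrs:latest" # hypothetical image name

shared_volume_cmds() {
  echo "docker volume create $VOLUME"
  for name in openmrs-a openmrs-b; do
    echo "docker run -d --name $name -v $VOLUME:$MOUNT $IMAGE"
  done
}

shared_volume_cmds
# Run for real with: shared_volume_cmds | sh
```

On a single host this gives shared files but not high availability; in Kubernetes or ECS the equivalent would be a shared volume backed by network storage.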

It is a significant amount of adjustments and much more infrastructure to run, so if it’s mostly for performance reasons I would really recommend starting with the DB (optimising indexes and queries, increasing DB hardware, introducing DB replicas), then perhaps moving to Elasticsearch for the concept and patient search to take some load off the OpenMRS instance itself, and only then looking into OpenMRS instance replication as outlined above.


Many thanks @raff for your direction!

This helps me already further.

Best regards, Johannes

@raff Is there a strategy for sharing servlet sessions across multiple instances? Is this handled by Ehcache? While we don’t make heavy use of session attributes, they’re required for things like login and, with 2.6.0+, we’ll likely need to do something to share state for the CSRFGuard stuff (although I think that can use servlet sessions, so it might be solved by the above).

We’d also have to make some ecosystem adjustments as well, because we likely have modules outside of core that store data in memory that would need to be shared between instances.

@ibacher the easiest way to approach this is to use a load balancer with sticky sessions, which means that an established session will always go to the same instance.
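With open-source nginx, stickiness of this kind is typically achieved via `ip_hash` (cookie-based stickiness via the `sticky` directive is a commercial feature). A minimal fragment, with hypothetical instance host names:

```nginx
# Hedged sketch: IP-based sticky sessions across two OpenMRS instances.
upstream openmrs_cluster {
    ip_hash;                 # same client IP -> same backend instance
    server openmrs-a:8080;   # hypothetical instance host names
    server openmrs-b:8080;
}

server {
    listen 80;
    location /openmrs {
        proxy_pass http://openmrs_cluster;
        proxy_set_header Host $host;
    }
}
```

IP-based hashing breaks stickiness for clients behind a shared NAT, so a cloud load balancer with cookie-based affinity may be preferable where available.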


@frederic.deniger just curious: Is this load-balancing approach a need for ICRC?