Bahmni multiple instances (EMR only)

We are interested in running 2 independent Bahmni instances on 1 server. Has anyone here succeeded in this, or are there any tips?

As far as I can tell, the implementation_name install variable is not meant for this purpose? I also looked at copying .WAR files and MySQL databases, though I suspect that will not be enough for Bahmni…

I have already looked here:

https://talk.openmrs.org/t/how-do-i-force-a-2nd-installation-of-openmrs-to-a-specific-directory/
https://talk.openmrs.org/t/weird-tomcat-behaviour-when-running-two-instances/

I think it will require configuration in too many places, and an upgrade might then cause issues. You would be better off running Bahmni inside a VM/Vagrant box, and then you can place two VMs on the same machine. That way the configurations inside the Vagrant boxes will be identical, and only the port mappings from the host into each machine will differ.

Related Links:

  1. https://bahmni.atlassian.net/wiki/display/BAH/Bahmni+Virtual+Box

  2. https://stackoverflow.com/questions/23888381/how-to-run-several-boxes-with-vagrant
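The multi-VM idea from the second link can be sketched as a shell snippet that generates a two-machine Vagrantfile. Everything here is illustrative: the box name "bahmni/centos" and the forwarded ports are placeholders of ours, not Bahmni defaults.

```shell
# Sketch only: write a two-VM Vagrantfile to a scratch directory.
# "bahmni/centos" and the host ports 8091/8092 are placeholder values.
mkdir -p /tmp/bahmni-multi
cat > /tmp/bahmni-multi/Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  # One VM per Bahmni instance; only the host-side port mapping differs.
  { "bahmni1" => 8091, "bahmni2" => 8092 }.each do |name, port|
    config.vm.define name do |node|
      node.vm.box = "bahmni/centos"   # placeholder box name
      node.vm.network "forwarded_port", guest: 443, host: port
    end
  end
end
EOF
# With VirtualBox available, both VMs would come up with:
#   vagrant up bahmni1 bahmni2
```

Inside each VM the Bahmni configuration stays identical; only the host-to-guest mapping changes, as described above.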

Thanks for your quick reply. We are on Amazon, but Vagrant uses VirtualBox, which does not work on AWS EC2 instances.

If you are running on Amazon, then why don’t you run the two Bahmni instances on separate EC2 instances? That would be much easier. Any specific reason you want to run both on a single EC2 instance?

All of the Bahmni team’s demo and QA instances run on Amazon… each on its own AWS machine. It’s much easier to manage that way… the same scripts can be reused across machines.

Can you please explain more about your requirement / use-cases?

To clarify, this is not for production use but only for our CI workflow. Such instances will be very much under-utilised: very few logins, and not very often. We have had very good experience with t2.large instances for this; they can comfortably handle several instances of OpenMRS (2 or 3, plus Jenkins, etc.). We have found this to be a very cost-effective way to bundle quite a few CI-related services together.

With distros other than Bahmni it is easy enough to leverage the SDK or Docker, but we started scratching our heads over Bahmni.

Unless you have a better suggestion, and as a workaround to being unable to use Vagrant or Docker, we were thinking of forking the Bahmni Ansible playbook to make it install several Bahmni instances, in a well-compartmentalised fashion, on the same CentOS instance.
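To make the idea concrete, such a fork might be driven by a per-instance inventory. This is purely a sketch: the variable names (emr_port, emr_db, emr_data_dir) are our own placeholders, not variables the actual Bahmni playbook defines.

```shell
# Sketch: a per-instance inventory for a hypothetical forked playbook.
# emr_port, emr_db and emr_data_dir are made-up placeholder variables,
# NOT variables that the real Bahmni Ansible playbook understands.
mkdir -p /tmp/bahmni-ansible
cat > /tmp/bahmni-ansible/inventory.ini <<'EOF'
[bahmni-emr-1]
localhost emr_port=8051 emr_db=openmrs1 emr_data_dir=/home/bahmni1/.OpenMRS

[bahmni-emr-2]
localhost emr_port=8052 emr_db=openmrs2 emr_data_dir=/home/bahmni2/.OpenMRS
EOF
```

The point is the compartmentalisation: each instance gets its own port, database and data directory, so the playbook tasks can be templated per group.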

Hi, I think Docker would be OK on AWS instances, while Vagrant won’t work. Have you tried installing Bahmni in a Docker container?

Yes, Docker works fine on AWS, but I believe that the Bahmni team has discontinued this effort. Quote:

Note that Docker based setup may seem complicated, and difficult, for people who have never worked with Docker.

We are basic Docker users; we haven’t yet developed the set of skills that would allow us to throw ourselves at Dockerizing Bahmni (especially within the time frame and delivery schedule we are looking at). I would agree with the quote: it feels like quite a Docker Compose challenge to get the whole platform Dockerized.
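For an EMR-only stack, the Compose challenge shrinks to roughly a database plus the OpenMRS webapp. As a rough sketch only: the image names and ports below are hypothetical placeholders (there is no published Bahmni image assumed here), written to a scratch directory via a heredoc.

```shell
# Sketch: an EMR-only Compose file. "local/bahmni-emr" is a
# hypothetical image you would have to build yourself; ports and
# credentials are illustrative placeholders.
mkdir -p /tmp/bahmni-compose
cat > /tmp/bahmni-compose/docker-compose.yml <<'EOF'
services:
  db:
    image: mysql:5.6                 # MySQL, as Bahmni EMR uses
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
  openmrs:
    image: local/bahmni-emr:latest   # hypothetical, wraps openmrs.jar
    depends_on:
      - db
    ports:
      - "8443:443"                   # change per instance, e.g. 9443:443
EOF
# Two independent copies could then run under distinct project names:
#   docker compose -p emr1 up -d
#   docker compose -p emr2 up -d    # with a port override for the 2nd copy
```

Distinct Compose project names keep the containers, networks and volumes of the two instances separate on one host.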

For multiple reasons we feel more comfortable, at least for now, with extending the Ansible playbook, even if this would have to stay on our own fork. This is mainly because of possible backward-compatibility issues with the Bahmni installer currently used on multiple production instances. We’d be curious to get @gsluthra’s take on this.

For people new to or unfamiliar with Docker… it is not easy. We did Dockerize Bahmni a year ago… but realised that people in the community struggle with it when something doesn’t work right, and it was difficult for us to remotely support/debug Docker issues. Plus, Docker compatibility across OSes is difficult to support. If we were doing cloud-based deployments of Bahmni, and therefore had control over the environments, we would have gone the Docker way.

I think if you are ONLY using the Bahmni EMR part, then Docker might work out fine for you. You could do a time-boxed effort of two to three days to make it run on Docker. Alternatively, since it’s EMR only, even making Ansible changes won’t be difficult for you if you are comfortable with that route.

Docker definitely has the advantage of seamlessly encapsulating a piece of software, making it appear as if it has its own little Linux OS in which it is the only running app. So many things become easier once your Docker containers/configuration are done.

Thanks @gsluthra. OK, we have both options open right now. I guess we will go the Ansible route first, as it converges with other Ansible work that we do anyway. But in any case we can certainly fall back on leveraging Docker Compose for Bahmni EMR only.

Hi, the following are some ways of running multiple Bahmni instances on the same machine, assuming only the EMR part of Bahmni. This is an advanced kind of installation, and I hope you know the basic workings of the various components. This is in no way a recommendation; please do it at your own risk.

  1. Docker way: Bahmni EMR uses Apache as the webserver, openmrs.jar (/opt/openmrs/lib/openmrs.jar) as an embedded Tomcat with the OpenMRS context paths set, and MySQL as the database. Packaging these components, you can make a fat container. The bahmni-docker project has some references on how to do it. It’s pretty old; use at your own risk.

  2. Traditional way: We can have 1 Apache, 1 MySQL server (with 2 databases), and 2 openmrs.jar files (/opt/openmrs/lib/openmrs.jar, each an embedded Tomcat running on a different port) on the same machine. Two copies of bahmniapps can be made available (/var/www/bahmniapps & /var/www/bahmniapps2, configured in ssl.conf (/etc/httpd/conf.d/ssl.conf)). As @raff pointed out, you can set OPENMRS_APPLICATION_DATA_DIRECTORY to point each instance at a different .OpenMRS folder.

Please note that both of these are advanced installations. You will need to try it out once and see.
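The compartmentalisation in the traditional approach (option 2) might be sketched like this. The /tmp prefix is purely illustrative scaffolding; only OPENMRS_APPLICATION_DATA_DIRECTORY and the standard paths (/opt/openmrs/lib/openmrs.jar, /var/www/bahmniapps) come from the description above, and the port handling is left to each instance’s own configuration.

```shell
#!/bin/sh
# Sketch of the option-2 layout: two data dirs and two bahmniapps copies.
# The /tmp/bahmni-two prefix is illustrative; a real install uses
# /home, /var/www and /etc/httpd directly.
set -e
ROOT=/tmp/bahmni-two
mkdir -p "$ROOT/home/bahmni/.OpenMRS"  "$ROOT/home/bahmni2/.OpenMRS"
mkdir -p "$ROOT/var/www/bahmniapps"    "$ROOT/var/www/bahmniapps2"
# Each instance would then be started with its own data directory, e.g.:
#   OPENMRS_APPLICATION_DATA_DIRECTORY=$ROOT/home/bahmni2/.OpenMRS \
#     java -jar /opt/openmrs/lib/openmrs.jar   # port set per instance config
ls "$ROOT/var/www"
```

Apache (ssl.conf) would then map one virtual path or port to each bahmniapps copy, with one MySQL server holding both databases.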


Thanks @bharatak, I think that we are going to start with option 2. Or rather: adapt the Ansible playbook so that it allows for it. We will certainly ping you with more questions as we go.

Thanks for the heads-up on the disclaimers; we fully understand, and to repeat: none of this will ever be used in production.

Thanks for all the feedback; I have a clearer idea of which path to take now.