Performance issues with Bahmni EMR

One of our Bahmni implementations has a lot of patient and obs data (around 3,200,000 obs). Prod system configuration: 23 GB RAM, 12-core CPU. We have been facing performance issues for quite some time. At least once every 45 days OpenMRS stops responding; restarting the OpenMRS and MySQL services used to resolve the issue. We faced this issue with Bahmni 0.91. Recently we upgraded Bahmni from 0.91 to 0.92 and started observing the same issue again (even with the platform and httpd upgrades).

A similar issue was reported in the Bahmni community long back, and below is the JIRA card created in the product backlog - https://bahmni.atlassian.net/browse/BAH-41

Below are a few points from the JIRA card which apply to our implementation too:

  1. At peak times the load average usually touches 11 (the CPU has 12 cores) and users start complaining of extreme slowness.
  2. The memory keeps increasing over a few days until restart.

We have started debugging the issue and are leveraging Grafana graphs to analyse it. Meanwhile, if somebody has already noticed this or tried to fix it, your pointers would be much appreciated. Thanks!

@ajeenckya @arjun @mksrom @pramidat @ramashish @shivarachakonda @binduak @swetha184 @laxman @anandpatel @snehabagri @sushilp @sushmit @vmalini @akhilmalhotra @dipakthapa @pradipta @mddubey @rrameshbtech @iadksd @angshuonline @mwelazek @michaelbontyes @buvaneswariarun @praveenad @sanjayap @florianrappl @apaule @som.bhattacharyya @tejakancherla @rabbott @muhima08 @thomasrod @swedhan @kirity @dkayiwa


@binduak - for what it’s worth, we had a similar issue with our deployment (using an earlier version of Bahmni), and in our case the issue was related to the allocation of heap space. We had clusters of errors in /var/log/openmrs.log at times that coincided with slow performance: java.lang.OutOfMemoryError: Java heap space. We were able to resolve the issue by setting some JVM flags in Tomcat. We did read that the issue was fixed in more recent versions of OpenMRS, but I’m not sure of the exact version in which it was resolved.
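For anyone hitting the same errors, the kind of Tomcat flags described above can be sketched as below. This is a minimal example, not the poster's exact configuration: the setenv.sh path, heap sizes, and dump path are all assumptions and must be tuned to the host's RAM.

```shell
# Hypothetical $CATALINA_HOME/bin/setenv.sh fragment (paths and sizes
# are illustrative, not the values used in this thread).
# -Xms/-Xmx raise the initial/maximum heap beyond the JVM default;
# the dump flags capture a heap dump for analysis if an
# OutOfMemoryError still occurs.
export CATALINA_OPTS="$CATALINA_OPTS -Xms2g -Xmx6g \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/var/log/openmrs"
```

Tomcat sources setenv.sh on startup, so the flags take effect on the next service restart.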

@binduak

I am facing the same issue at one of my implementations. Prod system configuration: 64 GB RAM, 20-core CPU.

We are on Bahmni 0.93 now, but earlier on Bahmni 0.92 we didn’t encounter this issue.

Below are a few points from the JIRA card which apply to our implementation too:

  1. At peak times the load average usually touches 20 (the CPU has 20 cores) and users start complaining of extreme slowness.
  2. The memory keeps increasing over a few days until restart.
  3. We see a lot of these entries in the httpd access log file:

::1 - - [06/Jul/2022:13:18:34 +0530] “OPTIONS * HTTP/1.0” 200 - “-” “Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips mod_wsgi/3.4 Python/2.7.5 (internal dummy connection)”
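For what it’s worth, those "internal dummy connection" lines are Apache waking up its own child processes over the loopback address; they are generally harmless but can flood the access log and make it harder to spot real traffic. One common way to filter them out is a snippet like the following (a sketch for httpd.conf, assuming a stock `combined` log format; the log path is an assumption):

```apache
# Tag Apache's own wakeup requests, which arrive from the loopback
# address, and exclude them from the access log.
SetEnvIf Remote_Addr "^(127\.0\.0\.1|::1)$" internal_dummy
CustomLog "/var/log/httpd/access_log" combined env=!internal_dummy
```

Note this only declutters the log; it does not by itself explain the slowness.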

If you have found a resolution to this issue, please do share it with us to help us out here.

@Ritesh, we are doing some testing on a 0.93 env. I suggest checking with Ramkumar G on Slack. Also, have you checked this thread?

Please share your problem/details on Slack in the #performance channel. We have recently migrated Bahmni to a newer version of OpenMRS (v2.1 → v2.4). This seems to have made Bahmni performance much better, although more testing is in progress. Links:

  1. Slack: https://bahmni.atlassian.net/wiki/spaces/BAH/pages/414646273/Communication+Channels+and+Tools+Discourse+Slack
  2. Performance Testing plan: https://bahmni.atlassian.net/wiki/spaces/BAH/pages/3038445574/Performance+Benchmarking+and+Capacity+Planning