We’re working on installing Bahmni on Azure using Azure Container Registry (ACR) and Azure Container Instances (ACI), and we’d appreciate your guidance. Let me share our progress and the challenge we’re currently facing.
We noticed that Bahmni has a supported Docker installation, so I started by testing the standard Docker Compose setup locally. After confirming it worked, I modified the Docker Compose file to be ACI-compliant. I then created an ACR in Azure, pushed the images, and successfully deployed all the Bahmni services—16 in total—into an ACI instance. The services initially start correctly, and I was able to connect to the necessary mounts (e.g., for patient-documents) using Azure File Share. I also created the database in Azure MySQL, though I had to do this manually because Liquibase failed to initialize against Azure MySQL. All of this was a significant effort.
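For reference, the Azure File Share mount in our ACI deployment YAML looks roughly like the sketch below; the image name, mount path, resource values, and storage account details are placeholders, not our exact configuration.

```yaml
# Sketch of an ACI container-group YAML with an Azure File Share mount.
# All names, paths, and values below are illustrative placeholders.
apiVersion: '2021-10-01'
name: bahmni-aci
location: eastus
properties:
  osType: Linux
  containers:
    - name: openmrs
      properties:
        image: <registry>.azurecr.io/bahmni/openmrs:latest
        resources:
          requests:
            cpu: 2
            memoryInGB: 4
        volumeMounts:
          - name: patient-documents
            mountPath: /home/bahmni/patient_images   # illustrative path
  volumes:
    - name: patient-documents
      azureFile:
        shareName: patient-documents
        storageAccountName: <storage-account>
        storageAccountKey: <storage-key>
```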
The issue we’re stuck on now is an error in the bahmni/proxy container. I’m seeing the warning AH00558: httpd: Could not reliably determine the server’s fully qualified domain name, followed by a “server name cannot be defined” message in the logs. After this warning, the proxy container stops, and because the other services depend on it, they stop as well, bringing down the entire system. I’ve tried setting the ServerName directive in httpd.conf, plus perhaps 20 other approaches, and none of them solved the problem.
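For completeness, the most common form of the fix I tried was adding a global ServerName directive to the proxy’s httpd.conf, roughly like this (the host name is a placeholder):

```apache
# httpd.conf — one of the variants I tried; the warning persisted
# and the container still exited.
ServerName bahmni.example.com
```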
I understand there’s a supported installation for AWS, and we’d be eager to contribute documentation for a supported Azure installation if we can get it running on Azure.
Has anyone encountered this AH00558 warning in a cloud deployment, and can you suggest a way to set ServerName? After we solve this, what other issues might we run into that I have not found yet?
Thank you for any insights or guidance you can provide!
That’s a very generic error. Could you share the entire container logs from the ACI instance for the initially failing container? That would give the Bahmni folks more insight, including health statuses.
Yeah, no problem. I took a shot hoping someone would have done this before. I’ll gather more information and reply to the group. Thank you for the quick response.
Upon further review, when OpenMRS starts, the log shows it progresses to “Updating the search index… It may take a few minutes,” and then it stops there. Initially, I took the log at its word, assuming it would take a few minutes and that it was the last message before the server was ready. However, after doing more research, I realized this is a very common problem—or at least was a very common problem—for which I have not found any resolution. I’m using the standard sample data with a Docker install, and so far, I am at two hours of indexing. I see mentions of this from 2020 to 2024. I’m unsure what the current status of this is. When I was using the system entirely off the Docker instance, I did not have this problem. Moving the database to Azure MySQL and some of the files to Azure File Share has triggered this.
My second question is about our installation on Azure, where we are using the Azure Database for MySQL. Despite my efforts, I could never get Liquibase to create the OpenMRS tables. I could see the two Liquibase tables themselves being created, but when it came time to create the rest of the tables, no matter what I did, it failed. I think it has something to do with the fact that in Azure, using the Flexible MySQL Database, we don’t have access to the root user. We have access to a user with root-like permissions, but not the actual root user, because the database is managed by Azure. For testing, I got this to work by backing up an existing database and restoring it, but this is not going to work in production because we will not be able to perform database upgrades. Any information on this? Thanks in advance for your time.
Before going too far down a rabbit hole, the answer is probably in the server logs for your OpenMRS instance. If you’re able to share the log file via something like Pastebin or the like, it would facilitate the community’s ability to answer your questions.
OpenMRS generally doesn’t need root access to the server unless you ask OpenMRS to create the database. As long as you’ve created a database for it, you should be able to supply the credentials for a user with the ability to create tables, indices, views, etc. within the database.
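For example, something along these lines is usually enough on MySQL (the database name, user, and password here are illustrative):

```sql
-- Illustrative only: a dedicated database plus a non-root user that
-- can create tables, indices, views, etc. within that database.
CREATE DATABASE openmrs DEFAULT CHARACTER SET utf8;
CREATE USER 'openmrs_user'@'%' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON openmrs.* TO 'openmrs_user'@'%';
FLUSH PRIVILEGES;
```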
How long search indexing takes can ultimately depend on how many concepts (and more accurately how many concept names) you have in your installation as well as how many resources are available for the indexer.
Ian, thanks for your reply. The docker-compose/.env files have placeholders for root passwords; that is what led me astray. If they are only required for database creation and not table creation, I’ll try a few more times before taking up more of the community’s time. As for the indexing time, I am just getting familiar with the system and am using the standard demo database. I’m assuming this only happens during the initial setup, and once we get through that, the index is maintained alongside the database. Is the index stored locally?
Thank you for your ongoing support. I’m seeking detailed guidance or documentation on configuring Bahmni to meet our deployment requirements. Our plan is to leverage Azure Container Registry (ACR) and Azure Container Instances (ACI) to deploy and scale Bahmni containers in Azure. Given that these containers are ephemeral, we need to configure the application’s indices to persist on an Azure File Share, similar to other permanent files.
I’ve reviewed the Docker Compose and environment configuration files but couldn’t identify a clear setting to direct the indices to an Azure File Share. I suspect the solution lies within the source code or advanced configuration options. Could you point me to relevant documentation, configuration examples, or specific files where this can be addressed?
Context: We are a healthcare/AI startup evaluating Bahmni as a core component of our integration strategy. My primary expertise is in Windows/C# development, with Java as a secondary proficiency, so I’m navigating the Java-based ecosystem of Bahmni with some learning curve. Our immediate directive, among many competing priorities, is to get Bahmni operational for a thorough evaluation. Following a successful evaluation, we plan to allocate dedicated resources to maintain and extend the platform.
Any assistance with configuration details, particularly around persisting indices to Azure File Share, would be deeply appreciated at this stage. Thank you in advance for your expertise and support.
Not to do a rug-pull here, but I’m really on the OpenMRS side of things and not the Bahmni side. Perhaps @mohant or @rahu1ramesh would be able to guide you to the right place in the Bahmni docs?
Thanks for the advice. I attempted to have the system auto-create the tables and received the same error as always. The logs can be found at the link below. The MySQL database is in Azure; it is a managed instance running MySQL version 8.0. The two Liquibase tables get created, but that is all; the rest is in the log.
Thank you @ibacher for promptly bringing this to our attention.
Hi @slomicka,
Let me take a moment to address and clarify a few of the queries you raised:
OpenMRS Schema Initialization
When starting Bahmni with a fresh (empty) database, it is important to ensure that OpenMRS is allowed to initialize the database schema. This can be achieved by setting the environment variable OPENMRS_DB_CREATE_TABLES='true' in your configuration file—either .env or .env.dev, depending on your setup. Without this flag, OpenMRS will not attempt to create the necessary tables on startup, which appears to be the cause of the issue noted in your logs where the global_property table is missing.
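For example, in the .env (or .env.dev) file:

```
# .env — let OpenMRS initialize the schema on first startup
# against an empty database
OPENMRS_DB_CREATE_TABLES='true'
```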
Lucene Indexing Performance
The Lucene search index update process should typically complete within a reasonable timeframe. In our experience, on a production environment with approximately 100,000 patients, it usually takes around 10 minutes to complete. If you are experiencing significantly longer durations, it may indicate that the container running OpenMRS is constrained in terms of CPU or memory resources. We recommend checking your container resource allocations and adjusting them as needed to optimize performance.
Persistence of Lucene Indexes
As of now, Bahmni does not persist Lucene indexes across container recreates. This means the indexes are regenerated each time the container is recreated, which could add to the startup time. Your suggestion to persist the Lucene index data is a valuable one, and we appreciate you raising it. We will evaluate how best to incorporate persistent volumes for Lucene indexes in our deployment model to improve efficiency and reduce redundant computation on each recreate.
Please let us know if this clarifies your concerns. Feel free to reach out with any additional questions or blockers you encounter—happy to help you out.
Yes, I have set OPENMRS_DB_CREATE_TABLES='true'. What happens is that the two Liquibase tables are created. Then the log shows it’s calculating unapplied change sets. Then something goes wrong. It works correctly against a local copy of MySQL, but we are trying to move this to Azure and use Azure Database for MySQL version 8.0. I am very familiar with ORMs and have used Entity Framework for a decade, but not Liquibase. My guess is that something in the table creation process assumes the actual root user; in Azure the database is managed, so we have a user with root-like privileges, but we are not root.
2 - 3. Thank you for the answer. What do enterprises do for resilience? It seems like there can only be one instance of the server running; am I missing something? It took over two hours to index, and now every time I restart, it tries to reindex again.
The Lucene index is stored in a “lucene” folder in your OpenMRS application data directory, which should be very easy to put in a volume and thus persist between container recreations. The index can only be used by one container at a time.
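In Docker Compose terms, that could look roughly like the override below; the service name and the data-directory path are illustrative and may differ in your Bahmni setup:

```yaml
# docker-compose override sketch — persist the Lucene index in a named
# volume so it survives container recreation. Paths are illustrative.
services:
  openmrs:
    volumes:
      - openmrs-lucene:/openmrs/data/lucene
volumes:
  openmrs-lucene:
```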
There’s ongoing work for replication support in openmrs-core, but changes are not yet released. You can track the progress here.
I’m assuming the Lucene index is just a set of files, hence one process at a time. Moving to a server-based Elasticsearch makes sense. Any time frame?
I see a lot of AWS work, no Azure. Any reason why? We are Azure and willing to help with this.
Regarding Liquibase not creating the tables: any ideas? Also, in further reading I see that OpenMRS can run on Postgres; does that apply to the Bahmni side as well? If so, I’ll try that.
The Elasticsearch backend is already merged in core. It should be released in a few months, but then there’s a question when Bahmni and other distros are ready to use it.
Azure route could be supported by Kubernetes and the community maintained helm chart.
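A sketch of what that might look like (the repo URL and chart name here are hypothetical placeholders, not the actual chart coordinates):

```
# Hypothetical commands — substitute the real community chart repo/name.
helm repo add bahmni <chart-repo-url>
helm repo update
helm install bahmni bahmni/bahmni --namespace bahmni --create-namespace
```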
Most implementations are running on premise thus the focus on Kubernetes.
There’s a proprietary AWS setup because @jacob.t.mevorach created one and shared it with the community. You are more than welcome to share yours when you get it working. I meant this setup.