Grafana centralised logging and monitoring service

I’m excited to announce that we now have a centralised Grafana logging and monitoring service for OpenMRS!

The common setup is available for OpenMRS O3 in docker-compose and Kubernetes!

You can see it in action at https://o3-k8s.openmrs.org/grafana/ (login with user: admin and password: Admin123).

Logs from all services can now be found in a single place on a dedicated Grafana dashboard. You can quickly see error and warning counts, filter by service, list error logs, search, and more.
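For anyone curious what sits underneath a dashboard like this: Grafana queries Loki's HTTP API with LogQL. Here is a minimal sketch of building such a query programmatically; the Loki base URL and the "container" label name are assumptions for illustration, not taken from the actual setup.

```python
# Sketch: build a Loki query_range URL that filters one service's ERROR lines.
# The base URL and the "container" label are assumptions, not the actual setup.
from urllib.parse import urlencode


def build_error_query(base_url: str, service: str, limit: int = 100) -> str:
    """Return a Loki query_range URL selecting ERROR-level lines of one service."""
    # LogQL: a label selector plus a line filter for the substring "ERROR"
    logql = f'{{container="{service}"}} |= "ERROR"'
    params = urlencode({"query": logql, "limit": limit})
    return f"{base_url}/loki/api/v1/query_range?{params}"


url = build_error_query("http://localhost:3100", "openmrs-backend")
print(url)
```

You could feed a URL like this to curl or requests to list error lines outside Grafana, e.g. from a script.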

Soon we will be adding alerts so you are notified about errors, e.g. via email. We are also planning to add metrics for monitoring hardware utilisation so you can act when, for example, you are running out of storage.

The best part is that if you use docker-compose or the OpenMRS Kubernetes Helm chart, it’s extremely easy to deploy! Just copy over the needed files or enable the feature in the Helm chart; no additional configuration is needed. It comes with a pre-configured dashboard.

We welcome tweaks and improvements! If you make any changes in the dashboard, please do share them with the broader community so we can include them in the common setup and everyone can benefit!

Thank you!


What does the worst part look like? Or what detailed steps would it require? :blush:

Please see the README I linked. It’s just a matter of copying config files and running docker-compose.

In K8s it’s a matter of setting monitoring.enabled: true when deploying the chart and adjusting the ingress config if needed.
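For the Helm route, that amounts to a values override roughly like the following. Only the monitoring.enabled flag is taken from the post above; the file name and release/chart names in the command are hypothetical and depend on your setup.

```yaml
# monitoring-values.yaml -- sketch; only monitoring.enabled comes from the post above
monitoring:
  enabled: true
```

Applied with something like `helm upgrade --install openmrs <chart> -f monitoring-values.yaml` (release and chart names are placeholders).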

I think I’m going to make a small adjustment to the gateway service so that Grafana can be hidden behind nginx in the docker-compose setup.
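For context, hiding Grafana behind the gateway usually means adding a reverse-proxy location. A minimal nginx sketch, where the upstream service name, port, and path are assumptions rather than the actual gateway config:

```nginx
# Hypothetical nginx location; service name "grafana" and port 3000 are assumptions.
location /grafana/ {
    proxy_pass http://grafana:3000/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

Grafana itself typically also needs its root URL configured for the sub-path (e.g. the GF_SERVER_ROOT_URL environment variable) so that links and redirects resolve correctly.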

Hey @raff! This is great! One thing: instead of using volume mounts to get the configuration into the loki, alloy, and grafana containers, is it possible to set up our own Dockerfiles to create those? This helps with getting things deployed in our weird infrastructure setup.

@ibacher good idea! I’m thinking of a small init container that holds the whole config and copies it over to Alloy, Grafana, and Loki on startup. That will make it super easy to distribute updates.
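In docker-compose terms, that init pattern could look roughly like this; the service names, image name, and volume layout are all hypothetical illustrations of the idea, not the actual files.

```yaml
# Hypothetical sketch of the init pattern: a one-shot container copies bundled
# config into a shared volume that Grafana (and likewise Loki/Alloy) mounts.
services:
  monitoring-init:
    image: openmrs/monitoring-config:latest   # hypothetical image name
    command: ["cp", "-r", "/config/.", "/out/"]
    volumes:
      - monitoring-config:/out
  grafana:
    image: grafana/grafana
    depends_on:
      monitoring-init:
        condition: service_completed_successfully
    volumes:
      - monitoring-config:/etc/grafana/provisioning
volumes:
  monitoring-config:
```

Updating the config then only requires pulling a newer init image, which is what makes distribution easy.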


Yeah, that sounds great!

Great stuff @raff!

We’ve been using Grafana at @SolDevelo for our OpenLMIS implementations, and I think we have some configs that we could contribute here.

For instance, we built out a custom dashboard for endpoint health-checks (uptime, SSL cert expiration, etc.). It basically allowed us to ditch Pingdom for an entirely OSS stack.

Let me know if you think these would be a good fit for the O3 setup; we’d be happy to share them.


@pwargulak absolutely, please do share! Our dashboards go in the openmrs/openmrs-distro-referenceapplication repository on GitHub, under monitoring/dashboards.

Grafana made it to dev3.

The whole config is now packaged as a Docker init image, so it’s even easier to deploy. You just need to copy over docker-compose.grafana.yml.

This is great @raff

Phenomenal addition, thanks @raff!

Hi @raff, I have been working on the monitoring stack of OpenLMIS. I think the tool I have been using for probing services might come in handy for the OpenMRS monitoring service.

In OpenLMIS we use it to test endpoint availability, response time, and connection details (the tool’s base feature), but also to view the uptime and availability of instances and to analyze downtimes that occurred. Take a look at some screenshots from a running system below.

I have successfully configured the tool for OpenMRS, setting it up to probe the app’s backend and frontend. For deployments, the tool can be configured to probe services directly, through load balancers, and via DNS, so you can tell which part is misbehaving in case of problems.
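Since the tool isn’t named above, here is only a tool-agnostic sketch of what such an endpoint-probe configuration tends to look like; every key, name, and URL is a hypothetical illustration, not the contents of the actual PR.

```yaml
# Hypothetical, tool-agnostic probe config sketch; see the linked PR for the real one.
probes:
  - name: openmrs-frontend
    url: https://o3.example.org/openmrs/spa/   # probed through the load balancer
    interval: 60s
    expect_status: 200
  - name: openmrs-backend-direct
    url: http://openmrs-backend:8080/openmrs/ws/rest/v1/session   # probed directly
    interval: 60s
    expect_status: 200
```

Probing the same service both directly and through the load balancer is what lets you tell which layer is misbehaving when a check fails.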

Take a look at the PR I have prepared: “Monitoring extension” (openmrs/openmrs-distro-referenceapplication#971) on GitHub. If you have any questions, I’d be happy to provide more info.


Thanks! It’s a great addition to our setup! I’ll try to find some time this week to review and test it out!