Optimizing the Backend: Examples of slow responses in O3

I agree that any work on performance is best done in discrete tickets with clear goals (e.g., “Improve performance of XYZ API call to respond within 100 ms”) and a test environment/scenario predictable enough to reliably measure the improvement. Ideally, we’d have unit tests to catch any regressions; however, we have to be careful not to create tests that fail randomly 10% of the time (e.g., when a CI environment is under unusual load).
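To illustrate the kind of regression test I have in mind without inviting flakiness, here is a minimal JUnit 5 sketch (the `PatientSearchService`, the query, and the budget are made-up placeholders, not real OpenMRS APIs): warm up first, take several samples, and assert on the median against a deliberately generous budget so a single CI hiccup doesn’t fail the build.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.Arrays;
import org.junit.jupiter.api.Test;

class PatientSearchPerformanceTest {

    // Hypothetical stand-in for the real call under test; not an actual OpenMRS API.
    private static final class PatientSearchService {
        String search(String name) {
            return "results for " + name;
        }
    }

    private final PatientSearchService service = new PatientSearchService();

    @Test
    void searchRespondsWithinBudget() {
        // Warm up so JIT compilation and cold caches don't skew the first samples.
        for (int i = 0; i < 5; i++) {
            service.search("Smith");
        }

        // Take several samples and assert on the median rather than a single run,
        // so one slow scheduling hiccup on shared CI hardware doesn't fail the build.
        long[] samplesMs = new long[9];
        for (int i = 0; i < samplesMs.length; i++) {
            long start = System.nanoTime();
            service.search("Smith");
            samplesMs[i] = (System.nanoTime() - start) / 1_000_000;
        }
        Arrays.sort(samplesMs);
        long median = samplesMs[samplesMs.length / 2];

        // Budget deliberately looser than the local target (e.g., 100 ms -> 300 ms)
        // to tolerate noisy CI environments.
        assertTrue(median <= 300, "median search latency was " + median + " ms");
    }
}
```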

On the other hand, I also think we would benefit from a strategic approach to prioritizing performance improvements and defining targets. I would expect a strategic approach to address issues that might not be considered on a ticket-by-ticket basis. For example:

  • Should we spend effort fixing a method that takes 2000 ms to respond but is only called once, while deferring work on a method that responds in 100 ms but is called hundreds of times? (See the sketch after this list.)
  • Can we use tools like Lighthouse to identify and prioritize targets? It seems like a combination of proactively identifying worst offenders + user-identified pain points would be best.
  • Avoid focusing on technical performance to the point of losing sight of the fact that perceived performance is more important than actual performance.
  • When should we put effort into improving the performance of a FHIR endpoint vs. bypassing FHIR and using our custom REST API? If our goal is to become increasingly FHIR-compliant and reduce the OpenMRS-specific learning curve over time, then any such change is a trade-off between performance and technical debt.
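To put some (made-up) numbers behind the first bullet: what matters for prioritization is aggregate time, i.e., per-call latency multiplied by call count, not the scarier-looking per-call number. A quick sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AggregateImpact {
    public static void main(String[] args) {
        // Hypothetical per-call latency (ms) and call counts over some window.
        Map<String, long[]> endpoints = new LinkedHashMap<>();
        endpoints.put("slowButRare", new long[] {2000, 1});     // 2000 ms x 1 call
        endpoints.put("fastButChatty", new long[] {100, 300});  // 100 ms x 300 calls

        // Rank by total time spent, which is what users and servers actually pay.
        endpoints.forEach((name, v) -> {
            long totalMs = v[0] * v[1];
            System.out.printf("%s: %d ms total%n", name, totalMs);
        });
        // slowButRare:   2000 ms total
        // fastButChatty: 30000 ms total
    }
}
```

In this illustration, the “fast” 100 ms method costs 15x more total time than the 2000 ms one, so it is probably the better target.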

We probably need to think about performance top-to-bottom (i.e., not just as a backend or frontend issue… or as something for backend & frontend “teams” to consider separately).

I think we have a big enough bandwidth problem (chattiness) in OpenMRS 3 that we should address it separately from performance. For instance, we’ve seen OpenMRS 3 distributions using 30x the bandwidth of OpenMRS 2.x over a month. Perhaps this deserves a separate thread.
