The OpenMRS Platform 3.0.0 is currently in active development. This next-generation release aims to modernize the backend by upgrading to Java 21, along with the latest versions of Spring and Hibernate. You can find more details through the following links.
As this will be a major release, we have the opportunity to introduce significant changes, including those that are not backward compatible. We invite you to share your suggestions, whether they are new features or improvements to existing systems, that you would like to see in Platform 3.0.0. We look forward to hearing your ideas.
It would be nice if we scrapped our custom authorization scheme and reworked it in terms of Spring Security. In particular, being able to support Spring Security’s ACLs may give us better solutions to certain issues.
It would be nice if we could swap the custom validation layer for Hibernate Validator.
We should replace the current password scheme with something a little more “modern”. There’s nothing particularly broken about our current scheme; it just uses multiple iterations of hashing. In particular, we should support one of the schemes laid out here. We should also probably change the stored format to something similar to what Spring Security uses. Our current scheme doesn’t record which hashing method was used, so upgrading password hashes is a matter of taking the plaintext and trying every algorithm until one matches. Spring Security’s format is unambiguous and allows other properties to be encoded (e.g., the salt is part of the parameters for many password schemes; placing it here would let us more easily switch to a unique salt per password rather than a unique salt per user).
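To illustrate why the stored format matters, here is a minimal sketch (not the actual OpenMRS implementation) assuming a Spring-Security-style `{algorithmId}encodedHash` prefix format, which makes the hashing scheme self-describing:

```java
// Sketch: a stored hash like "{bcrypt}$2a$10$..." records its own algorithm,
// so upgrading no longer means trying every known scheme against the plaintext.
public class StoredPasswordFormat {

    /** Extracts the algorithm id from a stored hash like "{bcrypt}$2a$10$...". */
    public static String algorithmId(String stored) {
        if (stored.startsWith("{") && stored.indexOf('}') > 0) {
            return stored.substring(1, stored.indexOf('}'));
        }
        // Legacy hashes carry no prefix, which is exactly the ambiguity described above.
        return "unknown";
    }

    public static void main(String[] args) {
        System.out.println(algorithmId("{bcrypt}$2a$10$abcdefghijklmnopqrstuv"));
        System.out.println(algorithmId("5f4dcc3b5aa765d61d8327deb882cf99"));
    }
}
```

For schemes like Argon2, the encoded portion can also carry the salt and cost parameters, which is what would enable the per-password salt mentioned above.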
A key facility added by the authentication module is the DelegatingAuthenticationScheme, which allows custom authentication schemes to be plugged in via runtime configuration. It would be nicer to adopt this mechanism rather than what we have now, which allows only a single authentication scheme. (Using Spring Security might override this altogether, but I was thinking of that more as an authorization plugin rather than a replacement for authentication.)
Some of our Exception classes have constructors that take message keys, but not all of them do, resulting in cases where exceptions return message keys rather than the actual message. This is technically a breaking change, but it would be better if all messages passed to APIException and its descendants were handled as if they might be translatable keys.
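The proposed behaviour could look something like the following hedged sketch, where any message is first tried as a key and falls back to the literal text (class and method names here are illustrative, not the real API):

```java
import java.util.Map;

// Hypothetical sketch: messages passed to APIException and descendants are
// treated as possibly-translatable keys, with the literal string as fallback.
public class MessageKeyResolver {

    private final Map<String, String> messages;

    public MessageKeyResolver(Map<String, String> messages) {
        this.messages = messages;
    }

    /** Returns the translation if the argument is a known key, else the argument itself. */
    public String resolve(String messageOrKey) {
        return messages.getOrDefault(messageOrKey, messageOrKey);
    }

    public static void main(String[] args) {
        MessageKeyResolver r = new MessageKeyResolver(
                Map.of("error.patient.notFound", "Patient could not be found"));
        System.out.println(r.resolve("error.patient.notFound")); // resolved key
        System.out.println(r.resolve("Something went wrong"));   // literal fallback
    }
}
```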
These are roughly ordered from least easily implementable to most easily implementable.
This update will drop support for older Java versions and migrate our codebase to Java 21. It will also include updates to Spring and Hibernate that are not going to be backward compatible. These changes are the main reason for moving to the 3.x major version line. You can find more details in this discussion.
Yeah, I think the most important part here is that the libraries we depend on for Core have already made breaking changes in their supported versions of the Java ecosystem. E.g.,
Spring 6: JDK 17+, plus the move from Java EE → Jakarta EE, with all the packages renamed (so Tomcat 10+ as a minimum)
Hibernate 6: Java 11+, move to Jakarta Persistence (Hibernate 7 is Java 17 minimum, but that won’t be supported until Spring 7)
Basically, keeping up with our dependencies, even staying not on the bleeding edge but just on supported releases, requires us to make breaking changes in the backend, so platform 3.x is kind of forced.
I would be interested in us making any changes we feel are appropriate to core to better support the goals laid out in this thread: Asynchronous message queuing, retries, and error handling. Basically, I think this involves adding event firing and listener registration to core such that events have guaranteed delivery and are transactional (listeners can choose to participate synchronously in the transaction or run asynchronously outside of it), and such that no module ever needs to implement a Hibernate Interceptor but can simply rely on the core event-publishing API with 100% confidence.
While this could also be done in an improved event module, doing it in core allows us to fire events at all sorts of interesting places throughout the core codebase that a module may be interested in plugging into, and lets us get rid of AOP and the various Hibernate Interceptors altogether.
I would also be interested in improving the module loading lifecycle such that modules can have more fine-grained control over when certain functionality might be executed, whether in relation to core or in relation to other modules. A few specific examples:
My top-level distribution module depends on Initializer, but wants to invoke the Initializer API directly to have more control over when and how it loads the configuration, rather than have this happen during the Initializer activator’s started method. Currently this can only be controlled at the distribution level by setting certain runtime or system properties. It cannot be coded into the top-level module.
My implementation has a bunch of data that was created in an older version of OpenMRS and now fails validation rules. I want to write some migration scripts and deploy them in my module and have them execute before OpenMRS runs any liquibase changesets. Currently there is no way for a module to inject behavior into this phase of OpenMRS startup.
I’m not really sure how sophisticated it can or needs to be. Simply being able to set a configuration variable or flag that the upstream module is aware of would likely be enough, as long as it could be set prior to its use. The main issue, I think, is the order in which module lifecycle events are executed.
I believe we currently do something like this:
Module A Before Started
Module A Started
Module A After Started
Module B Before Started
Module B Started
Module B After Started
whereas we could address this issue by adding more lifecycle events and also by grouping the event execution by event type rather than by module like this:
Module A Before Started
Module B Before Started
Module A Started
Module B Started
Module A After Started
Module B After Started
This would allow Module B to instruct Module A to do something by putting the instructions in its “Before Started” method, which would then be used in the parent module’s “Started” method.
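The grouped ordering above can be sketched as an outer loop over lifecycle phases and an inner loop over modules (phase and module names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: grouping lifecycle callbacks by phase rather than by module, so that
// every module's "Before Started" runs before any module's "Started".
public class GroupedLifecycle {

    static final List<String> PHASES = List.of("Before Started", "Started", "After Started");

    public static List<String> run(List<String> modules) {
        List<String> order = new ArrayList<>();
        for (String phase : PHASES) {          // outer loop: lifecycle phase
            for (String module : modules) {    // inner loop: each started module
                order.add(module + " " + phase);
            }
        }
        return order;
    }

    public static void main(String[] args) {
        run(List.of("Module A", "Module B")).forEach(System.out::println);
    }
}
```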
If we can’t do this with the existing lifecycle methods for backwards-compatibility reasons, we might be able to do so by introducing additional, new lifecycle methods.
We might also be able to build on my other request above and leverage a more expansive event system. In this case we could just have modules fire events and allow other modules to subscribe to those events. So Module A could fire a “beforeStarted” event, and Module B could subscribe to “beforeStarted” events, filter on those from “Module A”, and then do what it needs to do before Module A hits its started method.
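A minimal sketch of that subscription model, assuming a hypothetical in-process event bus (none of these names are an existing OpenMRS API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch: modules publish lifecycle events; other modules subscribe and
// filter on the source module, as in the Module A / Module B example above.
public class ModuleEventBus {

    public record ModuleEvent(String sourceModule, String phase) {}

    private final List<Consumer<ModuleEvent>> listeners = new ArrayList<>();

    public void subscribe(Consumer<ModuleEvent> listener) {
        listeners.add(listener);
    }

    public void publish(ModuleEvent event) {
        listeners.forEach(l -> l.accept(event));
    }

    public static void main(String[] args) {
        ModuleEventBus bus = new ModuleEventBus();
        // Module B reacts only to Module A's beforeStarted event.
        bus.subscribe(e -> {
            if (e.sourceModule().equals("Module A") && e.phase().equals("beforeStarted")) {
                System.out.println("Module B: configuring Module A before it starts");
            }
        });
        bus.publish(new ModuleEvent("Module A", "beforeStarted"));
        bus.publish(new ModuleEvent("Module C", "beforeStarted")); // ignored by the filter
    }
}
```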
A different approach could be harvesting the liquibase domain out of the Initializer module, and instead supporting something like this natively within core. i.e. provide a core-supported mechanism to execute liquibase changesets located on the file system in the application-data directory somewhere, and to provide some ability to control when these are executed with respect to core changesets (i.e. before or after).
@mseaton I’m not in favour of running liquibase changesets located on the file system. I do feel they should belong to some module and thus be versioned and distributed that way.
As part of TRUNK-6418 we’ll add new methods to module activator for running setup on version change. Please see if it works for you. If there’s no method to easily run liquibase from setup methods (if you want to write migrations that way), we could add that as well in a separate issue.
@mseaton supporting message queuing in core is a bigger chunk of work. I’m not sure we have space or funding to include that in the 3.0 release; 3.0 is already a big one. I’m definitely interested in working on that, though.
Thanks @wikumc for bringing this up. I guess this would be an ideal time to implement the transition from XStream to Jackson, as it involves some backward-incompatible changes. The work on this is being tracked on TRUNK-6351.
That’s fair. I wasn’t trying to suggest that the changesets not be versioned and distributed and installed in a controlled manner, only that we don’t necessarily need a module (jar/omod) to package them. Content packages can also serve this role, and are already doing so with liquibase changesets via Initializer’s liquibase domain. It’s probably best to leave it there, I was just throwing it out there to brainstorm / provoke discussion. Again, the main issue here is just when these are executed, and not having enough control over that.
It isn’t 100% clear to me from the ticket description, but if the plan is to allow modules to inject setup code before core liquibase changesets run, not just the module’s own liquibase changesets, then yes this should likely meet the need.
We don’t need to tackle queuing or any asynchronous processing in core in phase 1, or even ever. I am primarily talking about an Event and EventListener interface, firing events, registering listeners, and iterating over registered listeners in the same transaction, in the same thread as the originating event. Think of this as a replacement mechanism for any custom AOP or Hibernate Interceptors.
One such core listener might have a role of taking an event and stuffing it into an asynchronous message queue like ActiveMQ, and dealing with the complexity of asynchronous/queued messaging, delivery successes and failures and retries and all of that. But that would just be one potential consumer of the core events, and that could remain in the event module or elsewhere.
What I am interested in is a solid, trusted core mechanism that can provide a preferred alternative to AOP or Hibernate Interceptors. I don’t think this is a ton of work.
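As a rough illustration of the mechanism being proposed, here is a hedged sketch of synchronous, same-thread listener dispatch; the interface, class, and event names are hypothetical, and the real design would need generics over event types and transaction integration:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: listeners are invoked synchronously in the publishing thread (and
// hence inside the same transaction), as an alternative to custom AOP or
// Hibernate Interceptors.
public class CoreEvents {

    public interface EventListener<E> {
        void onEvent(E event);
    }

    public record EntitySavedEvent(String entityType, Object entity) {}

    private static final List<EventListener<EntitySavedEvent>> listeners = new ArrayList<>();

    public static void register(EventListener<EntitySavedEvent> listener) {
        listeners.add(listener);
    }

    /** Called by core wherever an interceptor would previously have hooked in. */
    public static void fire(EntitySavedEvent event) {
        for (EventListener<EntitySavedEvent> l : listeners) {
            l.onEvent(event); // same thread, same transaction as the save
        }
    }

    public static void main(String[] args) {
        register(e -> System.out.println("Saved " + e.entityType()));
        fire(new EntitySavedEvent("Patient", new Object()));
    }
}
```

A queue-publishing consumer like the ActiveMQ listener described above would simply be one more registered listener.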
Would you just do application-level events without CDC? If that’s what you have in mind, then I agree it’s less work. It’s a matter of publishing some events consistently from e.g. OpenMRS service AOP and Hibernate interceptors with https://www.baeldung.com/spring-events. We don’t really need to build any framework around that (aside from some model classes to store event details); we can just use Spring Events with three annotations for consuming events: @EventListener, @TransactionalEventListener (which allows selecting whether to execute during or after the transaction), and @Async.
If you have in mind reliable CDC with Debezium (asynchronous), it’s more work, especially supporting replication of OpenMRS nodes and running an embedded or standalone Debezium server. Also, I would provide ActiveMQ via core rather than a module, both for clustering support and so that we don’t miss events fired before a module has started and ActiveMQ is available.
ModuleActivator#setupOnVersionChangeBeforeSchemaChanges(String previousCoreVersion, String previousModuleVersion) (run before any liquibase upgrades including core)
ModuleActivator#setupOnVersionChange(String previousCoreVersion, String previousModuleVersion) (run after all liquibase upgrades)
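A module activator using those two hooks might look like the sketch below; the exact parent class and signatures are per TRUNK-6418 and may differ in the final implementation (the internal log list is only there to make the sketch observable):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical activator exercising the two version-change hooks named above.
public class MyModuleActivator /* extends BaseModuleActivator */ {

    public final List<String> setupLog = new ArrayList<>();

    /** Runs before any liquibase upgrades, including core's. */
    public void setupOnVersionChangeBeforeSchemaChanges(String previousCoreVersion,
                                                        String previousModuleVersion) {
        // e.g. repair legacy data so that core changesets can apply cleanly
        setupLog.add("pre-schema setup, upgrading from core " + previousCoreVersion);
    }

    /** Runs after all liquibase upgrades have completed. */
    public void setupOnVersionChange(String previousCoreVersion,
                                     String previousModuleVersion) {
        setupLog.add("post-schema setup, module was " + previousModuleVersion);
    }

    public static void main(String[] args) {
        MyModuleActivator activator = new MyModuleActivator();
        activator.setupOnVersionChangeBeforeSchemaChanges("2.6.0", "1.0.0");
        activator.setupOnVersionChange("2.6.0", "1.0.0");
        activator.setupLog.forEach(System.out::println);
    }
}
```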
Correct. A full CDC solution with Debezium or similar mechanism that could cover all database changes would be a separate initiative, and could theoretically be a source of these core events. So I see the core event and listener constructs as still a missing and fundamental piece.
There is also a large gap in the event module in that, AFAIK, it only supports asynchronous consumption of events and does not allow listeners to participate in the transaction that fires the original event. In my experience, for the majority of use cases in which we have used the event module, we would be much better off sacrificing asynchronous performance for transactional reliability: both the event producer and the listener are generally performing their operations within the OpenMRS JVM rather than interfacing with some external service, and are injecting behavior much as one might with AOP or a Hibernate Interceptor.