@raff, @pgesek, @darius, @burke
The technical section posted by @pgesek says that the slave will read the atom feed published by the master, and that there will be a push from the slave to the master.
I want to get your opinions on these approaches and discuss the architecture. I’m focusing primarily on the exchange of patient information and encounters.
**Slave Pull From Master**

In the current architecture, the master OpenMRS system would have to post an atom feed that is cohort-specific, and each slave system would read that atom feed every time it came online. When a clinic reads the feed, it would kick off a series of FHIR REST calls that retrieve the patient's information from the master; the most recent version of the patient record would be returned and imported into the slave OpenMRS database. Each record in the atom feed would act as a transaction, and the feed reader in the slave OpenMRS would advance the marker after each successful transaction.
To accomplish this, we would need to build business logic in the slave OpenMRS to read the atom feed from the master OpenMRS and act whenever there is an update in the feed. (I haven't seen evidence that this business logic already exists in OpenMRS.)
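For concreteness, here is a minimal sketch of what that slave-side logic might look like, using the HAPI FHIR client (DSTU2-era) against the master's FHIR endpoint. `FeedEntry`, `FeedReader`, and `LocalImporter` are hypothetical placeholders for the feed plumbing and the import service, not existing OpenMRS APIs:

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.model.dstu2.resource.Patient;
import ca.uhn.fhir.rest.client.IGenericClient;

// Hypothetical atom feed plumbing; not an existing OpenMRS API.
interface FeedEntry { String getPatientUuid(); }
interface FeedReader {
    String currentMarker();
    Iterable<FeedEntry> entriesSince(String marker);
    void advanceMarkerTo(FeedEntry entry); // persists the new marker
}
interface LocalImporter { void importOrUpdate(Patient patient); }

public class AtomFeedPuller {

    private final FeedReader feedReader;
    private final LocalImporter importer;
    private final IGenericClient remoteFhirClient;

    public AtomFeedPuller(FeedReader feedReader, LocalImporter importer, String remoteFhirBaseUrl) {
        this.feedReader = feedReader;
        this.importer = importer;
        this.remoteFhirClient = FhirContext.forDstu2().newRestfulGenericClient(remoteFhirBaseUrl);
    }

    /** Run whenever the clinic comes online. Each feed entry is one transaction. */
    public void pullOnce() {
        for (FeedEntry entry : feedReader.entriesSince(feedReader.currentMarker())) {
            try {
                // Fetch the most recent version of the patient from the remote FHIR API.
                Patient patient = remoteFhirClient.read()
                        .resource(Patient.class)
                        .withId(entry.getPatientUuid())
                        .execute();
                importer.importOrUpdate(patient);
                // Only advance the marker after a successful import.
                feedReader.advanceMarkerTo(entry);
            } catch (Exception e) {
                // Stop here; the marker stays on the last success, so this
                // entry is retried the next time the clinic comes online.
                break;
            }
        }
    }
}
```

The key property is that the marker only moves on success, which is what gives us the per-entry transaction semantics described above.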
**Slave Push to Master**

The push to master could be done in a few ways, and this is where I think we overlap with the MPI project.
First, I'd like to make sure we actually want to push messages from slave to master. We could theoretically have a multi-feed reader on the master OpenMRS that reads the atom feed from each slave OpenMRS on a regular schedule and queries that slave's FHIR REST API to get updates to the patient record.
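That alternative is mostly the same reader pointed the other way. A minimal sketch, assuming the master holds one feed reader per configured slave (reusing the hypothetical `AtomFeedPuller` above) and polls on a fixed schedule:

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MasterMultiFeedPoller {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    /** One AtomFeedPuller per slave feed, each tracking its own marker. */
    public void start(List<AtomFeedPuller> slaveFeeds) {
        scheduler.scheduleAtFixedRate(
                () -> slaveFeeds.forEach(AtomFeedPuller::pullOnce),
                0, 15, TimeUnit.MINUTES); // poll interval is illustrative
    }
}
```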
Assuming we do want to push messages from slave to master, we get into the area of message queuing at each clinic.
We need to discuss how we create the message that needs to be pushed from slave to master. My current thinking is to raise an event for each database transaction, as we currently do with the event module. That event could either:

A) generate an entry in an atom feed;
B) create a FHIR message for that particular message type and post it to the master; or
C) create a FHIR message for that particular message type and store it in an OpenMRS table that a third-party tool could read.
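All three options share the same first step: subscribing to the transaction event. A sketch using the Event module's subscribe API; the payload handling assumes the module delivers a JMS MapMessage carrying the changed object's uuid, which is worth verifying against the Event module docs:

```java
import javax.jms.MapMessage;
import javax.jms.Message;

import org.openmrs.Patient;
import org.openmrs.event.Event;
import org.openmrs.event.EventListener;

public class PatientChangePublisher implements EventListener {

    public void register() {
        // Fire this listener on every patient create/update transaction.
        Event.subscribe(Patient.class, Event.Action.CREATED.name(), this);
        Event.subscribe(Patient.class, Event.Action.UPDATED.name(), this);
    }

    @Override
    public void onMessage(Message message) {
        try {
            // Assumption: the Event module publishes a MapMessage with a "uuid" key.
            String uuid = ((MapMessage) message).getString("uuid");
            // From here we branch into scenario A, B, or C:
            //   A) write an atom feed entry for this uuid;
            //   B) build a FHIR message and POST it to the master now;
            //   C) build a FHIR message and persist it to a local table.
            handleChangedPatient(uuid);
        } catch (Exception e) {
            // Swallowing here loses the event; a real listener would log
            // and/or requeue (left out of this sketch).
        }
    }

    private void handleChangedPatient(String uuid) {
        // scenario-specific logic goes here
    }
}
```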
In scenario A, we generate an entry in the atom feed. We would then need to build the business logic to read the atom feed locally, query the REST API, and create the FHIR message(s). This would likely be done with a third-party tool like Mirth.
In scenario B, we create a FHIR message for each message type and try to push that message to the master OpenMRS. If successful, great. If not, we would need a mechanism to retry on a regular schedule, audit failed transactions, and give administrators visibility into the process. We could build this queuing mechanism natively in OpenMRS, or we could use a third-party tool like Mirth to manage the interaction with the master OpenMRS.
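As a sketch of what the native queuing in scenario B could look like, assuming a hypothetical `OutboundQueue` with audit fields for administrator visibility (none of this is an existing API):

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.model.dstu2.resource.Patient;
import ca.uhn.fhir.rest.client.IGenericClient;

public class MasterPusher {

    /** Hypothetical persistent queue with audit columns (status, attempts, last error). */
    interface OutboundQueue {
        void enqueueForRetry(Patient patient, Exception cause);
        void recordSuccess(Patient patient);
    }

    private final IGenericClient masterClient;
    private final OutboundQueue queue;

    public MasterPusher(String masterFhirBaseUrl, OutboundQueue queue) {
        this.masterClient = FhirContext.forDstu2().newRestfulGenericClient(masterFhirBaseUrl);
        this.queue = queue;
    }

    public void push(Patient patient) {
        try {
            masterClient.update().resource(patient).execute(); // upsert by id on the master
            queue.recordSuccess(patient);                      // visible to administrators
        } catch (Exception e) {
            queue.enqueueForRetry(patient, e);                 // retried on a schedule
        }
    }
}
```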
In scenario C, we generate the FHIR message and store it locally within OpenMRS. In this scenario, we offload the transport and auditing to a third-party tool like Mirth.
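A sketch of the storage side of scenario C, assuming a hypothetical `outbound_fhir_message` table; the external tool would poll for untransported rows and mark them on success:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.UUID;

import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.model.dstu2.resource.Patient;

public class OutboundMessageStore {

    private final FhirContext fhirContext = FhirContext.forDstu2();

    public void store(Connection connection, Patient patient) throws SQLException {
        String payload = fhirContext.newJsonParser().encodeResourceToString(patient);
        // Hypothetical table: outbound_fhir_message(uuid, payload, created_at, transported_at)
        String sql = "INSERT INTO outbound_fhir_message (uuid, payload, created_at) "
                   + "VALUES (?, ?, now())";
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setString(1, UUID.randomUUID().toString());
            ps.setString(2, payload);
            ps.executeUpdate();
        }
        // The third-party tool later runs something like:
        //   UPDATE outbound_fhir_message SET transported_at = now() WHERE uuid = ?
    }
}
```

Because the transport status lives in an OpenMRS table, administrators keep visibility into untransported messages even when the third-party tool is down.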
I've been thinking about this problem for some time. If we go with scenario B and post these messages to a local third-party tool like Mirth, we may end up in a situation where that local tool is not available. In that event, we would still need visibility into failed transactions from within OpenMRS. That's why scenario C is attractive: it lets a third-party tool handle the transport and mark each message as successfully transported once it goes through.
What do you think about these different architectures?
Craig
FYI @mogoodrich, @mseaton