FHIR module and export/import-based testing?

I thought of an approach that could give the FHIR module some extra mindspace and interest: prioritize making every FHIR read of OpenMRS data symmetrically writable into another OpenMRS server with equivalent metadata.

This would mean that the FHIR module becomes immediately interesting to everyone wanting to synchronize/import/export data between different OpenMRS servers.

Imagine having a test for every data resource where you:

  1. Set up test metadata ***
  2. Set up test data
  3. Verify expected data ***
  4. GET query against FHIR, and save the resulting resource as xml/json

Then reset the database and:

  1. Set up test metadata ***
  2. POST query against FHIR, posting the xml/json from the GET
  3. Verify test data ***

(The *** lines would be exactly the same when generating via GET and replaying via POST.)
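The round-trip test above can be sketched in miniature. This is a minimal, hedged illustration, not the FHIR module's actual API: `to_fhir` and `from_fhir` are hypothetical stand-ins for the GET and POST steps, and the `Patient` fields are an assumed subset. A real test would hit the module's REST endpoints against two databases with equivalent metadata.

```python
import json

# Hypothetical stand-ins for the FHIR module's GET and POST handling;
# a real test would exercise the REST API against a live server.
def to_fhir(patient):
    """GET step: render internal OpenMRS-style data as a FHIR-style resource."""
    return {
        "resourceType": "Patient",
        "id": patient["uuid"],
        "name": [{"family": patient["family"], "given": [patient["given"]]}],
        "gender": patient["gender"],
    }

def from_fhir(resource):
    """POST step: ingest the saved FHIR resource back into internal form."""
    name = resource["name"][0]
    return {
        "uuid": resource["id"],
        "family": name["family"],
        "given": name["given"][0],
        "gender": resource["gender"],
    }

# Steps 1-4: set up test data, GET via FHIR, save the resource as JSON.
original = {"uuid": "abc-123", "family": "Doe", "given": "Jane", "gender": "female"}
exported = json.dumps(to_fhir(original))

# After resetting the database: POST the saved JSON, then verify the test data.
replayed = from_fhir(json.loads(exported))
assert replayed == original, "FHIR read/write round trip is not symmetric"
```

If the final assertion holds for every resource type, the "verify" steps really can be identical in the GET-generating and POST-replaying runs.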

I wouldn’t expect the FHIR module to directly provide any import/export/sync workflows or functionality, but if the standard dev process and test approach of the FHIR module ensured and certified that our FHIR resources could be read and written symmetrically, I can definitely see someone wanting to build that functionality on top of it.


I just want to add my enthusiastic support for this, and for the related point that Darius made: do this in a way that implementations on 1.9.x (or 1.10.x at most) can utilize.


Great idea @darius. FHIR has an interesting operation called $everything, which is defined for Encounter and Patient. Patient/id/$everything will return a dump of the resources associated with the patient with the given id.
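For context, the $everything operation is just an extra path segment on the resource URL. A tiny sketch of the request shape, with a made-up base URL:

```python
def everything_url(base, resource_type, resource_id):
    """Build a FHIR $everything request URL (the operation is defined
    for Patient and Encounter)."""
    return f"{base}/{resource_type}/{resource_id}/$everything"

# The base URL is hypothetical; substitute your server's FHIR endpoint.
url = everything_url("https://example.org/openmrs/ws/fhir", "Patient", "42")
print(url)  # https://example.org/openmrs/ws/fhir/Patient/42/$everything
```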

This would be a really interesting feature. We can build an initial version of this kind of functionality soon and develop it as time progresses. :)

@surangak any more comments?


I’d just add that the key fundamental piece to this is making sure that producing vs ingesting FHIR representations of OpenMRS data are symmetric, and nondestructive of any data that doesn’t fit in the FHIR rep.

Having it be complete (i.e. all OpenMRS data points are included in the FHIR rep) and automated/easy (e.g. supporting $everything) are icing on the cake, but aren’t necessary for the first version.

Basically, if you ensure just the fundamental part, then others have the opportunity to build on top of it. And they probably will, without you having to do anything. :slight_smile:

This sounds scary. A patient seen once shouldn’t be a problem, but a patient I’ve treated for 20 years could make this a DoS request. :wink:

I trust the “everything” response is paginated in cases where hundreds or thousands of encounters & observations are involved.


For now, AFAIK there is no pagination concept, so this could definitely lead to a DoS. We might need extra protection on this method. But we can send a set of parameters in the $everything request to limit the results based on the incoming values. :slight_smile:
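Once paging does exist, a client would drain the $everything result by following the Bundle's `next` links rather than expecting everything in one response. A minimal sketch, where the `pages` dict simulates a server serving a paginated FHIR-style Bundle (the page tokens and entry values are made up for illustration):

```python
def fetch_everything(pages):
    """Collect all entries from a paginated $everything response by
    following 'next' link relations until none remains.

    `pages` simulates the server: a dict mapping a page token to a
    FHIR-style Bundle fragment with entries and optional links.
    """
    token, entries = "page0", []
    while token is not None:
        bundle = pages[token]
        entries.extend(bundle["entry"])
        nxt = [l for l in bundle.get("link", []) if l["relation"] == "next"]
        token = nxt[0]["url"] if nxt else None
    return entries

# Two simulated pages: encounters first, then observations.
pages = {
    "page0": {"entry": ["enc1", "enc2"],
              "link": [{"relation": "next", "url": "page1"}]},
    "page1": {"entry": ["obs1"], "link": []},
}
assert fetch_everything(pages) == ["enc1", "enc2", "obs1"]
```

The server stays protected because each response is bounded, and the client still gets the complete record.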

@darius created a JIRA issue to track this improvement:

For the first cut, we can implement some general functionality that exposes several core OpenMRS objects through FHIR. Some OpenMRS resources do not have all their attributes in the corresponding FHIR resource, and vice versa, but for a first cut that should be fine.


According to FHIR’s RESTful API page, paging is expected to follow RFC 5005 – Feed Paging and Archiving.

Personally, I think services like GitHub have gotten paging right – i.e., the server limits the maximum number of results returned and clients have the option of asking for smaller – not bigger – pages (e.g., using something like GitHub’s per_page parameter). Of course, we should follow RFC 5005, but I would support a well-documented maximum paging size in all cases. There shouldn’t be any RESTful call that returns more than the maximum number of results in a single response.
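The GitHub-style policy described above (a hard server-side ceiling, with clients only able to shrink the page) boils down to a one-line clamp. A sketch, where `MAX_PAGE_SIZE` and the parameter name are assumed values for illustration:

```python
MAX_PAGE_SIZE = 100  # server-enforced ceiling (assumed value)

def effective_page_size(requested=None):
    """Clamp a client-requested page size (e.g. a per_page-style
    parameter): clients may ask for smaller pages, never bigger."""
    if requested is None or requested > MAX_PAGE_SIZE:
        return MAX_PAGE_SIZE
    return max(1, requested)

assert effective_page_size() == 100      # no preference: server maximum
assert effective_page_size(500) == 100   # too big: silently clamped
assert effective_page_size(25) == 25     # smaller is always allowed
```

With this in place, no RESTful call can return more than `MAX_PAGE_SIZE` results in a single response, regardless of what the client asks for.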


Yep, @burke, that should be there. :slight_smile: The HAPI FHIR library itself supports this to some extent, so we should be able to use it too.