Should we have more forks?

Continuing the discussion from 1.11.4 upgrade fails due to person change to datetime:

There’s another way to handle a slow-to-release platform, with downstream projects wanting a faster timeline for new features. It’s the word-that-cannot-be-mentioned in OpenMRS-land: the fork.

One approach that would have worked here would be for the Bahmni team to implement this feature in openmrs/openmrs-core master, so that they know it will eventually show up in an OpenMRS release. Then they would create a fork at bahmni/openmrs-core (of the 1.11.x line) and backport the change there. Eventually, when Platform 2.0 is released and Bahmni is ready to upgrade to it, they would abandon their fork and go back to the main line of code.
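Concretely, the "backport fork" flow above might look something like this sketch. It uses a throwaway local repository in place of openmrs/openmrs-core and the hypothetical bahmni fork, and all branch names, ticket ids, and file names are made up for illustration:

```shell
#!/bin/sh
# Sketch of the "backport fork" flow: land the feature upstream first,
# then cherry-pick it onto a downstream branch cut from the release line.
# All repo/branch/ticket names here are hypothetical.
set -e
cd "$(mktemp -d)"
git init -q core && cd core
git config user.email dev@example.com
git config user.name Dev

git commit -q --allow-empty -m "shared history"
git branch 1.11.x                    # the slow-moving release line

# 1) Implement the feature on the main branch so it lands in a future release.
echo "new feature" > feature.txt
git add feature.txt
git commit -q -m "TRUNK-XXXX: add new feature upstream"
feature_sha=$(git rev-parse HEAD)

# 2) Cut the downstream fork's branch from the release line and backport.
git checkout -q -b bahmni-1.11.x 1.11.x
git cherry-pick -x "$feature_sha"    # -x records the upstream commit id

# 3) When Platform 2.0 ships, abandon bahmni-1.11.x and return to mainline.
git log --oneline bahmni-1.11.x
```

The `-x` flag matters here: it stamps each backported commit with the upstream sha, so it stays obvious which fork commits already exist on the main line when the fork is eventually abandoned.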

Now, the Bahmni team is very committed to not forking openmrs-core, so I know they don’t want to do this. But I’m wondering if maybe we should be celebrating the “backport fork” approach as good behavior.

Currently we have a perverse situation: the faster an implementation or downstream project is creating new features, the less likely its developers are to contribute them to openmrs-core. E.g. if you are doing one major system upgrade each year, then it doesn’t cost much to wait for a once-a-year release of the OpenMRS platform. But if you’re releasing monthly (or continuously!) then your only option is to implement everything in modules. My personal experience was that this caused me to stop contributing to openmrs-core, even when I was writing tons of code. (And I think the same happened for Mark and Mike.)

So, my proposal is that:

We should encourage active implementations and downstream projects to create temporary forks, and this will lead to more contributions to openmrs-core.

I had the same thought yesterday here: I have created the 1.12.x branch in openmrs-core (off of 1.11.x) - #9 by shruthidipali

Some other things we could consider that may be related:

  • similar to what I’m saying above, but have a “bleeding-edge” branch/fork that is shared. E.g. multiple organizations that all want to be running the latest release, and also want features fast, can coordinate to backport their desired features to someplace shared. That way they don’t feel like they are forking alone.
  • support much more frequent on-demand releases. E.g. make it so in this scenario, if Bahmni wanted to do the work, they could have gotten this feature in an unscheduled 1.12 release. (This would require more automation around the release process.)
  • adopting gitflow
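
For the gitflow option, the branch layout would look roughly like the following sketch (throwaway local repository; all branch names are illustrative, following gitflow's develop/feature/release convention):

```shell
#!/bin/sh
# Rough sketch of a gitflow-style layout: day-to-day work happens on
# feature branches off "develop", and a release branch can be cut on
# demand. All names are illustrative.
set -e
cd "$(mktemp -d)"
git init -q core && cd core
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "initial"

git checkout -q -b develop                  # shared integration branch
git checkout -q -b feature/faster-release   # one feature branch per change
git commit -q --allow-empty -m "implement the feature"

git checkout -q develop
git merge -q --no-ff feature/faster-release -m "merge feature"

# An unscheduled release is just a branch cut from develop when needed.
git checkout -q -b release/1.12 develop
git branch --list
```

The relevant property for this discussion is the last step: because releases are branches cut from an always-integrated `develop`, an "on-demand 1.12" stops being a special event and becomes routine.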

+1 to forks, temporary or permanent! Isn’t that why we moved to a distributed model (git)? I think it’s a good thing if people can fork and get things done that are important to them. Impact on health care is what should matter… no?

The point I would like to emphasize, at the risk of sounding like a broken record, is that we have to make the platform so awesome that people want to contribute upstream. There is nothing that can’t be done through our module architecture. So the question I’ve posted a number of times is: why would people contribute upstream? In my understanding, awesomeness doesn’t come only through good features or the code base alone… in the modern, distributed, platform world, awesomeness comes from the value chain. As Tocqueville, or more recently Lin, put it, it now comes from social capital. Can OpenMRS be the marketplace where people get value from contributing back upstream?


Darius, seems like a reasonable approach to discuss further. To think out loud and try to come up with issues:

Right now PIH is working off 1.10.x. So suppose instead we start working off a 1.10.x PIH fork. Now, say I need a new feature, and am planning on coding it myself. In the old model, I’d realize it was too big a change to backport, and in that case I’d just develop it in a module instead of adding it to core. But now I would have the ability to add it to core. A good thing.

So, what I would do first would be to add it to the PIH 1.10.x fork, because our build pipeline would be built around this branch, and my development environment would be set up on it. Once I got it working, I could forward-port it to master. The issue here: at this point, if I ever wanted to link back up to the OpenMRS main line, I’d have to jump from 1.10 directly to 1.12 (unless I wanted to maintain a PIH 1.11.x fork as well, which I don’t). This seems problematic, but I think the idea with this model is that we might be able to get rid of point releases in the current sense, so it would be less of an issue?

The other issue with maintaining our own fork is that it could be very tempting to abandon “forward porting” to master. For instance, say we add a new feature X to our fork, go to forward-port it to master, and hit an issue we can’t resolve. We work on it for a bit, but eventually, because of time pressure or whatever, put it on hold as “something we will have to figure out how to merge back into master before we can upgrade to master”. Then we add a new feature Y to our fork, go to add it to master, and there’s an issue, though significantly smaller than for feature X. When we debate how much time to put into the forward port, there’d be an argument: “well, we aren’t even sure we are going to be able to get feature X into master, and if we don’t, we’ll never be able to use master, so we shouldn’t invest much time in forward-porting feature Y until we decide about feature X…” A slippery slope…

Also, although I’d be writing unit and component tests for each new feature, I’d really only ever be doing manual UI testing on our own fork.

Anyway, definitely a topic for OMRS2015!

Mark

Great discussion!

+1000 for automating any & all tasks that a computer can do.

Automating the upgrade process would help immensely, but would require that we deliver the platform as a “platform” (not just a WAR) – e.g., make the standalone production-worthy & scalable and/or use Docker.

I’ve been mulling this over.

Running custom releases sounds scary. What sounds like a good idea up front often turns into a major headache. For example, when we’ve done this with other projects (Hibernate or Liquibase), the downstream costs have outweighed the benefits. I’ve seen the same thing in local projects within Regenstrief, where these effectively become long-lived branches and we end up spending way more effort trying to dig ourselves out of the hole.

But what I think you’re actually describing is “feature backporting.” In other words: how can we contribute to openmrs/master and then cherry-pick features to run locally? Backporting features to maintenance branches doesn’t sound broken. Backporting features only locally, as you describe, sounds like it could become a maintenance nightmare for implementations, especially given that modules offer a safer alternative. Perhaps some combination of making it easier to cherry-pick features for interim releases (e.g., what we’re doing with 1.12) and trying to use feature toggles more often would help.
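
A sketch of that cherry-pick-plus-toggle combination (throwaway local repository; the branch name 1.12.x, the property file, and the toggle key are all made up for illustration): the feature lands upstream guarded by a toggle, and only that one commit is pulled onto the interim release branch.

```shell
#!/bin/sh
# Sketch: contribute the feature upstream first, then cherry-pick the
# single feature commit onto an interim release branch. All names are
# illustrative.
set -e
cd "$(mktemp -d)"
git init -q core && cd core
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "initial"
git branch 1.12.x                            # interim release line

# Feature lands on the main branch, guarded by a (hypothetical) toggle.
echo "feature_x.enabled=false" > runtime.properties
git add runtime.properties
git commit -q -m "add feature X behind a toggle"
sha=$(git rev-parse HEAD)

# Pull just that commit onto the interim release branch.
git checkout -q 1.12.x
git cherry-pick -x "$sha"
```

The toggle is what makes the cherry-pick safe to ship early: the interim release carries the code, but the behavior stays off until an implementation deliberately flips the property.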


A lot to think about. Pretty heavy for typed conversation (like Talk or email).

BTW… “forking” is the first step in our current forking workflow.

From what I have seen, implementations have had stricter deadlines than the general OpenMRS platform and community-supported modules. So whatever approach we take, if it does not make it easier for them to contribute directly, they will just continue to do things in their modules or custom forks. Just for the record, in the reference application, we have had to manually copy (instead of cherry-picking) a number of features from PIH modules.
