OpenMRS has grown tremendously over the past 10 years. However, as the number of implementations grows, it is increasingly difficult for any organization to understand how its needs are being addressed by the software development process.
It would be helpful for OpenMRS to provide some guidance on how implementation needs can be recorded and prioritized for placement on the product development roadmap. This would encourage implementations to share their invaluable input, and help the OpenMRS community serve their needs better.
Good question @ayeung. I’m glad you asked it! I’d love to start a dedicated Implementers call where we could hear about the needs of implementers and use that input to work through how things are prioritized into the development roadmap. I’m also open to other suggestions! What other creative ways could we use to get implementers to give us their input? @jthomas Is there a call dedicated to this kind of use?
Couldn’t agree more. This is a priority in our operational plan for the platform: “Detail how the platform road map is prioritized” and “Publish how users of the platform can contribute to the platform road map.” It’s also why we’ve positioned @janflowers to help lead implementers in having their voice heard. We’d like to evolve from developers trying to generate priorities by polling implementations for their needs toward implementations driving priorities.
I’ve seen other communities try to send representatives to meet with implementations, implementers, and their users. This has to be a role where someone reaches out to see what implementers are doing and asks their users what they are missing. I think it’s quite difficult to participate in a community when you are busy putting out local fires.
Thanks for asking this extremely important question. In fact, it’s one of the most important questions we must resolve if the OpenMRS Community is going to remain relevant and a reliable source of software & services for our customers.
The great news, as seen above, is that @janflowers is doing a lot of thinking on these topics, which is key to our long-term viability & growth. We should all do whatever we can to support her efforts, and to find people to help in the cause.
The other good news is that requirements management is a practice that is well-documented and well-understood in the software engineering world, so we don’t have to re-invent the wheel; we just have to make it work for us.
I would recommend that with Jan’s leadership, we assemble a team of people to focus on understanding what the five stages of Investigation, Feasibility, Design, Construction and Test, and Release mean to us. We already follow some minimal/basic process in the Design and Release phases, but, importantly, we don’t have a documented process for passing feedback from the Release phase back to the earlier phases of a subsequent release. (This is in part what Ada mentioned above.)
I believe we also need to do much better with traceability as described in the Wikipedia article above. If our customers don’t understand how and when their ideas are getting implemented (or not!) in a particular release, we will appear as non-responsive to their needs.
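To make the traceability idea concrete, here is a minimal sketch of what a traceability record could look like, so an implementer can always ask "what happened to my request?" This is purely illustrative: the requirement ID, ticket key, source, and release below are hypothetical examples, not real OpenMRS artifacts, and nothing here reflects an actual community tool.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a minimal traceability record. The IDs,
# ticket keys, and release numbers below are made up for illustration.
@dataclass
class Requirement:
    req_id: str                 # implementer-facing identifier
    summary: str
    source: str                 # which implementation asked for it
    tickets: list = field(default_factory=list)   # e.g. issue-tracker keys
    target_release: str = "unscheduled"
    status: str = "proposed"    # proposed -> accepted -> in-progress -> released

def trace(req: Requirement) -> str:
    """Render a one-line answer to 'what happened to my request?'"""
    links = ", ".join(req.tickets) if req.tickets else "no tickets yet"
    return (f"{req.req_id} ({req.source}): {req.status}, "
            f"{links}, target: {req.target_release}")

r = Requirement("REQ-042", "Offline sync for rural sites", "Example Site",
                tickets=["TRUNK-1234"], target_release="2.1", status="accepted")
print(trace(r))
```

Even something this simple, kept up to date and published, would let customers see how and when their ideas move through releases.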
Finally, I also strongly support the idea from @sunbiz about co-locating developers with implementations for extended periods of time. Those co-placements should also rotate – in other words, a “serious” (high commitment) developer-contributor should have exposure to as many different settings as possible. I realize not all of our contributors will be able to accept this opportunity, but I think we’ll be surprised at the numbers willing to take on the challenge. I hope the engineering teams will include this idea in their requests for funding to our fiscal sponsor(s) as we get more formalized about the fundraising process in 2016.
We need lots of people to do this work! If this kind of stuff is interesting to you, please speak up and make yourself known to @janflowers, or to me!
We also need many more ideas, so please continue to reply to this topic with your thoughts … we want to hear from both engineers with experience in these areas, as well as hear from our implementing customers who have good ideas to share about making our software releases more valuable for them.
Weekly Implementers and University meetings lose their charm quickly, probably because it takes too much effort to organize impactful topics and they demand too much weekly time from already busy schedules.
However, I would love to return to having occasional meetings. Monthly? Quarterly? With the recent discussion about Nutrition, this might be a good topic. I’d also be interested to hear from other active implementations to understand their direction and platform. Are people building on 2.0 or Bahmni? It would also be great to hear more about country initiatives (e.g., Philippines, Rwanda, Kenya, et al.).
The need for a feedback cycle and a traceability matrix (as well as what Michael notes above about requirements management) is critical. While assembling a team would be great, it may help to write a white paper describing what we want from requirements management (I am not sure we need to detail all five stages), and then ask for feedback from the community. Having developers co-located is a great idea, and I think that is already happening in some places. @jan, I can help you with this if you want.
We need a tool for requirements and configuration management. I assume there is one already.
At one point, we tried the idea of voting on “tickets”. I still like this idea. If you’re really interested, you can follow a ticket and even test it as it gets implemented. But it seemed the top-voted tickets frequently weren’t the items that actually got worked on. I did a quick search to confirm this. Of the 18 tickets closed in the past 4 weeks for Trunk, RA, HTML Form Entry, and the Reporting Module, only 1 had any votes at all. This obviously isn’t working, perhaps because the programmers aren’t working on the up-voted tickets, or because implementers don’t know to vote on tickets, or are intimidated by the process.
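For what it’s worth, a check like the one described above can be scripted. Below is a hedged sketch in Python: the issue dicts mimic the general shape of an Atlassian JIRA search response (`fields.votes.votes`), but the ticket keys and vote counts are made up for illustration, and this is not the exact query that was run.

```python
# Hedged sketch: count how many closed issues attracted any votes,
# assuming issue dicts shaped like a JIRA search response. The sample
# keys and vote counts below are fabricated for illustration only.
def voted_share(issues):
    """Return (number of issues with at least one vote, total issues)."""
    voted = sum(1 for i in issues if i["fields"]["votes"]["votes"] > 0)
    return voted, len(issues)

closed = [
    {"key": f"TRUNK-{n}", "fields": {"votes": {"votes": v}}}
    for n, v in [(101, 0), (102, 0), (103, 3), (104, 0)]
]
print(voted_share(closed))  # with this sample data: (1, 4)
```

Running something like this periodically against the real tracker would make the gap between voted and worked-on tickets visible instead of anecdotal.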
For larger roadmap-type features, I would recommend an annual community call for TRUNK and a twice-yearly call for the Reference App. The call should be coordinated with the development/release cycle, as you decide what goes into the next release. Let anyone interested come and put in their two cents. Document the call on the Etherpad, and allow implementers to add +1 next to features of importance to them. This can be done after the call for those who have connectivity, timezone, or scheduling conflicts.
In practice, I find what works best is to be the “squeaky wheel” and continually bring up issues in forums, calls, conferences, etc.
Thanks for trying to be pro-active and remain connected to the implementers!
This sounds like a valuable idea, so I’d stress the last line about including this in a request for funding, since it won’t happen without funding. I’d urge we consider this funding not only be used to pay for the co-location, but if there are qualified volunteer developers who are interested, also to pay a contract salary/scholarship as well.
I think it’s important that we improve how OpenMRS prioritizes issues (and make that visible to the community), but it’s all for naught if there aren’t enough resources to action those priorities. If there were a better roadmap, how many developers do we think we have who could spend a significant portion of their time focusing on that roadmap? I know that upper management at my organization is going to insist that I focus on our priorities regardless of the OpenMRS roadmap. Maybe that’s short-sighted, and it may not be the same for all organizations, but having more independent funding streams that OpenMRS can direct towards its roadmap gives OpenMRS more control.
(I’m not really involved in the funding plans and maybe I’m beating a dead horse, but I think this is crucial).
Funding is a critical issue. We are trying to address it, though it is far from solved IMO. There is a catch, though: documenting what we need for funding means that we can document the roadmap, including the impact of the development work.
I like what James suggested about a call where the prioritization happens: the call is planned for every 6 months; prior to the call, the proposed roadmap functionality is available somewhere; and there is a way to give input even if you can’t be on the call. Decisions are made at the call (since each request will include a level of effort (LOE) that can be evaluated to figure out what is reasonable to commit to). Implementers could be at the meeting. This is what normally happens in a change control board, or whatever IT review process is being used, so it could help us be more deliberate and transparent.
A fair point, @terry, that having a documented roadmap is helpful, if not necessary, in securing funding. (Hopefully it’s enough that we can show a documented roadmap driven by implementation needs and the contents of the roadmap won’t have to be modified to fit the funding streams, but that’s another issue.)
IMO it’s important, then, to acknowledge at these prioritization meetings that increased funding is likely required to make all this happen, and that a key motivation for the roadmap is to aid in getting funding. I wouldn’t want implementers to invest time in a new roadmap process and then feel doubly frustrated six months later when the items on the six-month roadmap still aren’t complete.
I think the current limited team of “core” OpenMRS devs has enough work just keeping the ship afloat, with tasks such as merging and reviewing pull requests, making sure the build pipeline stays up and running, handling code debt (the 2.0 API refactoring), moving already-established priorities forward (like OCL) and community involvement (answering questions on Talk, GSoC) that there really is very limited time for implementation-driven features unless resources are added or some of these other above items are de-prioritized.
It might also help to know some of the history that has been touched on in above comments:
Survey-based road map, updated 1-2x/year. In our early attempts, at least annually (sometimes twice a year: once at our implementers meeting and once at a leadership meeting), we would survey as many implementers as we could to get their top priorities ahead of time, and at the meeting we’d turn those into a road map.
Pro: Everyone had a chance to contribute
Pro: It got semi-regular attention and we had an informed road map
Con: Implementation needs & priorities changed far faster than a 6-month period (sometimes weekly or monthly), so by the time we were a few months into our “road map,” we could be working on things that were no longer a priority.
We turned to voting. In an attempt to address the moving target of priorities, we turned to vote-based prioritization a few years ago and created this wiki page to summarize new & most-voted issues (now broken since it uses outdated macros).
Pro: Anyone could vote
Pro: Priorities could be based on true need
Con: People complained that voting unfairly favored larger organizations (more people to vote on issues)
Con: It was harder to organize meaningful/themed sprints, and development felt “rudderless”
Road Map Committee. To try to compromise between a 1-2x/year road map and a purely vote-based road map, I tried to get people to participate in a “Road Map Committee” with the idea that we’d meet more regularly and create a process that was informed by surveys, votes, and whatever formula/process we could invent to drive a responsive road map in a fair & agile manner.
Pro: Tried to combine the best parts of our prior attempts
Weekly Project Management. After the road map committee approach flopped, we landed in a place where leadership wasn’t regularly polling implementations (we still tried to do this in occasional Dev Forums), we weren’t satisfied with purely vote-driven prioritization, and, for a number of reasons, the number of people invested in driving a road map for the reference application was declining (e.g., PIH, AMPATH, Kenya-EMR, Bahmni, and others using various flavors of front-ends). So, development was, once again, rudderless. In response, I set up our weekly Monday Project Management call, which has a handful of regular attendees and occasional attendees representing release management or specific implementation needs. We’ve used this to manage our current technical road map.
Pro: Provides at least some oversight and direction for development
Con: Limited participation, so not fully representative
Con: Focused more on technical project management, so it neither meets all implementation needs nor serves as the “solution” for community prioritization.
Given the above, a twice-per-year implementation-driven prioritization would give us back what we were getting when developers & leadership surveyed implementations and would be extremely helpful in informing a technical road map, but (for the reasons & history above), I’m not convinced that a twice-a-year process can fully suffice.
Ideally, we’d have a well-communicated process informed by multiple factors and updated frequently (at least monthly):
Strategic needs (e.g., considering the long-term vision, partnerships, etc.)
Available resources (e.g., high priorities that lack resourcing may move slower than lower priorities that are fully resourced or a lower priority need that fits neatly within the timing & scope of a GSoC project)
Development needs (e.g., a high priority need that is poorly defined may take a back seat to sprinting on a lower priority need that has a theme + tickets ready for work + BA support)
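As a purely illustrative sketch (not a community policy or anything we’ve agreed on), the factors above could be combined into a simple weighted score. The weights and the 0-1 scales here are my own assumptions, meant only to show how resourcing and readiness can legitimately outrank raw strategic importance:

```python
# Illustrative only: a toy composite score over the three factors above.
# The weights and the 0-1 input scales are assumptions, not policy.
def priority_score(strategic_fit, resourcing, readiness,
                   weights=(0.4, 0.3, 0.3)):
    """Each input is a 0-1 judgment; returns a 0-1 composite score."""
    ws, wr, wd = weights
    return ws * strategic_fit + wr * resourcing + wd * readiness

# A well-resourced, well-defined GSoC-sized item can outrank a
# strategically vital but unresourced, poorly defined one:
print(round(priority_score(0.5, 1.0, 1.0), 2))  # 0.8
print(round(priority_score(1.0, 0.2, 0.2), 2))  # 0.52
```

Whatever formula (if any) we settle on matters less than publishing it, so that implementers can see why their item landed where it did.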
As I mentioned above, coming up with a process like this for the Platform is part of the objectives in our operational plan. How well we can use implementation priorities to drive a web application (reference application or community distribution) will depend on the extent to which we can align the community on web-based conventions (a separate conversation I will be pushing for imminently).