Hi all! I have been thinking a lot about security and wanted to discuss some new initiatives focused on our community approach to security. We need a pathway for people who want to work on security issues to be able to do so.
Problems we face:
Volume and variety of alert sources (emails to security@openmrs.org, GitHub security researcher reports, code scanning on almost every repo). This is a constant barrage, and we need to figure out what can be safely ignored, ensure that a decision is made on each of these items, and resolve unnecessary alerts.
Triaging the remaining issues and ensuring that critical issues are flagged for the security community to resolve as soon as possible. This includes keeping a tracker, visible only to the security team, updated with priorities so that issues actually get resolved.
Resolution: We need more devs with advanced knowledge of OMRS to come together and ensure the platform remains as secure as possible. We need dedicated time to address security issues as a partnership across the community.
Here are some ideas we have on how to resolve some of these issues:
Create a Guide: “How to help resolve Security Issues in OpenMRS” to help onboard devs who might want to focus on security issues.
Monthly Security Group session (on Zoom): go through the latest vulnerability reports, decide what needs to be addressed, estimate the level of work and the severity of each vulnerability, and allocate tasks to team members to resolve the highest-priority issues.
Build addressing security issues into dev levels (e.g. set a requirement to have contributed security fixes before reaching level 4 or 5).
Have the security team report back to the Product Leaders Co-op, highlighting the highest-priority issues and making a plan to resolve those that don’t have adequate resources assigned.
Automatically add all dev 4s and 5s to the security group: We don’t want to publish the vulnerabilities until a fix is released, but we need eyes on the issues as soon as possible. This would be a good way to ensure that skilled devs who know the platform well are given the best opportunity to contribute fixes.
Please contribute your ideas to this thread on how we can keep OMRS and our private data secure!
We had an OMRS security Slack channel in 2021 that focused on addressing security issues and fixes, led by @isears, @ibacher, and @dkayiwa. I believe it’s time to revamp that channel as it contains valuable information that we can build upon.
@caseynth2 this is a very good initiative and I like your suggestions, with a few modifications as below:
I would be more cautious about this, just in case a bad guy intentionally climbs their way to these dev stages with ill intent. Currently, we manually choose representatives from different organisations and the community. I don’t think this selection process is so much work that it’s worth the risk of automatically including every /dev/4 and /dev/5.
I remember @mksd recently sharing a security issue of this kind on Slack; I just don’t remember which thread it was.
Can we lower this to /dev/3? I think some simple security fixes can be done by a /dev/3
In addition to your great ideas above, I would add that it would be nice for us to once in a while look for potential security vulnerabilities ourselves: some sort of penetration testing done by us instead of an external commercial company.
So, first, I’d like to acknowledge that there has been a huge volume of work addressing security issues in the past year even disregarding the penetration test.
More rigorously applying permission checks
Implementing a permission model for global properties
Reviewing several REST endpoints to ensure appropriate privileges and authentication
Moving to a whitelisting mechanism for unauthenticated URLs
Implementing an auditing framework
Moving to XStream whitelisting (a general sketch of this pattern is below)
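For anyone onboarding onto this kind of work, here is roughly what the last item above means in practice. This is a minimal, hypothetical sketch of XStream type whitelisting, not the actual OpenMRS implementation; the `org.openmrs.**` wildcard and the specific allowed classes are illustrative assumptions:

```java
// Minimal sketch of XStream type whitelisting (illustrative only; not the
// actual OpenMRS code). The idea: deny all types by default, then explicitly
// allow the ones we expect to deserialize.
import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.security.NoTypePermission;
import com.thoughtworks.xstream.security.NullPermission;
import com.thoughtworks.xstream.security.PrimitiveTypePermission;

public class XStreamWhitelistSketch {

    public static XStream createSecuredXStream() {
        XStream xstream = new XStream();

        // Flip the default from "allow everything" to "allow nothing".
        xstream.addPermission(NoTypePermission.NONE);

        // Re-allow the harmless basics: null, primitives, and a few JDK types.
        xstream.addPermission(NullPermission.NULL);
        xstream.addPermission(PrimitiveTypePermission.PRIMITIVES);
        xstream.allowTypes(new Class[] { String.class, java.util.Date.class });
        xstream.allowTypeHierarchy(java.util.Collection.class);

        // Whitelist only the application packages we actually serialize
        // ("org.openmrs.**" is an assumed wildcard for illustration).
        xstream.allowTypesByWildcard(new String[] { "org.openmrs.**" });

        return xstream;
    }
}
```

The key design point is flipping the default from “deserialize anything” to “deserialize nothing unless explicitly allowed”, which is what closes off arbitrary-class deserialization attacks.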
I mention these because not only are all of these things great, but they are really what we should be thinking of when we ask what “security issues in OpenMRS” look like. In fact, I’d really advocate for us not to think about this in terms of “security issues” at all; fundamentally, the continuous work here is ensuring that the OpenMRS Platform and the components of the reference application / EMR distribution provide the important security properties of an EMR, along with tools that make it easy for developers of both community-housed and third-party code to write code that is secure by default.
So a few things:
I don’t think the volume of alerts for OpenMRS is particularly overwhelming. In a certain sense, we have the opposite problem: communication via, e.g., the security channel or GitHub is so infrequent that the reports that do come in frequently get missed. (I’m aware of exactly two vulnerabilities in the application being reported in the last 12 months; we have substantially more issues in our infrastructure, but identifying those and farming them out is much more complicated.)
What is also problematic is that in our various attempts over the years to use Sonar, the output has been effectively ignored.
The main goal of a security group should not be reviewing vulnerability reports; again, this will quickly become a rather useless meeting. However, I would be in favour of using, say, one of the TAC calls a month to discuss any security improvements we can make across the application, which could, as and when required, address any vulnerability reports.
Honestly, every developer at /dev/3 and above should be empowered to deal with security issues that arise. I’m hesitant about making it a requirement because I’m worried that this will just create noise (e.g., making it a requirement for a /dev/4 to resolve a security issue kind of creates an incentive for /dev/3s to introduce security issues that they can then fix).
I’m opposed to adding people just on the basis of dev level. If we wanted to create a similar “security level” structure to track people actively contributing to, or interested in, security issues, that would be more appropriate.
I strongly agree with the security concern, because security is usually one of the last things implemented, despite being of great importance in any platform. As a cyber security expert I have been looking into this, and we can share inputs on the matter.
We should indeed dive into the conversation about these concerns and see a way forward. One of the greatest weak links I consistently identify is the user; if we have these conversations and even create awareness and trainings, then we can improve greatly.