Thanks @raff all our modules are now deploying snapshots to Nexus on merges of PRs and commits to the main repo.
@ssmusoke, how about you document it on the wiki in return?
@raff Oh yes, I was actually doing that. It's usually how I pay back for any support
@raff Version 1 for review https://wiki.openmrs.org/display/docs/Setup+Travis+CI+to+Deploy+Snapshots+to+Nexus
This thread is 4 years old now, but I found myself rediscovering this as I’m trying to set up some new CI processes for our Rwanda EMR modules and ensure these get built, tested, and deployed automatically to Maven in a manner that is accessible and transparent to everyone involved.
Yesterday I spent some time moving our rwandareports module off of our internal PIH Bamboo server and onto Travis-CI. Thank you so much to @ssmusoke and @raff for the posted instructions, and existing modules that I could reference for this.
As you can see from the build history, it took me some fiddling to get this right, but I managed to in the end. (Note: the 401 Unauthorized Error that comes from JFrog is such an unhelpful error. They really need to improve that.) In my case yesterday, I finally tracked my issue down to the URL. Using the mavenrepo URL failed to allow deployments for me. Once I tried changing directly to the jfrog URL, deploying my snapshot builds worked. Hopefully this saves someone the hours I lost.
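For anyone landing here later, the relevant bits of my .travis.yml ended up looking roughly like the sketch below (the settings file path and job structure are illustrative, not my exact config). The Artifactory URL itself lives in the pom/settings file, which is where the mavenrepo-vs-jfrog change mattered:

```yaml
# Rough sketch, not the exact committed config: build every commit,
# and deploy snapshots only for pushes to master. Credentials come from
# encrypted Travis environment variables referenced in a custom Maven
# settings file (the file name here is a placeholder).
language: java
jdk: openjdk8
script: mvn -B clean verify
deploy:
  provider: script
  script: mvn -B deploy -DskipTests --settings .travis/settings.xml
  on:
    branch: master
```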
That aside, I’m now interested in seeing whether it makes sense to get our entire Rwanda EMR build pipeline set up using Travis, and then using this experience to determine whether we might move to Travis more broadly at PIH. As described above, Bamboo still likely has its place in our CI infrastructure, and won’t get replaced overnight, but I’m interested in seeing how much we can migrate over time.
Is there any active work on this within the OpenMRS community? Have we gained any further consensus on whether moving from Bamboo to Travis makes sense for us and, if so, whether there is any interest in doing so? Obviously there is a question of resource investment and whether this is a worthwhile thing to spend scarce resources on, but it would be good to understand whether this is strategically what we would like to do, given resources.
Good point @mseaton.
While it may be better to completely switch to Travis for deployments and builds, the issue of scarce resources is a reality.
I would simply say:
if it ain't broke don't fix it
As a follow-up to my own reply here, I also want to quickly highlight that I spent some additional time looking at the newer Github Actions capabilities, as an alternative to Travis, and found a lot of things to like there too (though like any newer solution there are still areas to improve).
I switched our RwandaEMR module over to use GitHub Actions for CI, with two jobs: one tests all PRs, and the other deploys snapshots to Maven for all commits pushed to master.
The code for each of these is a single file in the codebase, and once committed these run automatically in GitHub. You simply navigate to the “Actions” area of your GitHub repository to view them. Because it is built into GitHub, there is very little delay between commit and job initiation, and there is no need to authorize an additional app (e.g., Travis) to access your repository.
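For illustration, here is a minimal sketch of what such a workflow can look like (action versions, secret names, and the settings file path are assumptions, not necessarily what we committed). The two jobs are shown in a single file for brevity, though ours live in separate files:

```yaml
# Illustrative sketch only: one job validates PRs and pushes, another
# deploys snapshots to the Maven repo only on pushes to master.
name: Build and Deploy
on:
  push:
    branches: [master]
  pull_request:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v1
        with:
          java-version: 8
      # Full build and test run for every PR and push
      - run: mvn -B clean verify
  deploy-snapshot:
    # Only deploy snapshots for commits pushed to master
    if: github.event_name == 'push'
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v1
        with:
          java-version: 8
      # Secret names and settings path are placeholders
      - run: mvn -B deploy -DskipTests --settings .mvn/settings.xml
        env:
          ARTIFACTORY_USERNAME: ${{ secrets.ARTIFACTORY_USERNAME }}
          ARTIFACTORY_PASSWORD: ${{ secrets.ARTIFACTORY_PASSWORD }}
```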
I’ll try to put together more thoughts in a separate post about our experience with this if we move ahead with it, but for now just wanted to get this out there for others particularly if they are trying to do similar things or if they have experiences to share.
That’s cool @mseaton, thanks for sharing!
In particular I’d be eager to learn about your experience with supporting the Maven release process with GitHub Actions.
Migrate to GitLab. GitLab CI can do everything Bamboo can.
Can even replace Artifactory!
I’ve been extremely busy with… the whole thing going on, and I’ve only just been able to come here now.
Hm. That sounds like something is wrong on the nexus setup. Can you please try ‘https://mavenrepo.openmrs.org/nexus/content/repositories/modules-pih-snapshots/’ (with the slash at the end) to see if you have the same problem? I’m curious about your setup, because it’s supposed to work.
Travis is a cloud CI. It works extremely well for independent artefacts (one pipeline that doesn’t interact with others). I particularly dislike having to cut a different branch to create a release (as it breaks the whole definition of a pipeline), but it does have its purposes.
It’s definitely not going to happen any time soon (at the very least), as we have snapshots that trigger snapshots that trigger snapshots. We don’t have independent pipelines, which can lead to certain complications. We also have the deployment views, which help us a lot during releases and after, as we have static environments. It’s probably not worth moving at all for now.
Which doesn’t block you at all from moving your own builds to Travis or any other cloud CI!
Bullshit. Every CI can run arbitrary commands; that’s what they are, remote code executors. Each CI tool has its own strengths. Some have better triggers in place depending on how you like to run your deployments.
Thanks so much for the detailed response. A few quick follow-ups…
Well, as you can see above, I’ve now moved from Travis to GitHub Actions for my current experimentation.
At this point, my goal is simply to set up a CI structure that will work best for a country-level distribution project, with minimal infrastructure that is owned and administered by a single organization and which is transparent and accessible for all to adopt. OpenMRS could be that if we want OpenMRS Bamboo to be used for that purpose, but it is not clear that’s what OpenMRS Bamboo is intended for, and also this seems like an opportunity to experiment with these CI tools that are more intrinsically linked to the source code.
As for any broad-scale OpenMRS migration to Travis, I’m not pushing for that… just trying to take the temperature of the room, given that nearly 4 years have passed since @raff wrote up this post and he and @ssmusoke provided working examples for several modules. As you say…
…that is probably true. As you detail in your blog, I think another solution would need to have clear, compelling benefits that are greater than the cost of migration, which (given our resources) is probably close to impossible at this point. But if some groups start experimenting with new approaches and find use cases for which they work quite well and are compelling, then this can be informative to the community and might provide opportunities for us to migrate opportunistically where the situation makes sense.
My guess is that there are likely at least some modules in OpenMRS Bamboo that are not part of any kind of dependency building pipeline, and only exist there to build and test upon each commit, and (maybe) to deploy snapshots and releases to Artifactory. These might be good candidates to pull out of Bamboo and into a Travis-based process, and see how that impacts things.
Speaking of Travis, I’m going to show my ignorance here, but can you clarify what you mean by your comments here?
Thanks again for your thoughts, and no rush on responding, we are all busy!
Thanks so much for the detailed answer. It’s a lot easier to debug.
The URL you are using goes to
/artifactory/. The URL the mavenrepo is redirecting goes to
Transfer failed for https://openmrs.jfrog.io/openmrs/modules-pih-snapshots/org/openmrs/module/rwandaemr/2.0.2-SNAPSHOT/rwandaemr-2.0.2-20200413.022553-2.pom 401 Unauthorized -> [Help 1]
What does it mean? I don’t know. It’s not that I really know what I’m doing here, but it could be related to the permissions I gave to your CI Artifactory user. Eventually I will make sense of it, but I don’t think it’s a bad idea to keep the JFrog URL - at least for the time being.
I raised ITSM-4271 to investigate that in the future. The downside of using the JFrog domain URL is that we are ‘hardcoded’ to it during Maven releases, so it’s some sort of vendor lock-in (for some use cases).
The definition of a pipeline is: you check out the code, you build an artefact (a Java .war file, a Docker image) and you keep promoting that artefact through your pipeline. You never rebuild the artefact. The further you go up the pipeline, the more confidence you have that you can deploy it safely to production.
Each pipeline will release to production independently of others.
The reason why you never rebuild your artefact is that you are completely isolated from environmental changes (build environment; upgrades to dependencies or platform). We all know how rebuilding an artefact (from the same commit) on a different day can lead to different results.
So how do you do this in Travis? You commit to master, you run unit tests, build your artefact, run integration tests, run manual tests, UAT tests, whatever. When you are happy with the result, you decide to deploy to production…
… you update branch ‘production’ to point at that commit. That will, guess what, create ANOTHER artefact. So you have to redo every single one of your tests, because you are dealing with a different artefact altogether.
It’s small, but it’s a risk you are taking.
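To make that concrete, here is an illustrative (hypothetical) .travis.yml fragment for that branch-based workflow. Because the same config runs on every branch, pushing the tested commit to ‘production’ re-runs the whole build and produces a brand-new artefact, instead of promoting the one you already tested:

```yaml
# Illustrative fragment only. The build runs again from scratch on the
# 'production' branch, so the deployed artefact is NOT the artefact that
# went through testing, even though both were built from the same commit.
script: mvn -B clean package
deploy:
  provider: script
  script: ./deploy-to-production.sh   # hypothetical deploy script
  on:
    branch: production
```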
You don’t have to follow that workflow. I.e., you can add steps to Travis to build and deploy the generated artefacts to where you want (e.g., I have some Travis builds that create a docker image, push it to Docker Hub and then tag the “release” on GitHub, though that tag only means “this was the source used to build release x”).
That said, I agree that Travis isn’t the best tool to use when you need to coordinate multiple artefacts and we would need to find a compelling reason to move off Bamboo (having all builds on a single dashboard with a single reporting infrastructure is nice).
Thanks @cintiadr I think I understand what you mean better now. I don’t know…regardless of whether we use Bamboo, Travis, Github, or another CI server, I feel like we can implement pipelines. I’m not yet seeing where one limits us more than others. I do think Bamboo will make certain pipelines easier to manage.
Unfortunately, with our use of Maven snapshots, our release processes will always lead to at least one new commit pre-release (to change the pom from SNAPSHOT to non-SNAPSHOT, update the SCM tag, etc.), and then will execute a completely separate build and test process from the newly created tag in GitHub in order to initiate the release. So at least for us in OpenMRS, we pretty much never release the exact artifact that has been tested, as you describe. But this seems to me to be a symptom of our usage of Maven snapshots more so than a particular limitation or quality of our CI tool.
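For context, the flow I’m describing is essentially what the maven-release-plugin automates. A rough sketch (version numbers and trigger are purely illustrative) of how it might run as a manually triggered CI job:

```yaml
# Hypothetical sketch of the release flow described above. release:prepare
# commits the SNAPSHOT -> release pom change and creates the SCM tag;
# release:perform then checks out that freshly created tag and runs a
# *separate* build and deploy from it - hence the untested final artifact.
name: release
on: workflow_dispatch
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v1
        with:
          java-version: 8
      - run: |
          mvn -B release:prepare \
            -DreleaseVersion=2.0.2 \
            -DdevelopmentVersion=2.0.3-SNAPSHOT
          mvn -B release:perform
```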
Not to expand the scope of this thread, but there are a number of things one uncovers in our CI process that could be improved, once looking under the hood. One thing I have noticed is that many of our artifacts build against stable dependencies, and yet we have dependent builds in Bamboo (pipelines) that run through and execute whenever the projects they depend on are built.

For example, in Bamboo the HtmlFormEntry plan will kick off builds of HtmlFormEntry19Ext, HtmlFormEntryUI, ReferenceApplication Module, and ReferenceApplication distribution. Yet if you look at the actual referenceapplication distribution code, you will see that it depends on a stable version of htmlformentry - 3.10.0 currently. So there is really no reason it needs to rebuild just because a new version of htmlformentry 3.17.4-SNAPSHOT has been deployed. This doesn’t lead to problems per se, though it does create a large number of completely unnecessary builds, and reinitiates downstream processes that might be watching for new versions of the referenceapplication distribution, even though it has not changed at all.
It would seem the intent of the above was likely to ensure that when any downstream projects are updated to require a snapshot, the CI pipeline tests them properly. But there is likely a better way to accomplish this. What we should really be looking at is building or adding a “Maven Dependency Trigger” for each of our modules, rather than our current process of having parent projects blindly trigger child project builds.
This is one thing I have on my radar to enable in the Github CI jobs that I’ve started to spike on. I’m not sure if Bamboo has a trigger option like this available out of the box - I’ve personally only seen and used the repository polling trigger. I’m wondering if this is a new goal we might add to our SDK Maven Plugin - one that can run on a polling schedule, identify if any dependencies have changed (snapshots or otherwise) since the last build, and if so, kick off a new build.
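As a crude first approximation of such a trigger (purely a sketch - the schedule is a placeholder, and it rebuilds unconditionally rather than detecting which snapshots actually changed), a scheduled workflow could force-update snapshot dependencies and rebuild:

```yaml
# Hypothetical "Maven dependency trigger" sketch. The -U flag forces
# Maven to re-check remote repositories for updated SNAPSHOT dependencies.
# A real trigger would compare resolved snapshot timestamps against the
# last build and only kick off a rebuild when something actually changed.
name: dependency-trigger
on:
  schedule:
    - cron: '0 */6 * * *'   # every 6 hours (placeholder)
jobs:
  rebuild:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v1
        with:
          java-version: 8
      - run: mvn -B -U clean verify
```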
My understanding is that the reference application distribution is supposed to depend on snapshot versions of all modules, until the time of releasing. After releasing, like version 2.10.0 which happened a week ago, we are supposed to have again bumped all modules to the next snapshot versions. I guess that is why our CI process was set up that way.
This does strike me as a strange release process, precisely because of the issue @cintiadr identified, i.e., we move one set of artefacts into UAT, but then cut a release with… a completely new set of dependencies, with whatever changes have gone into master on each module added to the release. This can have bad consequences (as with including the COVID-19 concepts in the 2.10 RefApp).
My impression is that the more normal process across the industry would be to do a “feature freeze” at some point prior to a release and only build against module versions ready at the time of the “feature freeze” and only updating those modules to fix bugs identified in the release testing process.
The problem is that this workflow would require giving up OpenMRS’s allergy to “forking” (that is, creating version branches) and would work best with a more rigorous application of SemVer (i.e., we should be more willing to bump module major versions, default to bumping minor versions, and use patch versions to track… well… bug-fix releases).
Hi @ian - it isn’t as bad as all that. What you are describing does not happen to my knowledge. The versions that are tested are (or should be) the versions deployed. We don’t “just take the latest from master” unless that latest from master is what has been tested against (e.g., the latest snapshot). For core, there are release branches that are cut for all new minor releases (1.9.x, 1.10.x, etc.), each of which typically represents a pre-release alpha. We don’t do the same for maintenance branches, but haven’t had problems with this to my knowledge. For modules it is certainly possible for us to run into trouble if we aren’t careful.
@mseaton You’re right I was being a bit hyperbolic. What I meant to say is that if RefApp 2.11 was being developed with, say, HFE 3.11.0-SNAPSHOT the more usual thing would be to release it with 3.11.x, where x is a patch version released to fix whatever bugs were found while testing 2.11.0-SNAPSHOT, even if HFE has moved on to 3.12.0 (because new features were added during the release cycle). I do realise this hasn’t traditionally caused issues with the RefApp once it’s released, I’m just wondering if it’s the most efficient way of ensuring timely releases of the RefApp.
Of course, this may be looking at the RefApp the wrong way, i.e., that it’s some sort of product instead of a demo of the current capabilities of OpenMRS. The current processes work very well in that role.