One of the Bamboo agents ran out of disk

Yue, one of our agents, ran out of disk. Some builds might have gone red because of that.

I saw the alarm two days ago, but I thought I’d have a little bit more time…

Fortunately, I haven’t seen a red build because of this yet. Hopefully you’ll get time soon. Thanks @cintiadr :smile:

Just caught the red build :smile:. See here!

[INFO] Downloading from sonatype-apache:
The job exceeded the maximum time limit for jobs, and has been terminated.


That’s completely unrelated, for sure.

A machine running Bamboo jobs was at 100% disk usage for a couple of hours. That cannot possibly be related to a build failing in Travis.

Nope @cintiadr

I just caught another instance of a Travis build failure, and the message was (see here):

The job exceeded the maximum time limit for jobs, and has been terminated.

I was able to compile and build the changes locally without any errors, so I’m sure the failure wasn’t caused by the changes themselves. Somehow the job exceeded the maximum time limit (it ran for 49 min, when it usually finishes within 6–7 min). So there might be some issue with the agents. Could you figure out what’s going on here?

I cannot really tell you anything about Travis agents or how that company maintains their infrastructure.

What I’m going to ask is whether it’s normal for our builds in Travis to download all Maven dependencies instead of keeping the ~/.m2 folder cached. Is that by design?

Also, I’ve never had to investigate Travis builds, so I don’t know where to look for the timestamps. Was every single download slower than usual, or did Maven get stuck downloading the last one?
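(For what it’s worth: if the ~/.m2 folder really is being thrown away between builds, Travis supports declaring cached directories in .travis.yml. A minimal sketch, assuming the default Maven local-repository location:

cache:
  directories:
    - $HOME/.m2

That alone can cut several minutes of dependency downloads off each build, though the first build after the change will still be slow while the cache is populated.)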

@suthagar23 this is unrelated to OpenMRS infra. Travis sets a time limit on builds that don’t produce output for a while, and this can be worked around by adapting the Travis config accordingly.

Could you try this (on a branch) for openmrs-module-locationbasedaccess’s .travis.yml:

language: java
jdk:
  - oraclejdk8
script:
  - travis_wait mvn install -Dmaven.javadoc.skip=true -V -B


Thanks for the information @mksd. Let me try that out on a branch.