[RESOLVED] Nexus and Confluence: DISK SPACE IS LOW!


The server is currently running dangerously low on disk space. Please disable all builds (or disable the parts of the plans which deploy artifacts) until further notice.

Please also try not to upload any attachments to Confluence.

I think every single CI plan deploys some artifact to Nexus, and it’s not really feasible to disable every CI plan.

Can we please explore where the most disk space is being used and focus on addressing that specific issue with high priority?

Welcome to the price we pay for assuming unlimited disk space… I’m not even sure we can free space at the rate we’re using it. It has grown to 66.3 GB.

fwiw, I have disabled all the PIH Mirebalais plans, which do deploy artifacts.

@burke @terry This basically brings development to a screeching halt. But it is unfair to expect a single, unpaid volunteer to put out fires like this. What can we do to get other core OpenMRS programming resources to help out with this ASAP?

Also, in the interim, is there any possibility we can get more disk space from IU?

Thanks, Mark

There is literally no reason I can fathom for one service to use that much, so I wouldn’t even bother. To top it off, I’m not feeling well. This can be scripted, so it’s not a huge deal – I just need the green light and a decision on which JFrog service we want to use. I do not give out access to production servers readily. Decide what you want, and I’ll do it.

I see no reason that a trusted resource like Rafal or Wyclif should not be given access to production servers if they have bandwidth to help out.

Mark

I can handle it – just make a decision.

@burke @darius ^^ can we schedule a call or some other meeting to come up with a solution to this ASAP?

Mark

Why not do it here on Talk? Neither @burke nor @darius has access to that server. I’m not joining a call – too much already happens over the phone. This will be handled here.

I have 30 minutes at 9am ET then I’m free from 10:30am ET tomorrow, to figure out what our urgent solution is.

@r0bby, or someone else on infrastructure, please help us out with some details about where exactly the space is being used. I have some suspicions about quick fixes we can make, but we need some data to make informed decisions. :slight_smile:

I’m sick – I’m not hauling myself out of bed at 9am tomorrow. Handle this publicly – on Talk – the way open source actually works. I’m not going to sit and listen to a call recording either. If you actually want action taken, handle it here.

The space is being used by the sonatype-work directory – which is where Nexus stores artifacts – and it is still growing. There isn’t a “quick fix” – we need to move on this now. We won’t be able to do this for free, sadly. You’d need to reduce the size of that directory to under 20 GB for me to be remotely happy, and it would have to stay there.

I’m wondering about the details within that directory. I expect there are just a couple artifacts taking up most of the space, e.g. something built as part of CI for the reference application.

PS- can you give me ssh access to that server so that I can explore where disk space is being used? (Not at my computer at the moment, but will be later.)

```
--- /opt/sonatype-work/nexus/storage -------------------------------------------
                        /..
   51.3GiB [##########] /snapshots
    6.4GiB [#         ] /central
    2.9GiB [          ] /modules
    2.2GiB [          ] /releases
  626.6MiB [          ] /central-m1
  574.5MiB [          ] /thirdparty
  276.4MiB [          ] /public
  236.1MiB [          ] /contrib
   25.6MiB [          ] /java.net-m2
   16.6MiB [          ] /sakai-maven
  404.0KiB [          ] /apache-snapshots
  120.0KiB [          ] /codehaus-snapshots
  112.0KiB [          ] /google
   36.0KiB [          ] /java.net-m1
   32.0KiB [          ] /java.net-m1-m2
```
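(For anyone wanting to reproduce a breakdown like the one above without ncdu, a plain `du` sketch works too – the path is taken from the listing above; adjust as needed:)

```shell
# Per-repository disk usage under the Nexus storage directory, largest first.
du -sh /opt/sonatype-work/nexus/storage/* 2>/dev/null | sort -rh | head -n 20
```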

Based on this – I am going to nuke the central directory, as it’s a mirror.

Snapshots are taking up an INSANE amount of space.

Can’t all SNAPSHOTs for end-of-life releases be removed after performing an offline backup?

And where do you propose we make that backup to? We don’t even have space to tarball it.

I can maintain them on a Digital Ocean server Until something is ready.
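(Since there isn’t space to create a tarball locally, one option is to stream the archive straight to the remote box over ssh – a sketch only; `backup-host` and the destination path are hypothetical placeholders:)

```shell
# Stream the snapshots tree to a remote host without writing a local tarball.
# "backup-host" and /backups/... are placeholders for the Digital Ocean box.
tar -C /opt/sonatype-work/nexus/storage -cz snapshots \
  | ssh backup-host 'cat > /backups/nexus-snapshots.tar.gz'
```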

Diving deeper, it appears all that space is metadata – I’m gonna delete those – it’s all trash :slight_smile:


I nuked the trash directory, which frees up a lot of space. We’re fine for now. Going forward, we need to periodically empty the trash folder of older snapshots.