This is a bit off-topic, but the one major downside I’ve experienced with Jetstream is that it can add quite a bit of latency for those not in the US.
I don’t know how noticeable this is across systems, but the one place it’s currently a real pain point is dev3. Dev3 serves as the “default” backend and integrated system for developers working on the frontend. So by default, if someone checks out, say, openmrs-esm-patient-chart and runs `yarn start --sources packages/esm-patient-vitals-app` (the command we recommend for working on the Vitals app in this instance), the Vitals app itself gets served locally, but all data, metadata, and other apps are ultimately served from dev3.
Even reliable Internet in Kenya can have a base ping of around 300 ms to dev3 (this is pretty much what I observe when pairing with @dkigen), which means that each web request can take an additional 300 ms in network latency alone. (See https://tools.keycdn.com/ping for other ping comparisons.)
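Just to put that number in perspective, here’s a rough back-of-the-envelope sketch. The figure of five sequential request “waves” is a hypothetical assumption for illustration (real page loads vary), not a measurement of the patient chart:

```python
# Rough illustration: extra wall-clock time from network round trips alone,
# assuming requests resolve in sequential "waves" (e.g. HTML -> JS -> API calls).
# Server processing and transfer time are ignored.
def added_latency_ms(rtt_ms: float, sequential_waves: int) -> float:
    """Extra time spent purely waiting on round trips."""
    return rtt_ms * sequential_waves

# ~300 ms RTT to dev3 (per the ping figures above) and a hypothetical
# five sequential request waves:
print(added_latency_ms(300, 5))  # 1500 ms, i.e. 1.5 s of pure network waiting
```

The point is that a high base RTT multiplies across every dependent request in a page load, which is why it shows up as a real drag on development velocity rather than a one-off delay.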
So, in this case, we’re pretty actively slowing down the velocity of contributors from non-US countries. Of course, things work pretty well with a local build, but I worry that the overhead of asking frontend developers to set up a Java toolchain to run OpenMRS locally using the SDK would inhibit new contributions. I have considered proposing that we use some of the AWS credits to run a CloudFront distribution, but I haven’t looked into that too much.
So the idea of using CloudFront here isn’t that CloudFront magically speeds things up (although AWS’s marketing makes it sound that way). The slowness here is due to the number of network hops between, e.g., dev3.openmrs.org (hosted in Indiana) and, say, Kampala. With CloudFront, instead of dev3.openmrs.org resolving to the IP of a server in Indiana, it resolves to an Amazon “gateway” (edge) server closer to you (there’s an Amazon edge location in Nairobi, for example), which then uses Amazon’s network to move data between the server in Indiana (or wherever) and the gateway server, with several caching layers in the middle.
Basically, the upshot would be that connecting to dev3 would feel more like connecting to a website hosted in Nairobi than one hosted in the US. If you’re talking about a custom backend that isn’t behind the same setup, though, the speed-up you’d be likely to see is much more marginal.
Indeed, we are affected by being far from the Indiana data centers. Thanks for the good explanation, @ibacher.
I am wondering if there is a way of keeping Indiana as the dev3 host while adding Amazon CloudFront in the middle.
I don’t know if CloudFront allows serving something that is not hosted on AWS. If that is not possible, is the approach to set up an Amazon machine running OpenMRS 3.x, or is there a way of using some sort of DNS provider that uses CloudFront as a middleware/gateway to connect to dev3?
I am new to cloud services and don’t actually have an AWS account, but I’d like to know how this would work.
@ibacher, I could set up a reverse proxy on CloudFront for dev3.openmrs.org. It’s a quick process. Just create me a user on the AWS account where we have credits to spend, and I can set it all up with Terraform.
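For context, CloudFront can front a non-AWS server directly as a “custom origin,” which answers the earlier question about whether the content has to be hosted on AWS (it doesn’t). A minimal Terraform sketch of that shape might look like the following; the resource name and the specific cache/forwarding settings here are illustrative assumptions, not our actual configuration:

```hcl
# Illustrative only: a CloudFront distribution proxying dev3.openmrs.org
# (hosted on Jetstream, i.e. outside AWS) as a custom origin.
resource "aws_cloudfront_distribution" "dev3_proxy" {
  enabled = true

  origin {
    domain_name = "dev3.openmrs.org"
    origin_id   = "dev3-jetstream"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "dev3-jetstream"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = true
      headers      = ["*"] # REST/FHIR calls need auth headers passed through
      cookies {
        forward = "all" # session cookies must reach the origin
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```

One caveat worth noting: forwarding all headers and cookies effectively disables edge caching for those requests, so in practice you’d probably want separate cache behaviors, e.g. caching the static frontend assets aggressively while passing API calls straight through to the origin.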
So… this year, AWS has asked us to create a whole new account in order to use our credits. Right now, that means migrating everything we have on our current account over to a new account. Alternatively, we can create a new account and only use it (and our credits) for new services. Once our credits run out (and unless we receive more), we’d have to make a plan for covering the monthly fees if we wanted to continue those services.
So, actually this CloudFront use-case would be an excellent opportunity to create a second account as they’ve requested, without either breaking our existing usage (which is basically our infrastructure backup system, which is why I’m hesitant to touch it) or necessarily tying us too deeply into AWS, since we can always drop CloudFront and serve things directly from Jetstream if the cost is excessive. Honestly, for this one server, we’re very unlikely to exceed the free tier anyway.
The biggest issue is that our existing account is already tied to our infrastructure user’s email.