New demo server is ready for tests!

The new demo server is finally here to be tested!

You can also run in order to reset it to its original state.

A few missing tasks:

  • Deploy dockerhub image automatically (and to openmrs account)
  • Improve credentials handling
  • Create autoredirect to /openmrs when accessing it without /
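For the autoredirect item, a minimal nginx rule could look something like this (a sketch only; it assumes an nginx instance already fronting the container, which may not match the exact setup here):

```nginx
# Redirect the bare site root to the webapp context path
location = / {
    return 302 /openmrs/;
}
```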

This is a dockerised deployment of OpenMRS. The Docker image came from the OpenMRS SDK, and the docker-compose files come from (don’t worry, creds will be overridden pretty soon).

Please test! Make sure everything is complete!

Thanks @cintiadr!

One problem: concept searching wasn’t working until I went and rebuilt the lucene index via (I’m sure there’s a way to do this automatically).

I noticed some strange things that are probably not unique to this server:

  1. Appointment scheduling shows a blank screen at this crazy url when I try to schedule an appointment for my patient.
  2. Allergies have some new categories I don’t remember from before (animal, plant pollen, other) and when you choose them there are no suboptions:

I will mention these on another thread in the dev category.

@cintiadr, that’s great!

The quick fix to get the search index rebuilt at startup is removing the search.indexVersion global property from the demo SQL (by removing its insert, or by adding a delete statement at the end of the dump).
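The delete-statement option could be sketched like this (assuming the dump file is named demo.sql, which is a placeholder):

```shell
# Append a DELETE at the end of the dump so that, after import,
# search.indexVersion is unset and OpenMRS rebuilds the search index on startup.
cat >> demo.sql <<'SQL'
-- Force the search index to be rebuilt on next startup
DELETE FROM global_property WHERE property = 'search.indexVersion';
SQL
```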

The proper fix is to have it done automatically by SDK. Created SDK-201.

Like this?

Exactly like that!

@cintiadr, if this demo is dockerized, how about using nginx-proxy-letsencrypt so the demo can demonstrate best practice of running OpenMRS using TLS?

$ git clone
$ cd nginx-proxy-letsencrypt
$ docker network create web
$ docker-compose up -d

From then on, any docker container you start on the host with the appropriate environment variables (VIRTUAL_HOST, LETSENCRYPT_HOST, and LETSENCRYPT_EMAIL) and connected to the specified network (--net web) will automatically be proxied, with HTTP redirected to HTTPS and LetsEncrypt TLS certificates handled for you (a convenience script, add, is included that demonstrates how to add a container to be proxied).
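As a compose-file sketch (the image name and domains below are hypothetical, not the actual demo values):

```yaml
version: "2"
services:
  openmrs:
    image: openmrs/demo            # hypothetical image name
    environment:
      VIRTUAL_HOST: demo.example.org        # assumed domain
      LETSENCRYPT_HOST: demo.example.org
      LETSENCRYPT_EMAIL: infrastructure@example.org
    networks:
      - web
networks:
  web:
    external: true   # the "web" network created for the proxy
```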

Could we add an hourly cron job to run something like this?

$ cd /path/to/demo && docker-compose restart openmrs-referenceapplication-mysql

Would that be enough to reset the demo’s database hourly?
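As a crontab sketch (the path is the same placeholder as above; whether restarting the mysql container actually resets the data depends on how its volumes are set up):

```cron
# Restart the demo database container at the top of every hour
0 * * * * cd /path/to/demo && docker-compose restart openmrs-referenceapplication-mysql
```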

Also, in the past (ITSM-3917 → help desk #9245), we added this SQL to the demo database to prevent people from accidentally (or nefariously) changing the admin’s username or password:

-- users_update trigger body (fires on UPDATE of users):
IF OLD.user_id = 1 THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'admin user account is locked';
END IF;
-- users_delete trigger body (fires on DELETE from users):
IF OLD.user_id = 1 THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'admin user account is locked';
END IF;

The triggers can be removed when/if needed with:

drop trigger users_update; 
drop trigger users_delete; 

As I mentioned before, I don’t want to touch TLS now; it’s lower in my priorities. My focus is to actually get a process to deploy docker apps automatically and easily to our infra, without manual installation of any kind.

As the docker hosts will (quite possibly) be shared by several different docker applications, I’m not sure having a different nginx proxy for each of them is the best way forward. Also, I’d like to make it as easy as possible for new docker applications to be deployed in our infra.

That’s why I initially deployed nginx from ansible, as part of the base OS, a single one per docker host. There are several different letsencrypt ansible roles too. When we start adding more docker container apps, it will become clearer whether that’s the best way or not.

I created a build to redeploy, it’s still manual for now.

Hm, ok! I will replicate the same thing. Thanks!

These are the next steps:

  • Deploy dockerhub image automatically from reference application build
  • Protect credentials
  • Improve deployment dockerhub -> docker host (to avoid a dummy build from Bamboo + SSH)

What would be really nice is to have a best-practice example of a docker-compose that combines openmrs + mysql + nginx/apache with SSL in production-suitable way.

We don’t need to dual-purpose the demo server for this, and I wouldn’t ask Cintia to reprioritize to cover this either. @burke, maybe this would be a good project writeup?

The point of nginx-proxy-letsencrypt is that you don’t need to touch TLS or install nginx or cron jobs on the host. You create a “web” network, docker-compose up, and then TLS is taken care of for you for any additional containers added to the network (i.e., all you need to do is set VIRTUAL_HOST & LETSENCRYPT_HOST to the target domain name and LETSENCRYPT_EMAIL to the infrastructure@ address).

There’s only one nginx proxy, and it’s automatically reconfigured to proxy any containers on its network that specify the virtual host environment variables. It will automatically redirect HTTP to HTTPS and install & maintain LetsEncrypt certs for you.

Personally, I don’t think apps should have to worry about TLS. Similar to logging, it’s better to leave implementation details to the system hosting the container. For example, when new nginx settings are recommended to overcome a vulnerability, it’s much easier to update TLS settings in a single proxy than to address it in every app.

That’s fair. I’ve used nginx-proxy-letsencrypt several times now for different projects and got LetsEncrypt working effortlessly, so thought I’d suggest it.

So that would mean we’d need to configure all the docker-compose apps on a given host to use the same network stack (web), right? Having the same network for all of them is undesirable (for security reasons, port collisions). I’m not aware of any way of defining multiple networks for a docker container; am I missing a step?

Another thing that we should do on our demo server is get rid of the nag to add the server to the atlas.

This should be the SQL:

update global_property set property_value = 'true' where property = 'atlas.stopAskingToConfigure'

(I haven’t tested it, e.g. to be sure it’s an update rather than an insert, but the property and value are correct.)
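A variant that handles both cases (a sketch; it assumes property is the table’s primary key, as in standard OpenMRS schemas, and that the uuid column is required):

```sql
-- Insert the property if it's missing, otherwise update it in place
INSERT INTO global_property (property, property_value, uuid)
VALUES ('atlas.stopAskingToConfigure', 'true', uuid())
ON DUPLICATE KEY UPDATE property_value = 'true';
```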

I think I addressed all the posts in here.

There are still some issues with stability (currently being investigated; I hope to have them solved by tomorrow), but I’m thinking about doing the migration this week.

Unless anyone objects, I’m doing the migration of demo to the new server on Thursday morning (UTC time).

No objections from me. 🙂

This should now be done!

Demo server is dockerised; I will write a talk explaining the whole process!

@cintiadr, do you know what is going on here?

So the other application on the same machine had problems too.

There was a problem with disk space, but I’m not sure that was the cause of the problems. It could be some other resource (memory for example) or a bug in docker/kernel.

I’m going to continue investigating what is happening there and make sure it’s stable.