The quick fix to get the search index rebuilt at startup is to remove the search.indexVersion global property from the demo SQL (either by removing its insert or by adding a delete statement at the end of the dump).
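For example, appending something like this to the end of the dump should do it (assuming the standard OpenMRS global_property table):

-- remove the stored index version so OpenMRS rebuilds the search index at startup
DELETE FROM global_property WHERE property = 'search.indexVersion';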
The proper fix is to have the SDK do this automatically. I’ve created SDK-201 for that.
@cintiadr, if this demo is dockerized, how about using nginx-proxy-letsencrypt so the demo can demonstrate best practice of running OpenMRS using TLS?
$ git clone https://github.com/bmamlin/nginx-proxy-letsencrypt
$ cd nginx-proxy-letsencrypt
$ docker network create web
$ docker-compose up -d
From then on, any docker container you start on the host with the appropriate environment variables (VIRTUAL_HOST, LETSENCRYPT_HOST, and LETSENCRYPT_EMAIL) and connected to the specified network (--net web) will automatically be proxied, with HTTP redirected to HTTPS using LetsEncrypt TLS (there’s an “add” convenience script included that demonstrates how to add a container to be proxied).
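For example (the hostname, email address, and image name here are placeholders, not the demo’s actual values):

$ docker run -d \
    --net web \
    -e VIRTUAL_HOST=demo.example.org \
    -e LETSENCRYPT_HOST=demo.example.org \
    -e LETSENCRYPT_EMAIL=infrastructure@example.org \
    my-openmrs-image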
Could we add an hourly cron job to run something like this?
$ cd /path/to/demo && docker-compose restart openmrs-referenceapplication-mysql
Would that be enough to reset the demo’s database hourly?
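For example, a crontab entry along these lines (the path and service name are carried over from above; the schedule is just a suggestion):

# restart the demo's mysql container at the top of every hour
0 * * * * cd /path/to/demo && docker-compose restart openmrs-referenceapplication-mysql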
Also, in the past (ITSM-3917 → help desk #9245), we added this SQL to the demo database to prevent people from accidentally (or nefariously) changing the admin’s username or password:
DELIMITER ;;
CREATE TRIGGER users_update BEFORE UPDATE ON users FOR EACH ROW
  IF OLD.user_id = 1 THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'admin user account is locked';
  END IF;;
CREATE TRIGGER users_delete BEFORE DELETE ON users FOR EACH ROW
  IF OLD.user_id = 1 THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'admin user account is locked';
  END IF;;
DELIMITER ;
The triggers can be removed if/when needed with:
drop trigger users_update;
drop trigger users_delete;
As I mentioned before, I don’t want to touch TLS now; it’s lower on my priority list. My focus is to get a process to deploy docker apps automatically and easily to our infra, without any manual installation.
As the docker hosts will (quite possibly) be shared by several different docker applications, I’m not sure having a separate nginx proxy for each one of them is the best way forward. Also, I’d like to make it as easy as possible for new docker applications to be deployed on our infra.
That’s why I initially deployed nginx from ansible, as part of the base OS, a single one per docker host. There are several different letsencrypt ansible roles too. When we start adding more dockerized apps, it will become clearer whether that’s the best approach.
What would be really nice is to have a best-practice example of a docker-compose file that combines openmrs + mysql + nginx/apache with SSL in a production-suitable way.
We don’t need to dual-purpose the demo server for this, and I wouldn’t ask Cintia to reprioritize to cover this either. @burke, maybe this would be a good project writeup?
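For what it’s worth, a minimal sketch of the shape such a compose file might take (the image names, credentials, and the mounted nginx.conf/certs are all placeholders and assumptions, not a vetted production config):

version: '2'
services:
  mysql:
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder; use real secrets management
      MYSQL_DATABASE: openmrs
      MYSQL_USER: openmrs
      MYSQL_PASSWORD: changeme        # placeholder
    volumes:
      - mysql-data:/var/lib/mysql
  openmrs:
    image: my-openmrs-image           # placeholder; whichever openmrs/tomcat image is used
    depends_on:
      - mysql
    # database connection settings for the chosen image would go here
  nginx:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro   # TLS settings live here
      - ./certs:/etc/nginx/certs:ro             # certs from LetsEncrypt or elsewhere
    depends_on:
      - openmrs
volumes:
  mysql-data: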
The point of nginx-proxy-letsencrypt is that you don’t need to touch TLS or install nginx or cron jobs on the host. You create a “web” network, docker-compose up, and TLS is then taken care of for you for any additional containers added to the network (i.e., all you need to do is set VIRTUAL_HOST and LETSENCRYPT_HOST to the target domain name and LETSENCRYPT_EMAIL to the infrastructure@ address).
There’s only one nginx proxy, and it’s automatically reconfigured to proxy any containers on its network that specify the virtual host environment variables. It will automatically redirect HTTP to HTTPS and install & maintain LetsEncrypt certs for you.
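In docker-compose terms, joining the proxy’s network looks something like this (a sketch; the service and image names are placeholders):

version: '2'
services:
  myapp:
    image: my-openmrs-image
    environment:
      VIRTUAL_HOST: demo.example.org
      LETSENCRYPT_HOST: demo.example.org
      LETSENCRYPT_EMAIL: infrastructure@example.org
    networks:
      - web
networks:
  web:
    external: true    # the pre-created "web" network the proxy watches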
Personally, I don’t think apps should have to worry about TLS. Similar to logging, it’s better to leave implementation details to the system hosting the container. For example, when new nginx settings are recommended to overcome a vulnerability, it’s much easier to update TLS settings in a single proxy than to address it in every app.
That’s fair. I’ve used nginx-proxy-letsencrypt several times now for different projects and got LetsEncrypt working effortlessly, so I thought I’d suggest it.
So that would mean we’d need to configure all the docker-compose apps on a given host to use the same network (web), right?
Having the same network for all of them is undesirable (security reasons, port collisions).
I’m not aware of any way to define multiple networks for a docker container; am I missing a step?
There are still some issues with stability (currently being investigated, which I hope to have solved by tomorrow), but I’m thinking about doing the migration this week.
So the other application on the same machine had problems too.
There was a problem with disk space, but I’m not sure that was the cause of the problems. It could be some other resource (memory, for example) or a bug in docker or the kernel.
I’m going to continue investigating what is happening there and make sure it’s stable.