GSOC 2019 : OpenMRS Atlas 3.1


Markers can have optional contact information (name & email). We only want/need one point of contact per marker and we shouldn’t presume that this will be the email address of the creator or even a particular individual’s email address (e.g., an organization might want to put their company’s contact email here). So, don’t worry about displaying email addresses in markers beyond the optional contact email we already have. Also, don’t assume this is the address to send notifications for the creator. Instead, we want to send to the creator’s primary email on file within LDAP (which may differ from the marker’s contact email).

Ideally, notifications would go to the primary address on file with OpenMRS ID. While we could cache an email address from LDAP, we might send to the wrong address if someone has updated their email with OpenMRS ID. I understand & appreciate the goal of avoiding lots of LDAP calls… but perhaps we could accomplish this by batching the LDAP calls – i.e., instead of a separate call for every user, ask LDAP (in one request) for the email addresses of all users we need to notify. These events (a marker fading or changed) are going to be relatively rare (likely averaging less than one event per day for the entire Atlas).

One option would be, as you describe, to create temporary authentication tokens that would update the marker regardless of who made the request. Since we can assume a user is activating this link in a browser, we could avoid the need for a temporary token and, instead, use a URL that forces authentication. In other words, imagine a URL like /?update={id}

  • If the user is already authenticated to the Atlas server and an owner for the marker, the Atlas page would load with the marker’s bubble open and the last updated date already updated.
  • If the user wasn’t authenticated to the Atlas server, they would be redirected to /login?redirect=%3Fupdate%3D{id}, which would prompt them to login and then redirect them to /?update={id}.
  • If the user was already authenticated but not an owner of the marker, then the Atlas would load with the given marker’s bubble displayed and a popup alert saying “You are not authorized to update this marker.”
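
The redirect URL in the second case is just the original query string run through standard URL encoding (a sketch; `buildLoginRedirect` is a hypothetical helper, not existing Atlas code):

```javascript
// Sketch: build the login URL that sends an unauthenticated user through
// /login and back to the marker-update view afterwards.
function buildLoginRedirect(markerId) {
  // encodeURIComponent turns "?update={id}" into "%3Fupdate%3D{id}" so it
  // survives as a single value of the redirect parameter
  return '/login?redirect=' + encodeURIComponent('?update=' + markerId);
}
```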

On a related note, ALL emails sent to users should contain an unsubscribe link at the bottom that, if clicked, will prevent future emails. So, for ATLAS-168, we’ll probably want to support /?email=subscribe and /?email=unsubscribe URLs that behave similar to what I’ve described for updating but, in this case, control whether or not the user gets notifications. Perhaps an email_unsubscribe table with a list of OpenMRS IDs would suffice (i.e., we never send notifications to someone if their ID is in this list and these URLs would add/remove their OpenMRS ID from that table).
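
A minimal sketch of what that opt-out table might look like (table and column names are assumptions, not a final schema):

```sql
-- Hypothetical sketch of the unsubscribe table described above.
-- A user receives notifications only if their OpenMRS ID is NOT in here.
CREATE TABLE IF NOT EXISTS email_unsubscribe (
  openmrs_id   VARCHAR(100) NOT NULL PRIMARY KEY,
  date_created DATETIME     NOT NULL DEFAULT CURRENT_TIMESTAMP
);
```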

1 Like

I just realized that production Atlas doesn’t have any ‘versions’, making it impossible to edit an existing marker (because of the NOT NULL constraint). How do we solve this? I think a null check when saving to database should fix it. We could also simply add a few OpenMRS versions using the admin UI.

I went ahead and added versions 1.8 - 2.8 so people have choices; however, version should be optional – i.e., default to blank value as first item in selection and allow someone to “delete” their version info by switching to the blank value.

FYI – while looking at this I found a little bug and created ATLAS-185 and ATLAS-186 for your entertainment. :slight_smile:

1 Like


FYI – while testing out your PR for ATLAS-166, I ran into a few small issues. First, bcrypt has some requirements that aren’t included in the alpine container for node we’ve been using. I gave you a suggested replacement for Dockerfile and will let you add that to the PR.

In the process of debugging, I found a few small issues, and GitHub issued a severe security alert for lodash, so I did some direct commits for those. Make sure to pull changes to your local copy!

1 Like

@cintiadr, I deployed recent changes to atlas-stg that required a db change (running rss_table.sql). I didn’t see where the db container would have access to the sql files in our code’s db folder… so, to apply the sql, I copied the content into my clipboard and then did something like this:

$ sudo docker exec -it atlasstg_db_1 bash
# cd /opt
# cat <<'EOF' > foo.sql
(pasted contents of sql file)
EOF
# mysql -u root -p atlasdb
> source foo.sql
> exit
# rm foo.sql

It worked. But I’m guessing you might have a better/simpler approach to applying db changes? Just want to learn, so I can help do these.

@heliostrike, I’m holding off on #90 (db cleanup) until we’ve merged the other PRs, since several other PRs are making db changes. In the meantime, I would suggest taking the approach of putting any db changes into a separate file (like you’ve been doing) but use a prepared statement to apply the changes only if they haven’t already been applied. For example:

-- Convert atlas.image to blob (ATLAS-178)
SET @dbname = DATABASE();
SET @tablename = "atlas";
SET @columnname = "image";
SET @columntype = "blob";
SET @preparedStatement = (SELECT IF(
  (SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
    WHERE (table_name = @tablename)
      AND (table_schema = @dbname)
      AND (column_name = @columnname)
      AND (data_type <> @columntype)
  ) = 0,
  "SELECT 1",
  CONCAT("ALTER TABLE ", @tablename, " CHANGE ", @columnname, " ", @columnname, " ", @columntype, ";")
));
PREPARE alterColumnType FROM @preparedStatement;
EXECUTE alterColumnType;
DEALLOCATE PREPARE alterColumnType;

Then, at a moment when we don’t have other PRs touching the database, you could refactor #90 to combine all of these changes (and any prior ones since 3.1 was put into production) one after the other into a single upgrade.sql. This way, running upgrade.sql would ensure the db is properly updated. As new PRs need to change the db, you would append to upgrade.sql and, when executed, all the previously made changes to the db would be silently skipped.

1 Like

@burke this is how I do it.

I create the file on the host, using either scp or pasting into vi.

docker cp <file>.sql atlasstg_db_1:/tmp/
docker exec -it atlasstg_db_1 bash
mysql -u ${MYSQL_USER} -p${MYSQL_PASSWORD} ${MYSQL_DATABASE} < /tmp/<file>.sql 

Just like that; the environment variables have everything you need.

Same theme, if you want the mysql cmd line tool:


@heliostrike, Atlas is really coming together nicely. Let’s turn to #90 and consolidate the db scripts… but first, there are a few issues I discovered in testing your last PR that we should address. I don’t have time to create tickets for them, so I’ll let you do that…

  • When saving a new marker, I see an undefined in the server’s console log just before the POST /marker 200 entry. I’m not sure what it’s coming from, but it looks like a debug statement that we missed. It might be related to the other issue I found.
  • When I save a new marker (without an image… just add marker and save), reload the page, and then show its bubble, it looks like it’s trying to show an image that doesn’t exist. Looking in the database, it looks like a new marker without an image is storing a blank value instead of null into the database and then this is being treated as “non-null” and the client tries to load an image for the marker. Look at how we store & retrieve markers when there’s no image. I’m guessing we are misinterpreting the missing image and not storing null as we intended. It wouldn’t hurt to make the server handle a blank/invalid image entry more robustly (e.g., return null instead of an invalid link or a “valid” link to an invalid image entry).
  • I also saw the message (node:46) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead. in the server console logs. I’m assuming this comes from this line and you just need to refactor it not to use the deprecated constructor.
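
The fix for that warning is mechanical: `new Buffer(string)` becomes `Buffer.from(string)` and `new Buffer(size)` becomes `Buffer.alloc(size)`. A sketch (`imageToBase64` is a hypothetical example, not the actual Atlas code):

```javascript
// Sketch: encode raw image bytes as base64 for the client.
// Buffer.from() replaces the deprecated new Buffer(...) constructor.
function imageToBase64(imageBytes) {
  return Buffer.from(imageBytes).toString('base64');
}
```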

If any/all of these are quick, obvious fixes you feel don’t need code review, then feel free to commit them directly to master (without a PR). Straightforward (e.g., one liner) fixes or typo corrections don’t need to go through a code review once you’re this familiar with the repository.

If you can find & destroy those issues, then turn to #90. As I mentioned earlier, once we move to just having schema, upgrade, and demo-data scripts, we can deploy to staging, verify everything seems to be working okay, and then – assuming so – deploy to production! With a little final testing in production, we’ll be ready to announce a new life for Atlas!

Great work! :slight_smile:


I loaded latest changes onto atlas-stg. The server was crashing because of legacy values in atlas.image that were URLs. I cleared these out with update atlas set image=null, which updated the date_changed for all sites… so nearly all sites on atlas-stg were marked as updated. We’ll need to be careful not to do this in production.

This line crashes with TypeError: Cannot read property 'length' of null if the image column contains something unexpected (like a URL).
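
A defensive guard along these lines would avoid the crash (a sketch; `hasImage` is a hypothetical helper, and I’m assuming the image column arrives as a Buffer or null from the mysql driver, with legacy URL values showing up as strings):

```javascript
// Sketch: treat null, empty, or unexpected string values (e.g. legacy
// URLs) in the image column as "no image" rather than crashing on
// the .length of null.
function hasImage(image) {
  return image != null && typeof image !== 'string' && image.length > 0;
}
```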

Finally, it looks like req.protocol (like here) doesn’t work as expected, since the Atlas is proxied through nginx. So, requests to the Atlas server (inside the docker container) are using http. As a result, image links are being created with the http protocol instead of https, which is causing warnings within Chrome. We may have to just assume (i.e., hardcode) the https:// protocol when creating links.
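
An alternative to hardcoding: Express can be told to trust the proxy (`app.set('trust proxy', true)`), which makes req.protocol honor the X-Forwarded-Proto header nginx sends. The underlying idea, as a standalone sketch (`effectiveProtocol` is a hypothetical helper):

```javascript
// Sketch: prefer the protocol the reverse proxy forwarded over what the
// container sees directly, falling back to https as a safe default.
function effectiveProtocol(req) {
  return (req.headers && req.headers['x-forwarded-proto'])
    || req.protocol
    || 'https';
}
```

This assumes nginx is configured to set X-Forwarded-Proto, which would need to be verified in our proxy config.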

I think I fixed all the issues. :slight_smile:

I experienced this previously too, I couldn’t find this bug earlier. Before you merged the PR, I made a few tiny changes, including removing a few unnecessary console.logs. I think it may have been one of them. Let me know if you see it again. :slight_smile:

Turns out MySQL doesn’t store ‘undefined’ as null! That was causing the issue!

I could spot one more bug. When updating a marker, without updating the image, the image is overwritten. I remember taking care of this bug earlier, but I think I might’ve messed something up while resolving merge conflicts. Here’s the PR for it.


FYI – I found and fixed a small typo we had conveniently made twice (which kept it from becoming a bug).

Speaking of which, I think it’s worth cleaning up the confusion around atlas_version. I created ATLAS-190 for this and I think you can knock it out pretty easily. atlas_version is too confusing. I know it has confused me and, given atlasversions.sql and this code, I’m guessing it has confused you too. In fact, atlas_version was a poorly named attribute that was actually created to store the version number of the atlas module pinging the server, so the server would know how to interpret the contents of the data sent by the module. Looking at it now, we don’t really need to know the module version… rather, we just need to know the version of the data payload’s structure. If we never change the format of the data sent by the module, then we could have 17 versions of the module that all use version “1”. It’s only if a future version of the module decides to send some more data or restructure the data that we’d need to switch to version “2” and update the server code to look out for it. Anyway, the ticket describes killing off “atlas_version” and renaming the attribute in the atlas table to data_version to be a bit clearer. The version of OpenMRS used by sites is stored in the versions table, which is something completely different.

One last thing I’d like you to look at before we move to production: loading the markers is pretty slow for me on atlas-stg, taking up to 10 seconds before the markers are rendered. I’m not convinced Google Maps is slow, since I can – for example – render 1000 markers on this example site in less than a second. Is it a slow database query we could improve with indexing? Do we have some slow javascript? Or is it a little of both? If you could do some profiling/testing, I’m hopeful we could take the Atlas load time down from 10 seconds to less than a second.

1 Like

@burke Here’s the PR for ATLAS-190. Do we need some sql to move data from the ‘atlas_versions’ table to the ‘data_versions’ table? I’m having a bit of trouble accessing the atlas-stg mysql database, so I’m not sure if the table has any entries. We never had to use it anywhere, so I’m assuming it’s empty and doesn’t need any update sql.

I think one huge factor for that is this slow js I wrote. I was still making changes to the PR, so I didn’t bother to write a faster implementation, and forgot about it until now. :confused:

function loadSites(json, authrules) {
    var bounds = new google.maps.LatLngBounds();
    for (var i = 0; i < json.length; i++) { 
        var site = json[i];
        // O(n^2) time complexity!!!
        site.auth = authrules.filter(function(rule) {
            // (marker id field assumed to be; the quoted line was truncated)
            return rule.atlas_id === && rule.privileges === 'UPDATE';
        });
        // ...
I rolled stg back to before ATLAS-166 got merged, and there’s definitely improvement, but it’s still a little slower than prod atlas. I’ll investigate the issue further and try to make a PR today. :slight_smile:

Maybe we could compute current user privileges in SQL to avoid having to loop through markers.

We might be able to defer some things – e.g., only check when a marker is clicked, since markers are only opened one at a time.

If we do need to loop, maybe we could make it run asynchronously, so it doesn’t block the UI.

1 Like

I caught a few culprits, and made a PR for ATLAS-191! The changes are deployed on atlas-stg, so please have a look :slight_smile:

I put a bunch of console logs in the code to find out how fast certain lines are running. Here’s the paste of a few runs.

We don’t need a data_versions table. data_version is just an attribute of a row in the atlas table used to indicate which version (schema) of data is stored in the corresponding data attribute. As of now, data is a list of module details and the format has never changed, so data_version is “1” for everything. In truth, we could just drop the atlas.atlas_version attribute and the atlas_versions table and reintroduce an atlas.data_version attribute if/when we ever need to change the format of data being sent from the module. That’d probably be simpler.

I remember us wanting an atlas_versions table to check whether a version was valid or not. I wrote sql to rename the atlas column ‘atlas_version’ to ‘data_version’ in ATLAS-190. I just renamed the ‘atlas_versions’ table to ‘data_versions’ since it wasn’t being used at all in atlas-stg and doesn’t need updates (I was able to get in, thanks to Cintia! :smiley: ).


We track the version of OpenMRS for a site in atlas.openmrs_version. The poorly-named atlas.atlas_version attribute was created to record the atlas module version whenever a module pinged the server. The intent was to make sure the server knew how to interpret the “blob of json data” sent as data by the module (and stored in, which might change in future versions of the module. Looking back, even that was a poor design decision, since the format of this data doesn’t change with each new release of the module. Tracking versions of the module on the server just adds maintenance overhead and no benefit. If/when we ever changed the format of this data payload sent by the module, we’d want to version it, and that version (i.e., the schema version of the data payload… not the module version) is what we’d need to store alongside While changing the name from atlas_version to data_version is better, we would still be storing a module version and not a version of the data format. Since the data version in all instances would be “1” (since we have not changed the module’s data format) and it’s for an imagined need (i.e., that we need to change the format of the data payload in a way that breaks some code interpreting the module data on the server), I think we should eliminate data_version (aka atlas_version) completely. We can reintroduce it if/when it’s ever needed in the future. I really appreciated your work on ATLAS-190 and apologize for switching directions on you after you’ve done the work… but I’ve revised the ticket to remove atlas_version altogether.

Another discovery I’ve made is the db folder is mounted as docker-entrypoint-initdb.d in our docker-compose.yml. This means, when building the database container, the scripts will be executed in alphabetic order. So, scripts in the db folder should be prefixed with a number (e.g., “01_”, “02_”, etc.) in the order in which they should be executed. I’ve updated ATLAS-184 accordingly.
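
For example, a layout like this (file names are hypothetical, mirroring the schema/upgrade/demo-data split mentioned upthread) would make the execution order explicit:

```
db/
├── 01_schema.sql
├── 02_upgrade.sql
└── 03_demo-data.sql
```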

BTW… Atlas on atlas-stg is loading much faster now. Thanks!

1 Like

I’m kind of glad we’re getting rid of atlas_version as its meaning has been ambiguous and confusing, and it hasn’t done much beyond existing in the code. Removing it must be fairly easy. I’ll update the PR soon. :slight_smile:

Also, updated the ATLAS-184 PR. :slight_smile:

Updated the PR for ATLAS-190.

@burke @cintiadr

I’m resuming my work on ATLAS-168. This is what I’m planning on.

We create 2 new tables with schemas described here. The notifications table maps an auth rule to the last time it was notified. We run a scheduled task periodically (what do you think the period should be?) to iterate over the auth rules and, if the marker in an auth rule hasn’t been updated/notified in a while, we send them an email containing update, delete, and unsubscribe links embedded with tokens (what should we use to generate these tokens?). These tokens are managed using the ‘autotokens’ table. Here, tokens are mapped to the respective marker and action. Old tokens are cleared by a scheduler. We need to write an API to handle these links and perform the appropriate action. What do you think the API should look like?

I made an untested PR of all the changes until now. The API isn’t built yet, but I think most of the remaining work (besides testing…) is done. What do you think of this idea? Let me know of any changes you’d like (especially the names… ‘auto tokens’… there has to be a technical word for such tokens). :slight_smile:

I’ll start testing this feature on staging once the table schemas are approved. Don’t worry, I’ll make sure to only send mails to my email. :slight_smile:

For notifications, we want to target atlas.creator (implied “auth”) and any co-owners in the auth table, filtering out any usernames existing in an unsubscribe table.

Daily or even weekly should be fine; I’d go with daily. And we aren’t iterating over auth rules, which can include modules and doesn’t include creators; rather, we’re iterating over all markers that are changing state (at 12, 18, or 24 months since last updated). The notifications table can be a log of notifications sent, so… for each marker at a milestone, we compute recipients as creator + co-owners - unsubscribed - already notified, send out notifications, and log (add) them to the notifications table.
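
The recipient computation above can be sketched with sets (a hypothetical helper; plain usernames stand in for OpenMRS IDs here):

```javascript
// Sketch of "creator + co-owners - unsubscribed - already notified".
// Parameter names are assumptions based on the discussion, not a spec.
function computeRecipients(creator, coOwners, unsubscribed, alreadyNotified) {
  const skip = new Set([...unsubscribed, ...alreadyNotified]);
  const candidates = new Set([creator, ...coOwners]); // dedupes creator/co-owner overlap
  return [...candidates].filter(function (user) { return !skip.has(user); });
}
```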

See my earlier comments on ATLAS-168. We don’t need tokens. Make sure /login supports a redirect parameter and you can force authentication at URLs like /unsubscribe?id={id}. No token needed; users login (if needed) instead. If we can control the duration of an auth cookie for atlas, it should be at least 90 days so people don’t have to login every time they visit the Atlas.

When you think of a REST API, you are dealing with resources, so try to think of nouns instead of verbs (methods). Maybe API endpoints like /subscriptions and /subscriptions/{id}. The web URLs in emails would be something like /?subscribe={id}, /?unsubscribe={id}, and /?delete={id}.

1 Like

@heliostrike, latest changes deployed to atlas-stg. Try exercising it. Seems pretty good to me. I did commit a small change to allow the link protocol to be driven by an environment variable (more helpful for local development than for staging or production), so be sure to pull those changes to your local copy.

If atlas-stg looks good to you, then we can ask others in the community to play with it and report any issues. If we don’t get significant bug reports, we can promote this to production and make a big announcement that Atlas is back! (with the caveat that the Atlas module will need some changes before it catches up)

Yay!!! Good work!

1 Like

@burke Everything seems to be working properly, except the Download screenshot button. It works perfectly locally, but on atlas-stg, it generates an image without any markers. :confused: I’ll investigate the issue.

Let’s plan to get the Atlas module working in the next 2-3 weeks, and focus on quality-of-life changes after that. There are so many places where the user could be better notified of internal errors!