Sync 2.0 Showcase

Thank you very much for your feedback.

  1. One solution that came to mind is to use the identifier prefix for that identifier type (“Manage Patient Identifier Sources” → “Configure OpenMRS ID” → prefix).

  2. We hadn’t noticed that issue before; please create a bug for it in the project’s Jira space.

If you have any idea how to fix those issues, feel free to make changes.

I’m adding to this thread as it seems to attract the right audience.

The server hosting the sync servers is running out of disk space for Docker. You seem to have two containers that are each almost 10 GB in size (sync3refapp_openmrs-referenceapplication_1 and sync1refapp_openmrs-referenceapplication_1).

There’s nothing I can actually delete, so maybe you want to destroy/redeploy those containers before they eat all the disk :smiley:
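For anyone else debugging this, Docker’s own tooling can confirm where the space is going before destroying anything. A minimal sketch run from the host (container names are the ones mentioned in this thread; the commands assume a standard Docker CLI):

```shell
# Summarise Docker's overall disk usage: images, containers, volumes, build cache
docker system df

# Per-container sizes; the "virtual" part is the shared image, while the
# first figure is the writable layer that grows when a container writes
# outside its volumes
docker ps --all --size

# Narrow down to one of the suspect containers
docker ps --size --filter "name=sync1refapp_openmrs-referenceapplication_1"
```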

cc @pkornowski


Thank you very much for the information. I can take care of it.

However, 10 GB each is a huge size for those containers (they aren’t databases). Could you first try to identify what exactly is consuming that disk space? Or could I get SSH access to those servers?

cc: @kmadej

There’s nothing inside those containers that is inherently big, so I think you might have a file or folder you are constantly writing to that isn’t a Docker volume. Maybe logs? It’s a little tricky for me to tell which folders are constantly changing (inside the container itself, not in a volume).
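For this exact question, `docker diff` is the right tool: it lists what has changed in a container’s writable layer, with volumes excluded. A sketch, assuming a standard Docker CLI; the Tomcat log path is an assumption based on the default image layout:

```shell
# List files added (A), changed (C), or deleted (D) in the container's
# writable layer since it started -- volumes are excluded, so anything
# large here is what bloats the container itself
docker diff sync1refapp_openmrs-referenceapplication_1

# If the culprit looks like logs, check their size inside the container
# (assumed path: the default Tomcat log directory in this image)
docker exec sync1refapp_openmrs-referenceapplication_1 \
  du -sh /usr/local/tomcat/logs
```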


Okay, thank you for investigating. I will try to use the OpenMRS Bamboo to destroy and redeploy all the containers.

If it’s not a problem, please let us know if the situation happens again.

I redeployed the mentioned Sync 2.0 containers. Everything should be fine now; please let me know if it isn’t.

I also updated the server owner information on that page:


Thank you so much, @alalo. If I see a problem there, I will try to find out why, but our disk alerts have been resolved.

But now we have CPU alarms :smiley:

Both containers are eating a surprising amount of CPU, and it’s the Java process in both cases:

This seems wrong to me. The system still seems to have enough free memory that I don’t think it’s garbage collection. Would you have any idea what the JVM/OpenMRS app is doing that uses so much CPU?


Actually, scratch that.

java org.apache.catalina.startup.Bootstrap start -Djava.util.logging.config.file=/usr/local/tomcat/conf/ -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dfile.encoding=UTF-8 -server -Xms256m -Xmx768m -XX:PermSize=256m -XX:MaxPermSize=512m -Djdk.tls.ephemeralDHKeySize=2048 -agentlib:jdwp=transport -DOPENMRS_INSTALLATION_SCRIPT=/usr/local/tomcat/ -DOPENMRS_APPLICATION_DATA_DIRECTORY=/usr/local/tomcat/.OpenMRS -Dignore.endorsed.dirs= -classpath /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar -Dcatalina.base=/usr/local/tomcat -Dcatalina.home=/usr/local/tomcat

This is the command. I think you might be hitting your JVM memory limit, and you need to increase the memory configuration a little (in theory, you still have 4 GB free on the machine in total).
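For reference, a hedged sketch of how the heap could be raised. The exact mechanism is an assumption here, since it depends on how this image injects its JVM flags (commonly `CATALINA_OPTS` or `JAVA_OPTS` in Tomcat-based images):

```shell
# Assumption: this image reads CATALINA_OPTS, as many Tomcat images do.
# The command above caps the heap at -Xmx768m; with ~4 GB reported free
# on the host, something like this would give the JVM more headroom:
export CATALINA_OPTS="-Xms512m -Xmx1536m -XX:PermSize=256m -XX:MaxPermSize=512m"
```

The PermGen flags are kept as-is from the running command; only the heap bounds change in this sketch.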

Now it has become quiet again. I suppose the JVM managed to settle down.

Thanks for the update.

I should have more time to take care of this at the end of the week.

We ran out of disk again :smiley:

So I checked, and what’s happening is that the Docker process is generating a lot of logs. We’re talking about more than 8 GB of logs per container.

A little excessive, if you ask me :smiley: There are a lot of exceptions. I’d recommend you redeploy your containers and make sure they are not so verbose.
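For what it’s worth, this kind of runaway growth can also be capped on the Docker side, independently of the application’s log level. A sketch, assuming the default `json-file` logging driver:

```shell
# Locate the json-file log Docker keeps for a container's stdout/stderr
docker inspect --format '{{.LogPath}}' \
  sync1refapp_openmrs-referenceapplication_1

# Host-wide rotation for the json-file driver: add log-opts to
# /etc/docker/daemon.json and restart the daemon. Note this only applies
# to containers created after the change.
# {
#   "log-driver": "json-file",
#   "log-opts": { "max-size": "100m", "max-file": "3" }
# }
```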

Thank you for the update. I redeployed the containers again and decreased the log level. I hope this helps, but if you notice the issue again, please let me know.

Thanks in advance :slight_smile:

@alalo Just checking whether there has been any work on the above-mentioned tickets.

I have placed two additional tickets based on this discussion.


Has anyone previously done a successful manual push on a child node? It keeps throwing a “504 Gateway Time-out”; I’m thinking it has some issues and needs to be redeployed. @irenyak1 and I have experienced the same error while trying to reproduce different use cases.

cc. @ssmusoke @dkayiwa @cintiadr


True that, @tendomart.

Has any of you tried to set this up locally? The OpenMRS Standalone version would be very useful here.

I would also recommend getting some background about Sync 2.0 and its configuration by reading these resources:

@odorajonathan can you add the above resources to the sprint wiki page?


@dkayiwa let me try setting up with the standalone. Thanks for the resources.

Can you please just redeploy sync again from CI? If you run a customised build, you can force the container/OpenMRS to restart with a variable.

I have alarms for really high CPU usage on that machine.

You also have a build to retrieve all logs.

Hi @ssmusoke, sorry for the delay. We didn’t start working on that yet.