Getting the back-end to connect to a SQLite database

Yes

Yes… Nothing in the /openmrs/data directory on the Docker image should be added to the repo.

@ibacher So I just found out that the UUID extension is not built into the native library that the JDBC driver I selected uses. I was also informed that I can probably compile the extension myself and load it dynamically. Should I go with that, or is it a must to have the UUID extension bundled into the driver? Should I start looking for a different JDBC driver?

You might be able to get away with not having the UUID extension available; my point 4 was based on the experience of adding support for Postgres, where we created a UUID() function mapped to one of Postgres’s UUID generator functions (see the sketch after the list below). Personally, I’d probably try:

  1. See how far things get without the UUID function; if everything works, it’s fine
  2. See how hard it is to get that library to load a custom-compiled SQLite DLL… It looks like there’s some support for that.
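
For reference, the Postgres approach was roughly along these lines; this is a sketch from memory rather than the exact changeset, and the id and author here are placeholders:

<changeSet id="create-uuid-function-postgres" author="example" dbms="postgresql">
  <comment>Map a UUID() function onto one of Postgres's UUID generators</comment>
  <sql>
    CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
    CREATE OR REPLACE FUNCTION UUID() RETURNS UUID AS 'SELECT uuid_generate_v4()' LANGUAGE SQL;
  </sql>
</changeSet>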

I don’t actually see another modern SQLite JDBC driver available.

Thank you @ibacher; I myself have been unable to find another modern SQLite JDBC driver. For now I’ll just leave it as is and see how far it goes.

So for now I’m going to start working on the Liquibase changesets. What exactly am I supposed to look for? The one example you gave basically maps which function to call when there’s a need to generate a UUID; my guess is that’s not the only thing I need to look for. Do I just search for any Liquibase XML files that have dbms or modifySql nodes?

I think I’d probably look through the changelogs for either dbms tags or dbms attributes, which likely covers most of the cases. You’d also want to check for sql tags, since those represent raw SQL.
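
Something like this should surface most of them; the path here is just a guess at where the Liquibase XML files live in your checkout, so adjust as needed:

# Rough search for changesets that may need database-specific handling
grep -rnE '<dbms|dbms="|<sql[ >]|<modifySql' api/src/main/resources --include='liquibase*.xml'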

@ibacher So when I see code like this:

<preConditions onFail="MARK_RAN">
  <or>
    <dbms type="mysql" />
    <dbms type="oracle" />
    <dbms type="mssql" />
    <dbms type="postgresql" />
  </or>
</preConditions>

My assumption is to just add another line like this:

<preConditions onFail="MARK_RAN">
  <or>
    <dbms type="mysql" />
    <dbms type="oracle" />
    <dbms type="mssql" />
    <dbms type="postgresql" />
    <dbms type="sqlite" />
  </or>
</preConditions>

But I’m assuming I need to define that database type somewhere. My first guess is to set the $OMRS_DB environment variable to sqlite and then make some modifications to the startup-init.sh file:

# Setup database configuration properties
if [[ -z $OMRS_DB || "$OMRS_DB" == "mysql" ]]; then
  OMRS_DB_JDBC_PROTOCOL=${OMRS_DB_JDBC_PROTOCOL:-mysql}
  OMRS_DB_DRIVER_CLASS=${OMRS_DB_DRIVER_CLASS:-com.mysql.jdbc.Driver}
  OMRS_DB_PORT=${OMRS_DB_PORT:-3306}
  OMRS_DB_ARGS="${OMRS_DB_ARGS:-?autoReconnect=true&sessionVariables=default_storage_engine=InnoDB&useUnicode=true&characterEncoding=UTF-8}"
elif [[ "$OMRS_DB" == "postgresql" ]]; then
  OMRS_DB_JDBC_PROTOCOL=${OMRS_DB_JDBC_PROTOCOL:-postgresql}
  OMRS_DB_DRIVER_CLASS=${OMRS_DB_DRIVER_CLASS:-org.postgresql.Driver}
  OMRS_DB_PORT=${OMRS_DB_PORT:-5432}
elif [[ "$OMRS_DB" == "sqlite" ]]; then
  # Set whatever needs to be set for SQLite
else
  echo "Unknown database type $OMRS_DB. Using properties for MySQL"
  OMRS_DB_JDBC_PROTOCOL=${OMRS_DB_JDBC_PROTOCOL:-mysql}
  OMRS_DB_DRIVER_CLASS=${OMRS_DB_DRIVER_CLASS:-com.mysql.jdbc.Driver}
  OMRS_DB_PORT=${OMRS_DB_PORT:-3306}
  OMRS_DB_ARGS="${OMRS_DB_ARGS:-?autoReconnect=true&sessionVariables=default_storage_engine=InnoDB&useUnicode=true&characterEncoding=UTF-8}"
fi
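
If that’s the right direction, I’m assuming the JDBC URL would also need to be overridden for SQLite, since it points at a file rather than a host and port, something like:

# Assumption: SQLite connects to a file, so there is no host/port to wait for
OMRS_DB_URL=${OMRS_DB_URL:-jdbc:sqlite:/openmrs/data/database.sqlite3}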

Does this sound correct?

Probably not. We’re using Liquibase to do the database migrations, so that dbms value is handled by Liquibase itself, outside of OpenMRS. I’d imagine you just need to use the string sqlite, but the Liquibase documentation doesn’t seem to be helpful in this regard.

But, yes, that’s the general idea. For some changes, you may need to have SQLite-specific versions, but something that works on most DBMSs seems likely to work on SQLite without changes.
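
A SQLite-only variant, where one is needed, would just be another changeset restricted with dbms="sqlite"; this is only a sketch, with a placeholder id, author, and SQL body:

<changeSet id="20230101-0001-sqlite" author="example" dbms="sqlite">
  <comment>SQLite-specific version of a change that needs different SQL elsewhere</comment>
  <sql>/* SQLite-compatible SQL goes here */</sql>
</changeSet>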

@ibacher should I update changesets like this one to also run for SQLite implementations?

<changeSet id="20090414-0804" author="bwolfe" dbms="mysql">
  <preConditions onFail="MARK_RAN">
    <foreignKeyConstraintExists foreignKeyName="is_a" />
  </preConditions>
  <comment>Dropping foreign key on concept_set.concept_id table</comment>
  <dropForeignKeyConstraint baseTableName="concept_set" constraintName="is_a" />
</changeSet>

I don’t think that changeset (or the related ones) should be marked with dbms="mysql" at all. @dkayiwa do you have any memory of why this change might be restricted to mysql only?

That was a result of changeset copy and paste. It should not have that restriction.

@ibacher and @dkayiwa I have seen a bunch of these in the file I’m currently working on. Should I wait for some updates, or do I just remove those restrictions myself, at least for our current implementation of OpenMRS, since we’re just doing an assessment?

Just remove them for now. We’ll likely do the same thing anyways.

Sounds good @ibacher. So for this changeset, I will be removing the restriction:

<changeSet id="20090414-0812" author="bwolfe" dbms="mysql">
  <preConditions onFail="MARK_RAN">
    <not><columnExists tableName="concept_word" columnName="concept_word_id" /></not>
  </preConditions>
  <comment>Adding integer primary key to concept word table</comment>
  <addColumn tableName="concept_word">
    <column name="concept_word_id" type="int(11)" autoIncrement="true">
      <constraints nullable="false" primaryKey="true" primaryKeyName="concept_word_pk" 
                   unique="true" uniqueConstraintName="concept_word_id_unique" />
    </column>
  </addColumn>
  <modifySql dbms="mysql">
    <append value=" FIRST" />
  </modifySql>
</changeSet>

But then there’s a modifySql tag that I don’t know what to do with.
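
My first thought is to just leave the modifySql element alone, since it already scopes itself to MySQL through its own dbms attribute, so the edit might only touch the changeSet tag, but I’m not sure that’s right:

<changeSet id="20090414-0812" author="bwolfe">
  <!-- preConditions / comment / addColumn unchanged from above -->
  <modifySql dbms="mysql">
    <append value=" FIRST" />
  </modifySql>
</changeSet>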

It seems like modifying the changesets is a very laborious task. What do you think about leaving the Liquibase files alone, converting the resulting MySQL database to a SQLite database, and then just connecting OpenMRS to the SQLite database instead? This feels troublesome for maintenance, though, for example when OpenMRS releases a new version and we try to update to it.

For a POC, sure, that makes sense. As you note, though, I don’t think that technique would work well for maintenance or production, but just to see if things work it’d be fine.

@ibacher I undid all of the changes I made to the Liquibase files, dumped the MySQL database, and converted it to a SQLite database that I mounted (the conversion step is sketched further down). I then created a new docker-compose.sqlite.yml file:

version: "3.7"

services:
  api:
    image: openmrs/openmrs-core:${TAG:-nightly}
    build: .
    ports:
      - "8080:8080"
    environment:
      OMRS_DB_DRIVER_CLASS: ${OMRS_DB_DRIVER_CLASS:-org.sqlite.JDBC}
      OMRS_DB_URL: ${OMRS_DB_URL:-jdbc:sqlite:/openmrs/data/database.sqlite3}
      OMRS_HIBERNATE_DIALECT: ${OMRS_HIBERNATE_DIALECT:-org.sqlite.hibernate.dialect.SQLiteDialect}
    volumes:
      - openmrs-data:/openmrs/data/
      - ${PWD}/sqlite/database.sqlite3:/openmrs/data/database.sqlite3

volumes:
  openmrs-data:

This should fully disconnect from the MySQL database and only connect to the mounted SQLite database on /openmrs/data/database.sqlite3.
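
For reference, the dump-and-convert step was roughly like this, assuming the third-party mysql2sqlite script; the exact flags and database name may differ:

# Dump the existing MySQL schema and data (database name assumed to be "openmrs")
mysqldump --skip-extended-insert --compact -u root -p openmrs > openmrs-mysql.sql

# Convert the dump and load it into the SQLite file that gets mounted into the container
./mysql2sqlite openmrs-mysql.sql | sqlite3 sqlite/database.sqlite3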

I deleted all of the images/containers/volumes from docker and even pruned:

docker system prune -a

I then rebuilt the image:

docker compose build --build-arg MVN_ARGS='install -DskipTests'

I then started the container:

docker compose -f docker-compose.sqlite.yml up -d

Nothing seems to have been broken, but somehow the changes I made to install the SQLite JDBC driver and the SQLite dialect didn’t do anything. I can no longer find the JAR files inside the webapp’s WEB-INF/lib folder, even though the POM files still list them as dependencies. Any idea why?
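
For what it’s worth, I was checking for the jars with something along these lines (the name pattern is a guess):

# Look for the SQLite driver / dialect jars anywhere in the running container
docker exec openmrs-core-api-1 find / -iname '*sqlite*.jar' 2>/dev/null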

On a side note, when I go inside the openmrs-core-api-1 container I get:

wait-for-it.sh: timeout occurred after waiting 3600 seconds for localhost:3306

This is confusing because, in the startup.sh file, I commented out the following lines:

# source /openmrs/startup-init.sh
# echo "Waiting for database to initialize..."
# /openmrs/wait-for-it.sh -t 3600 -h "${OMRS_DB_HOSTNAME}" -p "${OMRS_DB_PORT}"

You probably need to make some changes to the Dockerfile if you’re still running the distro-platform project. As configured, this still downloads and builds the distro using our images without your changes.

You can probably fix things by changing your docker-compose file to:

  1. Not have the build: . line
  2. Instead of image: openmrs/openmrs-core:${TAG:-nightly} point to the same image you have running as the openmrs-core-api-1 container.
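
In other words, something roughly like this; the image value is a placeholder for whatever the inspect command below reports:

services:
  api:
    # use the already-built local image rather than rebuilding or pulling
    image: <image-reported-by-docker-inspect>
    ports:
      - "8080:8080"
    # ...rest of the service definition unchanged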

You can get the image from the container by running:

docker inspect --format='{{.Config.Image}}' openmrs-core-api-1

I am not working on the openmrs-distro-platform repo yet, as I am not worried about packaging the components together for now. All of the changes I’ve made so far are in the openmrs-core repo.

Also, when I ran

docker inspect --format='{{.Config.Image}}' openmrs-core-api-1

I got openmrs/openmrs-core:nightly which is the same image you’re asking me to replace.

That should be fine then. I mostly think you should skip the docker compose build step, because it seems to be reloading the base images.

So I pruned everything again:

docker system prune -a

And then skipped the build step, just went straight to running the container.

docker compose -f docker-compose.sqlite.yml up -d

But the container logs show:

2023-03-23 09:36:34 Initiating OpenMRS startup
2023-03-23 09:36:34 Deleting modules, OWAs and configuration from OpenMRS
2023-03-23 09:36:34 Loading distribution artifacts into OpenMRS
2023-03-23 09:36:34 Writing out /openmrs/openmrs-server.properties
2023-03-23 09:36:34 Waiting for database to initialize...
2023-03-23 09:36:34 wait-for-it.sh: waiting 3600 seconds for localhost:3306

I don’t understand why the log line about database initialization is showing up when I commented it out of the startup.sh script. I searched for the string in the project, and that seems to be the only file that has it.
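
For what it’s worth, I assume I could double-check which startup.sh the container is actually running with something like this (the path is guessed from the other scripts):

# Show whether the wait-for-it call is still present in the script inside the container
docker exec openmrs-core-api-1 grep -n wait-for-it /openmrs/startup.sh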

Well, if you pruned everything, it would’ve downloaded openmrs/openmrs-core:nightly from Docker Hub, which is not the image you customised. If that image still exists locally, I guess you could pull out the exact identifier by running:

docker inspect --format='{{.Image}}' openmrs-core-api-1
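
If it does still exist, one way to keep hold of it would be to re-tag it locally and point the compose file at that tag; this is just a sketch, with an arbitrary tag name:

# Re-tag the exact image the container was created from, so a prune or pull can't replace it
IMAGE_ID=$(docker inspect --format='{{.Image}}' openmrs-core-api-1)
docker tag "$IMAGE_ID" openmrs-core:local-sqlite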