Bahmni implementation: creating a robust deployment

Hi all, we are facing a sync issue with our Bahmni implementation: sometimes the syncing of new patients to OpenELIS/OpenERP stops during working hours, or lab requests stop syncing to OpenELIS. Each time this happens, we have to stop the openmrs, openerp, bahmni-lab, bahmni-erp-connect and atomfeed-console services and then empty the markers, event_records and failed_events tables of the openmrs, openerp and clinlims databases. This creates a lot of user frustration because it disrupts their work. While searching the Bahmni implementation wiki, I came across this page, which talks about creating a robust deployment. I thought it could help me solve my problem, but I can’t understand its content or how to implement it. Could someone help us understand how to implement the content of this page in order to solve our synchronization problem, or simply help us solve this synchronization problem which is causing us so much pain? We are using Bahmni version 0.91. Thank you for your support, Saidi

One of the reasons the markers stop working could be incorrect master data setup. For instance, a patient is missing in the target system, or a test is missing, or a drug is missing, etc. While syncing an item from OpenMRS to Lab/Odoo, the sync service looks for the exact item to update; if what it finds is different from what it is looking for, it won’t make updates and will move the event to the failed queue.

Maybe next time you see this issue, paste the contents of the FAILED_EVENTS and other queues so we can see the error message and the reason for failure. This will help pinpoint whether some entities are being modified manually by users in ELIS/Odoo and causing sync failures.

Also - what is the hardware configuration of the servers on which you are running Bahmni and databases? RAM/CPU/etc. Please also check for free space and swap space on those servers.

cc: @arjun @angshuonline @sthote @mohant – any tips?

As Gurpreet pointed out, most of the time sync issues happen when the master data setup becomes inconsistent.

A scenario that I have come across: a concept (e.g. a panel) is added and then deleted while the consuming application is not running; when the atomfeed runs again, this can cause a failed event. In this situation, updating the marker table resumes the sync.

Also, when you have a lot of master data (for example in OpenMRS) and your consuming application has an issue processing the events, sync stops entirely once a certain count of FAILED_EVENTS is reached. This can be fixed by making the application stable and then resetting the marker table.

There are a few things that I would like to mention about troubleshooting sync:

  1. Never, ever delete the entries from the “markers” table (on the client system) - it will just cause the feeds to be processed all over again. Most of the client code is written to be “idempotent”, so hopefully no untoward state change will happen, but this invites more troubleshooting and heartburn!

Again: do not delete entries from the “markers” table, unless you really want to set up and process right from the beginning!

  2. Get some understanding of the protocol - you can read more about it here. You do not need to go deep; if you understand a “doubly linked list”, you understand most of it.
  • Feed(s) and the entries in them are produced by the server owning/serving a particular piece of information - the producer.
  • Feed(s) and their entries are consumed by a client - the consumer.
  • The consumer remembers the point up to which it has read and processed, and resumes from that point on the next run.

You can skip the deeper details, but do progress systematically instead of shooting in the dark!
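To make the producer/consumer roles above concrete, here is a minimal Python sketch (illustrative only - the real atomfeed client is a Java library, and the function below is an assumption, not its API). The marker dictionary mirrors the columns visible in Bahmni’s markers table: feed_uri, last_read_entry_id and feed_uri_for_last_read_entry.

```python
def consume(entries, marker, process):
    """Process every feed entry published after marker['last_read_entry_id'],
    updating the marker as we go so the next run resumes at the right place."""
    start = 0
    if marker["last_read_entry_id"] is not None:
        # resume just after the last entry we already processed
        start = entries.index(marker["last_read_entry_id"]) + 1
    for entry in entries[start:]:
        process(entry)                        # e.g. create the lab order in ELIS
        marker["last_read_entry_id"] = entry  # remember progress
    return marker

# usage: two consumer runs against a growing feed
seen = []
marker = {"feed_uri": "http://omrs/feed/recent",
          "last_read_entry_id": None,
          "feed_uri_for_last_read_entry": None}
consume(["e1", "e2", "e3"], marker, seen.append)
consume(["e1", "e2", "e3", "e4"], marker, seen.append)  # only e4 is new
```

This is why deleting the marker is so disruptive: with last_read_entry_id gone, the next run starts from the beginning of the feed and replays every entry.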

  3. Know which system you are dealing with - there is no need to restart all the systems! In simple words, the entries in feeds are consumed by the consumer. So start from the consumer - if a lab order is not reflected in the LIS, there are a number of things that can go wrong, so don’t assume the sync is at fault. You must investigate to figure out what’s wrong …
  • Check the “failed_events” table on the client (e.g. in the LIS, that is the clinlims db).
  • There may be many entries in there … until a threshold is reached, the client will keep retrying. After that, the circuit breaker kicks in, meaning you must fix the errors before it can safely run again.
  • Has the threshold been reached? Run the following query:

select count(*) from failed_events

  • Check the properties in “/opt/bahmni-lab/etc/” … in particular the two properties feed.maxFailedEvents and feed.failedEventMaxRetry.

  • If the above count equals “feed.maxFailedEvents”, then no further events will be processed.

  • A particular event will be retried until it reaches 5 retries, or the value of “feed.failedEventMaxRetry” if overridden. Find the corresponding record in failed_events and reset failed_events.retries for that event to 0; it will be retried again. Keep observing the logs and also “failed_events.error_message” - try to make sense of the error. Maybe the reference data is wrong, maybe the sample type or the department of the test is wrong … fix that and set failed_events.retries to 0 again.

  • If you think the client is alright and there is no reason for it to fail, maybe the problem is at the producer. Check “failed_events.event_content” - it will be a URL … try hitting the URL (with the right credentials, of course) and see if the response is alright. For example, if a test was deleted while processing was lagging behind, then you really need to fix the data on the producer side (i.e. OMRS).

  • In some cases, where the data really is at fault, remove the entry in “failed_events” for that event.

  • In some cases, where you have changed the producer system (e.g. moved OMRS to another IP or machine), you need to change the marker entry to point to the right URL (and also update the property file).

  • Only in extreme cases will you need to “advance” the marker (i.e. to the next entry of a feed) … but I would not advise that for now.
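The threshold check and retry reset described above can be sketched end to end. This is a hedged Python/SQLite reproduction of the failed_events bookkeeping (the real tables live in each consumer’s MySQL/Postgres database, and the threshold constant below is a made-up stand-in for the feed.maxFailedEvents property; only the column names - retries, error_message, event_content - come from the discussion above):

```python
import sqlite3

MAX_FAILED_EVENTS = 10  # stand-in for the feed.maxFailedEvents property

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE failed_events (
    id            INTEGER PRIMARY KEY,
    event_content TEXT,     -- URL of the original event on the producer
    error_message TEXT,
    retries       INTEGER DEFAULT 0)""")
db.execute("INSERT INTO failed_events (event_content, error_message, retries) "
           "VALUES ('http://omrs/openmrs/ws/atomfeed/patient/1', "
           "'Obs id is null', 5)")

# Step 1: has the circuit-breaker threshold been reached?
(count,) = db.execute("SELECT count(*) FROM failed_events").fetchone()
tripped = count >= MAX_FAILED_EVENTS

# Step 2: read error_message (and hit the event_content URL) to diagnose,
# fix the underlying data, then reset retries so the client retries the event.
db.execute("UPDATE failed_events SET retries = 0 WHERE id = 1")
```

The same two statements, run against the consumer’s real database, are the safe alternative to wiping the markers table.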

  4. Finally, and yet again - please do not clear the markers on the consumer system at random. It’s like sweeping dirt under the rug! Sooner or later, it will come back to bite you!


It would be very helpful if, besides the error message “Obs id is null”, one could see more details … like which patient/observation/test the error occurs for, and why. I think that would make it easier for people to fix the issue without fully getting into the feed protocol (since that is an implementation concern).

I think we should figure out some way to make it easier for implementers to debug the issue. The atom feed console was supposed to fix that problem … but it doesn’t seem to have. We likely need to review the error-handling code and see if there is a better way to surface the real problem.

Thank you guys for your interventions. I tried to follow your guide step by step, but I still can’t figure out why our implementation is failing. Indeed, by following the logs, especially on the openerp side, I noticed that new patients continue to sync to openerp and clinlims, but the orders stop syncing. Moreover, the failed_events tables remain empty in openmrs and clinlims; only openerp’s failed_events table contains entries. At first I thought there might be a lack of hardware resources, especially RAM, so I upgraded to 24 GB of RAM, but nothing has changed. Attached are some of the tables’ contents and openerp’s logs. Thanks again for your involvement.

markers_202203101007_openmrs.csv (235 Bytes) markers_202203101006_openerp.csv (537 Bytes) markers_202203101005_clinlims.csv (435 Bytes) failed_events_202203100956_openerp.csv (72.6 KB)

I am not an Odoo expert, but I see this error message in your odoo logs:

2022-03-10 07:53:15,095 3080 INFO openerp openerp.addons.bahmni_atom_feed.atom_feed_client: {'category': 'create.customer', 'feed_uri': '', 'uuid': 'f162d39d-1314-4187-824c-12d1bb3f802b', 'local_name': '', 'feed_uri_for_last_read_entry': '', 'preferredAddress': '{"address1":"RUMONGE","address2":"RUMONGE","address3":null,"cityVillage":"MUTAMBARA","countyDistrict":null,"stateProvince":"BURUNDI","country":null}', 'village': 'MUTAMBARA', 'last_read_entry_id': '', 'attributes': '{"nomtuteur":"XX","MaritalStatus":"Single","occupation":"SANS","secondaryRelative":"XX","primaryRelative":"XX","langue":"French","XX":"Non Displaced"}', 'ref': 'OMRSXXXX', 'name': 'XXXXX'}

2022-03-10 07:53:15,282 3080 WARNING openerp openerp.osv.orm: No such field(s) in model res.partner.attributes: x_StatusRefugie, x_MaritalStatus, x_occupation, x_nomtuteur.

Possibly you need to configure certain patient attributes in Odoo. See this: Data sent to OpenERP/oodo - #7 by ramashish


In Bahmni 0.88.244 (historical!), res.partner.attributes saved the person attributes in <name, value> format, i.e. the (good old!) code saved the attribute key in the ‘name’ column and its value in the ‘value’ column.

Though res.partner.attributes still has the ‘name’ and ‘value’ columns, the code changed at this point in the history and started storing each attribute as a custom column!

So the code forces implementers to create columns in the res.partner.attributes model named “x_<attribute key>”.

In your case, as @gsluthra has pointed out, the log shows that the following columns are missing from the res.partner.attributes model:

x_StatusRefugie, x_MaritalStatus, x_occupation, x_nomtuteur

These attributes are sent to the ERP sync code only when they are not null.

So, (most likely!) you will be able to replicate the issue as follows:

  1. Will not sync to ERP - create a patient with a value for at least one of these attributes.
  2. Will sync to ERP - create a patient with no value for any of these attributes.

The bottom line is, you will have to create these attributes in the res.partner.attributes model as shown in this wiki gif/video in order to consistently sync patients to ERP.
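The failure mode can be illustrated with a small sketch (the function and variable names below are hypothetical, not the actual bahmni_atom_feed code): each non-null person attribute is mapped to a column named “x_<attribute key>”, and the write is rejected when that column does not exist on res.partner.attributes.

```python
# Columns assumed to exist on the model; 'name' and 'value' are the legacy
# columns mentioned above, with no custom x_ columns created yet.
EXISTING_COLUMNS = {"name", "value"}

def map_attributes(attributes):
    """Return the x_-prefixed columns the sync would try to write,
    skipping null attributes (those are never sent to the ERP)."""
    return {f"x_{key}": val for key, val in attributes.items() if val is not None}

def missing_columns(attributes):
    """Columns the write would need but the model does not have."""
    return sorted(set(map_attributes(attributes)) - EXISTING_COLUMNS)

# A patient with a value for MaritalStatus fails until x_MaritalStatus exists:
missing_columns({"MaritalStatus": "Single"})  # -> ["x_MaritalStatus"]
# A patient with all-null attributes syncs fine:
missing_columns({"MaritalStatus": None})      # -> []
```

This matches the observed behaviour: patients without values for these attributes sync, and patients with values fail with “No such field(s) in model res.partner.attributes”.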

Hope it helps!


@asaidi, can you check your Odoo under Sales > Settings and see if your order type is defined? Also check that the order type-shop mapping and the shops are not blank. We had the same issue and observed that the shops, the order type-shop mapping and the syncable units were blank.
