I can look into this, but @plypy and @elliott know more and can probably solve it quicker than me; I’d have to look through the code. Part of GSoC this year is fixing backend stuff, so this falls under GSoC. It will definitely be fixed, but a short-term fix should be implemented ASAP…
What is interesting is that this seems to have happened much more frequently in the last week or so…
From the error message ‘Already exists’, it seems that OpenLDAP had already stored that entry before we ldap_add it. So I think we’d better check our OpenLDAP.
FYI, here is the flowchart of the normal process of creating an OpenMRS ID; the error happens at the very last stage, the one involving OpenLDAP.
Hi, @michael. The problems seem to be getting more and more annoying. If you don’t mind, I’d like SSH access to our production server so I can dig deeper into this problem and maybe solve it. Otherwise, all I can do right now is guess randomly based on the very limited error log…
Also, I am very sorry to those who have been disturbed; I’ll try to figure out why… Some parts of the Dashboard seem to have gotten rusty.
During my development I noticed that we’ve lost track of our production version… I think we’d better create a production branch, or something else that can track it.
Also, @michael, please tell me the commit hash of the current Dashboard on production. I’m trying to implement the auto-injection feature for TALK, so I need to know it.
The current master should NOT be deployed in its current state, even post-GSoC; I have work I need to do to bring it to production quality. Currently there are a few things we need to resolve.
So this is still happening several times each week. What’s the best way to track down what’s failing in the code?
Scrutinizing the backtrace again, this error is related to the ldapadd procedure. It’s either a problem in OpenLDAP itself, or somehow the code sent more than one add request for the same entry.
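To illustrate the second possibility, here is a minimal sketch, using a hypothetical in-memory stand-in for the LDAP client (not the Dashboard’s real code), of how a duplicated add surfaces exactly this error. Note that 68 is the standard LDAP resultCode for entryAlreadyExists:

```python
# Hypothetical in-memory stand-in for an LDAP client, to illustrate the
# failure mode. resultCode 68 (entryAlreadyExists) is the standard LDAP
# code behind the "Already exists" message.
ENTRY_ALREADY_EXISTS = 68

class LDAPError(Exception):
    def __init__(self, code, message):
        super().__init__(message)
        self.code = code

class FakeLDAPClient:
    def __init__(self):
        self.entries = {}

    def add(self, dn, attrs):
        # A second add for the same DN fails, just like OpenLDAP does.
        if dn in self.entries:
            raise LDAPError(ENTRY_ALREADY_EXISTS, "Already exists")
        self.entries[dn] = attrs

client = FakeLDAPClient()
dn = "uid=plypy,ou=users,dc=openmrs,dc=org"
client.add(dn, {"uid": "plypy"})      # first add succeeds
try:
    client.add(dn, {"uid": "plypy"})  # a duplicated add...
except LDAPError as e:
    print(e.code, e)                  # ...reports 68 "Already exists"
```

So whether the duplicate add comes from our code or from something else writing to the directory, the symptom at the Dashboard would look the same.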
Based on your workaround and my guess at the current state of the Dashboard in production (which should have no delete-sync logic from Mongo to LDAP, but can dynamically sync users from LDAP to Mongo), here is what happens in your workaround:
1. The user is deleted from Mongo, but remains in LDAP.
2. A sync is triggered from LDAP to Mongo.
3. The sync in step 2 doesn’t create…
I’m examining the code as well. But really, the commit hash of the current production server would be a big help!
I believe I can work out a temporary solution for this; please send me the hash and I’ll change the code to ignore this error when it happens.
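To be concrete, the temporary patch I have in mind is just a guard around the add call. This is only a sketch: the function and client names are illustrative, not the Dashboard’s actual API, and 68 is the standard LDAP resultCode for entryAlreadyExists:

```python
# Sketch of the temporary fix: treat entryAlreadyExists (LDAP resultCode
# 68) as non-fatal during signup, since the entry we wanted is already in
# the directory. Names here are illustrative, not real Dashboard code.
ENTRY_ALREADY_EXISTS = 68

class LDAPError(Exception):
    def __init__(self, code, message):
        super().__init__(message)
        self.code = code

def add_user_ignoring_duplicates(client, dn, attrs, log=print):
    try:
        client.add(dn, attrs)
        return "added"
    except LDAPError as e:
        if e.code == ENTRY_ALREADY_EXISTS:
            # Safe to ignore for now, but log it so we can keep
            # counting how often this actually happens.
            log("ignoring duplicate LDAP add for %s" % dn)
            return "already-existed"
        raise  # anything else is still a real failure
```

Any other LDAP error still propagates, so this only papers over the one case we believe is benign.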
Anything I can do to help out with this? @plypy, I can do some work on production if you need it. I’ll PM you the prod hash.
Thanks @elliott; I was tied up this weekend. @plypy, in case you need access to production or have specific questions, probably the most reliable way to get answers is by email to helpdesk@ or a new case at https://help.openmrs.org/
Keep up the good work!
I can’t find the commit @elliott has provided…
So for the sake of efficiency I’ve already sent my request for SSH access.
I think we should just patch production and then port the same fix back into our development branch…
From the error log I downloaded from the production server and some tests I ran, I can say:
1. This error happens more frequently, and started earlier, than we thought: since April or even before, 5 or more times a week.
2. This problem has created lots of orphan data that exists only in LDAP but not in Mongo.
3. Personally I’d say some other LDAP component is to blame, not the Dashboard, as LDAP reports “Already Exists” even though the Dashboard performed only one insert operation… This sucks.
4. Besides, the log provides very little useful information; this needs to change.
We should create a production branch on the openmrs repo and deploy this ASAP.
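As for the orphan data mentioned above, a cleanup pass could start with a simple set difference between the two stores. The lookups below are placeholders for the real LDAP search and Mongo query:

```python
def find_orphans(ldap_usernames, mongo_usernames):
    """Return usernames present in LDAP but missing from Mongo.

    Both arguments are plain iterables of usernames; in the real
    Dashboard they would come from an LDAP search and a Mongo query.
    """
    return sorted(set(ldap_usernames) - set(mongo_usernames))

# Example: 'ghost' exists only in LDAP, so it is an orphan.
print(find_orphans(["alice", "bob", "ghost"], ["alice", "bob"]))  # ['ghost']
```

Whether we then delete the orphan LDAP entries or re-create the Mongo records is a separate decision; this just finds them.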
Interesting. Was it really just an ignorable error?
When I looked at the “bad” accounts in Formage, they showed status Locked. Is that just because they did not complete the email confirmation process?
Yep, see the flowchart I posted and my analysis above: something related to the LDAP operation is broken and reports an “Already Exists” error, but from the Dashboard’s perspective we can safely ignore it.
I made the big mistake of rewriting git history, so the code in production was a bit behind… how diverged is your production branch? That’s my only concern.
Protip: rewriting git commit history that others have based work on is bad… if it’s only your own work and others don’t have it, then it’s fine.
This branch diverges from the current state of the production server, which at this point is at 00c223e, though someone accidentally made a dumb commit on the production server…
That said, you can safely merge and deploy this on production, and from now on we can track the production state in this branch.