Who is responsible for ERR tickets?

Every week, JIRA notifies me about TRUNK tickets that need assessment, so once in a while I curate a few of them to keep the number from growing. But I have just noticed another category that I had never put into my routine. Should these also be the responsibility of the community dev swim lane lead? https://issues.openmrs.org/issues/?jql=project%20%3D%20ERR


For those who do not have the time to open that link, there are already 795 tickets. :blush:

Dusting off the history books …

These are created through our poorly-designed dependency on JIRA for reporting errors/crashes within the application. We built a simple PHP script called SCRAP (Superfluous Crash Report Acceptance Processor) to format the error data for JIRA and submit it via HTTP to the ERR project.
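As a rough illustration (not the actual SCRAP code, which is PHP), a relay like this essentially just builds a JIRA "create issue" payload and POSTs it. The sketch below assumes JIRA's standard REST endpoint (`POST /rest/api/2/issue`); the `ERR` project key comes from this thread, and everything else (field values, the helper name) is made up for the example:

```python
import json

# Hypothetical sketch of the payload a SCRAP-like relay would build for
# JIRA's REST "create issue" endpoint (POST /rest/api/2/issue).
# Field names follow JIRA's documented issue schema; the "ERR" project
# key is from this thread, all other values are placeholders.
def build_err_issue(summary, description, affected_version="Unknown"):
    return {
        "fields": {
            "project": {"key": "ERR"},
            "issuetype": {"name": "Bug"},
            "summary": summary,
            "description": description,
            # JIRA's "Affects Version/s" field; populating it reliably
            # makes later triage queries much easier.
            "versions": [{"name": affected_version}],
        }
    }

payload = build_err_issue(
    "NullPointerException in PatientService",
    "Stack trace: ...",
    affected_version="1.9.7",
)
print(json.dumps(payload, indent=2))
```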

The idea was that our bug report triage role (deprecated) would regularly review them for validity/reproducibility and, if appropriate, convert them into bug issues in the appropriate JIRA project(s).

I believe the web form at http://openmrs.org/help/report-a-bug/ also submits data to this same project.

So yes, until a better technique is developed for crash reports (which I’d personally strongly encourage!), it’s important that someone in the engineering group reviews them to catch important issues.

Do we have a way of getting the reporter? That is, for follow-up questions, or even for assigning them as reporter when creating the appropriate JIRA ticket.

This is a weakness of the system. Basically the reports are submitted anonymously because there’s no linkage to our OpenMRS ID identity which JIRA would use for “Reporter”. (And I don’t think we want to build a dependency of OpenMRS on those systems!)

An improved system would prompt people to provide an email address for follow-up.

Great discovery @dkayiwa!

There are 102 created this year. 22 of them were manual (web form) submissions.

A quick look shows that there’s a fair number of spam messages (blank description, single word description, bot-generated, etc.). I’m also suspicious that some of these are generated from development systems or custom configurations of systems that aren’t appropriate for the community to be trying to address. That said, there is probably some useful, actionable information amongst these reports.

At the moment, it looks like you can distinguish the auto-generated from manually submitted (web form) issues because the auto-generated issues do not properly populate the affectedVersion. So, `project = ERR and affectedVersion != "Unknown"` yields the 209 manually submitted issues.
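For anyone scripting this triage rather than clicking through the issue navigator, that JQL can also be run against JIRA's documented REST search endpoint (`/rest/api/2/search`). A minimal sketch, assuming the issues.openmrs.org base URL from this thread (authentication and paging are left out):

```python
from urllib.parse import urlencode

# Sketch of turning the triage JQL into a REST search call against
# JIRA's /rest/api/2/search endpoint. The base URL is the OpenMRS
# instance mentioned in the thread; maxResults and auth are omitted
# assumptions.
BASE = "https://issues.openmrs.org/rest/api/2/search"

def search_url(jql, max_results=50):
    return BASE + "?" + urlencode({"jql": jql, "maxResults": max_results})

manual_reports = 'project = ERR AND affectedVersion != "Unknown"'
print(search_url(manual_reports))
```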

Some things that would help:

  • Automatically set issue type or label so we can reliably distinguish between automated vs. manually submitted (via web form) errors.

  • Populate affectedVersion when automatically generating issues, so we can more easily look at tickets by version.

  • Add an “Ignore” action to the workflow, so it’s easier to quickly close tickets that require no further action (e.g., an “Ignore” action that closes as “Won’t Fix” would make it quick to discard spam/noise). Neither “Cannot Reproduce” nor “Duplicate” is appropriate for spam or other forms of noise (tickets that lack enough information, come from unsupported systems, etc.).

  • @bwolfe is probably not the appropriate lead for the project. :wink:

  • Have someone take a first pass through the existing tickets to glean additional insights beyond what I got in ~5 minutes of looking at them – e.g., are there some general categories of problems we can identify? Do we think there’s a way to identify multiple tickets for the same error, so we could focus on five issues generating 300 reports instead of seeing these as 900+ independent reports?
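On that last point, one plausible approach to collapsing 900+ reports into a handful of distinct errors is to normalize each stack trace into a "signature" and count by signature. This is only a sketch; the normalization rules (stripping hex addresses and line numbers) are my assumptions about what varies between otherwise-identical crash reports:

```python
import re
from collections import Counter

# Hypothetical dedup sketch: collapse many auto-generated reports into
# distinct error signatures by normalizing the first line of each stack
# trace (hex addresses and digits are replaced with placeholders).
def signature(stack_trace):
    first_line = stack_trace.strip().splitlines()[0]
    sig = re.sub(r"0x[0-9a-fA-F]+", "<addr>", first_line)  # memory addresses
    sig = re.sub(r"\d+", "<n>", sig)                       # line numbers, ids
    return sig

reports = [
    "java.lang.NullPointerException at PatientServiceImpl.java:142",
    "java.lang.NullPointerException at PatientServiceImpl.java:187",
    "org.hibernate.LazyInitializationException at Encounter.java:55",
]
counts = Counter(signature(r) for r in reports)
for sig, n in counts.most_common():
    print(n, sig)
```

With real data, sorting these counts would show immediately whether a few signatures account for most of the volume.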

We do need a policy for handling these, though I don’t think it’s reasonable to approach ERR as a support desk (as if it’s our responsibility to address any error that is generated); rather, I’d use ERR as a resource for discovering bugs in the system.

For example:

  • Prioritize reviewing manually submitted reports
  • Iterate on ways to filter out bogus/spam reports
  • Use queries to cull useful information from the auto-generated reports (e.g., identify when the same error has been reported a hundred times)
  • Eventually, use bots to recognize previously identified errors and handle them automatically.
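For the "filter out bogus/spam reports" step, even a crude screen on the description field would catch the blank and single-word submissions noted earlier. A minimal sketch; the thresholds and the link-count heuristic are arbitrary assumptions, not anything we run today:

```python
# Rough spam-screening sketch for ERR report descriptions, based on the
# patterns noted in this thread (blank or single-word descriptions,
# bot-generated noise). Thresholds are arbitrary assumptions.
def looks_like_spam(description):
    text = (description or "").strip()
    if not text:
        return True                      # blank description
    if len(text.split()) < 3:
        return True                      # single-word / near-empty
    if "http://" in text or "https://" in text:
        return text.count("http") > 2    # link farms, a common bot tell
    return False

reports = ["", "help", "NullPointerException when saving a patient encounter"]
print([looks_like_spam(r) for r in reports])  # → [True, True, False]
```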

Very good tips! Keep them coming! :slight_smile: