(Platform) Increased response times for requests related to identifier types and identifier sources

Hi @Platform_Team,

I am writing this Talk post about increased response times for a few requests, as detailed below:

  1. Fetching all identifiers

When we fetch all the identifiers from the endpoint /ws/rest/v1/patientidentifiertype?v=full, the response times are:

On dev3: avg. time to fetch all identifiers is 1.9s

On localhost (proxying to dev3): > 7s

  2. Fetching auto-generation options

On dev3: 300ms (but when last observed it was > 1 min)

On localhost (proxying to dev3): 1.9 min


  3. Fetching identifier sources for every identifier type: ws/rest/v1/idgen/identifiersource?identifierType={identifierTypeUUID}

On dev3: a few requests take 250ms, and a few take up to 28s


On localhost: 1 sec - 1 min


For the UI we have implemented, the order of requests is as follows:

  1. Fetching primary identifier
  2. Fetching all identifiers
  3. Fetching all the auto-generation options
  4. Fetching all the identifier sources for every identifier type.

This chaining (sketched below) takes a long time to load the patient identifier types with their sources and auto-generation options.
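For illustration, the chained flow looks roughly like this. openmrsFetch is the fetch wrapper from @openmrs/esm-framework; the function name and serial structure are illustrative, not the actual registration-app code:

```typescript
import { openmrsFetch } from '@openmrs/esm-framework';

// Illustrative sketch of the chained flow described above (hypothetical
// function, not the real registration-app code).
async function loadIdentifierData() {
  // Steps 1-2: fetch the primary identifier, then all identifier types.
  const { data } = await openmrsFetch<{ results: Array<{ uuid: string }> }>(
    '/ws/rest/v1/patientidentifiertype?v=full',
  );

  // Steps 3-4: one auto-generation-options request and one identifier-source
  // request per identifier type, each awaited in turn. With 6 identifier
  // types and slow responses, these serial waits dominate the load time.
  for (const type of data.results) {
    await openmrsFetch('/ws/rest/v1/idgen/autogenerationoption?v=full');
    await openmrsFetch(`/ws/rest/v1/idgen/identifiersource?identifierType=${type.uuid}`);
  }
}
```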

This is a performance issue, and it also blocks E2E testing, since the page takes more than 3 minutes to load.

CC: @jayasanka

Does it also take more than 3 mins outside of the testing environment (on a live dev3 server)?

For me, https://dev3.openmrs.org/openmrs/ws/rest/v1/patientidentifiertype?v=full takes 246ms on the first request and ~130ms on subsequent requests. Running this through the Webpack proxy takes about 200ms per request.

First, I notice the start script in patient-management is misconfigured (it currently runs openmrs develop --sources 'packages/esm-*-app', which means it's starting 7 Webpack dev servers). Fixing that will presumably help quite a bit; in fact, this is far and away the single change I think will make the biggest difference.
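One possible shape of that fix in patient-management's package.json, assuming only the registration app needs a dev server here (the exact script layout in the repo may differ):

```json
{
  "scripts": {
    "start": "openmrs develop --sources 'packages/esm-patient-registration-app'"
  }
}
```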

Second, when I go to dev3, I see we are almost immediately making 6 requests (1 per identifier type) to https://dev3.openmrs.org/openmrs/ws/rest/v1/idgen/autogenerationoption?v=full, which seems redundant since there’s no parameter that varies by identifier type. Subsequently, if I actually open the registration app, I see 6 more requests to that same endpoint. From the speeds you’re showing, reducing that to one call should shave about 6 seconds off the total request time.
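A minimal sketch of that idea, assuming the options can be shared via a single SWR hook (the hook name and response typing are my assumptions, not the app's actual code):

```typescript
import useSWRImmutable from 'swr/immutable';
import { openmrsFetch, type FetchResponse } from '@openmrs/esm-framework';

// Hypothetical hook: fetch the auto-generation options once and let every
// identifier type share the same cached result, since the request does not
// vary by identifier type.
export function useAutoGenerationOptions() {
  const { data, error } = useSWRImmutable<FetchResponse<{ results: Array<unknown> }>>(
    '/ws/rest/v1/idgen/autogenerationoption?v=full',
    openmrsFetch,
  );
  return { options: data?.data?.results ?? [], error, isLoading: !data && !error };
}
```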

Third, there’s a request-per-identifier-type to the /openmrs/ws/rest/v1/idgen/identifiersource endpoint, which makes sense; however, the only non-empty response for those requests is the one for the single identifier returned from the autogenerationoption endpoint… Could we maybe leverage that to reduce the number of calls we need to make to those endpoints? (I also see these requests loaded initially and then re-done if I open the registration app, which is also duplicate work we probably don’t need to do).
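A sketch of what that could look like (the identifierType field on the auto-generation options is an assumption about the idgen response shape, and the helper is hypothetical):

```typescript
import { openmrsFetch } from '@openmrs/esm-framework';

// Hypothetical helper: only request identifier sources for the identifier
// types that actually appear in the auto-generation options, and fire those
// requests in parallel instead of one serial request per identifier type.
async function fetchRelevantIdentifierSources(
  autoGenOptions: Array<{ identifierType: { uuid: string } }>,
) {
  const relevantTypes = new Set(autoGenOptions.map((option) => option.identifierType.uuid));
  return Promise.all(
    [...relevantTypes].map((uuid) =>
      openmrsFetch(`/ws/rest/v1/idgen/identifiersource?identifierType=${uuid}`),
    ),
  );
}
```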

Reducing the number of calls we’re making should speed things up since there’s less request blocking and also less work for the local proxies to do…

Hi Ian, I was only running the registration app, using yarn turbo run start --filter=@openmrs/esm-patient-registration-app.

> Second, when I go to dev3, I see we are almost immediately making 6 requests (1 per identifier type) to https://dev3.openmrs.org/openmrs/ws/rest/v1/idgen/autogenerationoption?v=full, which seems redundant since there’s no parameter that varies by identifier type.

I have opened a PR with an improvement for this: (fix) O3-1901: Patient Registration Page taking too long to load by vasharma05 · Pull Request #573 · openmrs/openmrs-esm-patient-management · GitHub. So this is something that should be improved by that change.

> (I also see these requests loaded initially and then re-done if I open the registration app, which is also duplicate work we probably don’t need to do).

Yes Ian, I’ll look into this too. Actually, the first set of requests is made by the offline-tools, and the second set you see is made by useSWRImmutable (here), calling the same functions.

Oh… I see… so there’s a pre-caching stage; I guess we’ll have to somehow retain that. Maybe a slightly different design, then: can we use the current offline flow but push the values into the SWR cache using the mutate function (assuming SWR 2.0)?
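Something like this, perhaps. A minimal sketch assuming SWR >= 2.0, where the global mutate accepts a data argument and a revalidate option; the function name and URL list are illustrative:

```typescript
import { mutate } from 'swr';
import { openmrsFetch } from '@openmrs/esm-framework';

// Hypothetical pre-caching step: reuse the offline flow's fetches, but seed
// the SWR cache under the same keys the registration app's hooks use, so the
// hooks find the data already cached instead of re-fetching it.
async function precacheIdentifierData(urls: Array<string>) {
  await Promise.all(
    urls.map(async (url) => {
      const response = await openmrsFetch(url);
      // populateCache defaults to true when data is provided; revalidate:
      // false keeps SWR from immediately re-fetching what we just stored.
      await mutate(url, response, { revalidate: false });
    }),
  );
}
```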