Amazing future question: Client-side rendering performance

I raised a concern about client-side app performance in Monday’s discussion about single-spa. I appreciate that @joeldenning addressed it; however, I think it warrants further discussion.

For a tablet deployment, which this work is partly intended to facilitate, I think it’s reasonable to expect that server performance will greatly exceed client performance. Even when the clients are computers, it’s likely that the server machine will be more performant than the client machines, which in at least one case I’m very familiar with are mostly old donated laptops.

Going with the industry flow toward client rendering does seem like the right choice, for the myriad reasons we’re familiar with by now. But I think we need to talk about what we can do to prevent performance from becoming a bigger problem.

There are a few dimensions to this question:

  • CPU/RAM: This will probably mostly depend on choice of frameworks, right? Should developers be thinking about some of the lesser-known-but-faster frameworks like Preact, Elm, and Svelte?
  • AJAX/Fetch: I’ve been viewing the shift toward a new UI as an opportunity to do smarter data management and reduce the number of independent AJAX requests, which presently is in the zillions. @joeldenning is against having a consolidated client-side data store, arguing that this would constrain apps too much and be fragile. How might we handle AJAX more efficiently than at present? Can a client-side / single-spa approach facilitate that? Are there deeper changes, such as switching to GraphQL, that might yield bigger improvements? (See the sketch after this list.)
  • Initial page load: Probably not really a problem. Providers will load the thing once and then work for a while. We’re not worried about our bounce rate :slight_smile:
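To make the GraphQL question concrete, here’s a hedged sketch. OpenMRS has no GraphQL endpoint today, and the field names here are purely illustrative; the point is just that one query could replace several independent REST requests:

fetch('/graphql', {
  method: 'POST',
  headers: {'Content-Type': 'application/json'},
  body: JSON.stringify({
    query: `{
      patient(id: "some-patient-uuid") {
        display
        visits { startDatetime }
        orders { display }
      }
    }`,
  }),
})
  .then(response => response.json())
  // One round trip returns the patient plus related data that would
  // otherwise require separate AJAX requests
  .then(({data}) => console.log(data.patient))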

Looking forward to hearing what people think!

My ideal is an architecture that supports both server-side and client-side rendering. Not just for the sake of providing an upgrade path for legacy applications, but also because some developers may have circumstances where server-side rendering is the desired approach, even for new functionality or applications.

For a tablet deployment, which this work is partly intended to facilitate, I think it’s reasonable to expect that server performance will greatly exceed client performance. Even when the clients are computers, it’s likely that the server machine will be more performant than the client machines, which in at least one case I’m very familiar with are mostly old donated laptops.

I think the first question to answer here is which kinds of devices and resolutions we support. This is a great topic to discuss, worthy of an RFC proposal. I don’t know much about OpenMRS’s current approach to mobile/tablet. Here are some open questions that I think an RFC should address:

  • Which combination of phone, tablet, and desktop do we want to support? What is the minimum resolution we support?
  • Does feature parity for desktop and mobile make sense? Some features might be better suited to only one or the other.
  • A single codebase that’s fully responsive, or separate codebases for mobile and desktop?
  • Have there been any talks of a native phone/tablet app? If so, that might change the answers to the other questions.

Client rendering performance

The answers to the above questions would inform our decisions about what we should shoot for with our network performance, CPU, and memory usage.

Generally speaking, phones and tablets are 100% capable of doing client rendering very well and efficiently. They have been doing so ever since smartphones and tablets have existed. Unless we’re supporting very very old phones, client rendering by itself will work just fine. That said, we will want to make sure we don’t let browser memory and network usage get out of hand. One thing I’m proposing that will help with this is the creation and use of common dependencies, which prevents large shared libraries from having to be downloaded and executed more than once.
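As a rough sketch of what I mean (assuming webpack and the SystemJS import-map setup that single-spa recommends; the exact libraries shared would be up to us):

// In each microfrontend's webpack config, declare large shared libraries
// as externals so they are left out of every bundle:
module.exports = {
  // ...the rest of the microfrontend's webpack config
  externals: ['react', 'react-dom'],
}

// The host page then maps those bare module specifiers to one shared copy
// via a SystemJS import map, so React is downloaded, parsed, and executed
// exactly once no matter how many microfrontends use it.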

I’ve been viewing the shift toward a new UI as an opportunity to do smarter data management and reduce the number of independent AJAX requests, which presently is in the zillions.

:+1: Agreed

@joeldenning is against having a consolidated client-side data store, arguing that this would constrain apps too much and be fragile.

I’m against having a shared redux store, because of the implicit contracts that arise between the microfrontends, the actions, and the shape of the data in the store. But I am in favor of preventing duplicate network requests via other strategies, such as a global, in-memory cache of highly used data. Another option is to have a core frontend module “own” the data. Or even using global variables. Preventing lots of duplicate network requests is something I’m committed to.
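A minimal sketch of the “core frontend module owns the data” option; fetchCurrentUser and the caching details are illustrative, not an existing API:

// A core module that owns session data and dedupes in-flight requests
const inFlight = new Map()

export function fetchCurrentUser() {
  if (!inFlight.has('currentUser')) {
    inFlight.set(
      'currentUser',
      fetch('/openmrs/ws/rest/v1/session').then(response => response.json())
    )
  }
  // Every microfrontend gets the same promise, so the network request
  // happens at most once
  return inFlight.get('currentUser')
}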

CPU/RAM: This will probably mostly depend on choice of frameworks, right? Should developers be thinking about some of the lesser-known-but-faster frameworks like Preact, Elm, and Svelte?

When talking about CPU/RAM, there are 1) the initial download of the JavaScript, 2) the parse and initial execution of the JS, 3) all ongoing memory allocated as the JavaScript continues to do things (such as React elements being created), and 4) the JavaScript statements executed as the framework does its thing. I have used Preact, Svelte, and Elm in the past. My understanding is that Preact and Svelte are better than React/Vue/Angular at 1) and 2), but not necessarily at 3) and 4). As for Elm, I haven’t checked the byte size of the compiled runtime library, but I wasn’t under the impression that it was significantly smaller than React/Vue/Angular for 1) and 2), nor better for 3) and 4). Elm’s type checking and compiler are cool, but in the browser it ultimately uses a virtual DOM fairly similar in concept to what React does. I’ve understood Elm to be not a performance revolution but a developer-experience revolution.

The summary of this ^^ is that the major frameworks (React, Vue, Angular) are all used on many existing mobile sites and do fine (or even great). The first step for us at OpenMRS is to figure out what types of devices and resolutions we want to support. I doubt we’ll run into a situation where the extra gzipped 100kb needed to download React (instead of Preact) would be so prohibitive that we couldn’t use React.


I’m against having a shared redux store, because of the implicit contracts that arise between the microfrontends, the actions, and the shape of the data in the store. But I am in favor of preventing duplicate network requests via other strategies, such as a global, in-memory cache of highly used data. Another option is to have a core frontend module “own” the data. Or even using global variables. Preventing lots of duplicate network requests is something I’m committed to.

Interesting! Why would a shared data store be more restrictive than the REST API? Don’t they both present data in a certain shape, which the recipient is responsible for creating views into as necessary? And in what way would contracts between actions or microfrontends be created? Isn’t the point of an authoritative client-side data source that, no matter who does what to the data, everyone always agrees what the data is?

Thanks for all your thoughtful feedback. I’m on board on all other counts.

With respect to the “what should we support” question, just to repeat here what I’ve already said on Slack: mobile-first is a great design paradigm and a great developer experience. One codebase, one feature set. Orgs can fine-tune the grid widths and CSS for whatever screen widths they care about. I defer to designers and UX people on what those designs should look like, but I think this is the path that will be most painless, technology-wise.

Why would a shared data store be more restrictive than the REST API?

  • It’s not easy to use redux in all frameworks. It is (waningly) popular within the React ecosystem, but Vue and Angular have other ways of handling data that are more popular than redux in their ecosystems and directly compete with it. REST, on the other hand, is a universal standard.
  • Redux stores generally don’t have explicit and documented contracts, which makes it harder for them to be maintained and used when there are many microfrontends interacting with each other.
  • The data within a redux store is global state that can be read from and written to by any part of any microfrontend. The redux ecosystem tries to hide that it’s global state by doing things like making you bind action creators and not exposing the dispatch function directly, but it is 100% just a shared global variable. With a shared, complex global variable, you write code hoping that no one else messes up the state and that you don’t mess it up for anyone else.
  • REST is frontend to backend, whereas redux is frontend to frontend. The nature of UI state is different than the nature of database/API state in that it represents transient state such as “is the modal open” instead of API state such as “who is the logged in user.” In my experience, UI state in a global redux store is really hard to do well.
  • Redux use within a frontend project often fundamentally changes how devs write code, whereas using a REST endpoint is so universal that it doesn’t change the architecture of your code.
  • No one can agree on what “good redux” is, because it’s highly subjective and controversial. The main author of redux, Dan Abramov, has updated his blog posts from 2015 about redux to add caveats that the patterns there may be outdated. See “Presentational and Container Components” by Dan Abramov on Medium. The two coauthors of redux disagree on whether it’s good for large applications or small applications. One says “small,” the other says “large.”
  • The main difference is that shared data stores are microfrontend to microfrontend, whereas a REST API is frontend to backend. REST APIs generally have a contract that is written down in documentation, whereas redux stores do not.
  • The React community has moved more and more toward local component state in the last two years. The author of redux, Dan Abramov, has jumped ship on the project and now is in favor of local component state. The introduction of the useReducer hook last December is a good example of how the React team is encouraging local component state more and more. See React core team member tweet: https://twitter.com/acdlite/status/1131455129276694528?s=21
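To illustrate that last point, here’s a minimal useReducer example (the component and action names are made up). All of this state lives inside one component, so there’s no implicit contract for other microfrontends to break:

import React, {useReducer} from 'react'

function reducer(state, action) {
  switch (action.type) {
    case 'toggleDetails':
      return {...state, showDetails: !state.showDetails}
    default:
      return state
  }
}

export function PatientBanner({patient}) {
  // Local component state: nothing outside PatientBanner can read or write it
  const [state, dispatch] = useReducer(reducer, {showDetails: false})
  return (
    <div>
      <button onClick={() => dispatch({type: 'toggleDetails'})}>
        {state.showDetails ? 'Hide details' : 'Show details'}
      </button>
      {state.showDetails && <p>{patient.name}</p>}
    </div>
  )
}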

And in what way would contracts between actions or microfrontends be created?

Example: the “logged in user” object is in redux state. One microfrontend adds a derived property to it called “isAdmin” that is a combination of other permission checks. Other microfrontends start reading that property. Then a microfrontend dispatches a new user object after having updated it in the db, and doesn’t put the isAdmin property on it. The other microfrontends start assuming no one is an admin because the isAdmin property isn’t there.
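Sketched out (the reducer, action type, and objects here are all made up for illustration):

import {createStore} from 'redux'

const store = createStore((state = {}, action) =>
  action.type === 'set-user' ? {...state, user: action.user} : state
)

const serverUser = {name: 'Pat', privileges: ['admin']}

// Microfrontend A enriches the user object before dispatching:
store.dispatch({type: 'set-user', user: {...serverUser, isAdmin: true}})

// Microfrontend B quietly depends on that derived property:
console.log(store.getState().user.isAdmin) // true

// Microfrontend C later updates the user and dispatches the raw server
// response, without isAdmin:
store.dispatch({type: 'set-user', user: serverUser})

// B now reads undefined and treats everyone as a non-admin, even though
// none of B's code changed:
console.log(store.getState().user.isAdmin) // undefined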

Another example: one microfrontend assumes the “patient” object is defined by the time it mounts. It works when you go to the patient dashboard first, which populates the redux state, but it breaks if you refresh the page, because the patient dashboard didn’t have a chance to populate the redux state.

Final one: a microfrontend dispatches an action “update-patient”, expecting it to change the “patient” object and also another property in the redux state called “isLoadingPatient”. The reducer lives in a different microfrontend or in the global client store. That reducer is modified in that other codebase such that isLoadingPatient is no longer updated. The first microfrontend now shows a loader when it shouldn’t, even though none of its code changed.


Great explanation, thanks Joel.

I have one nit to pick:

With a shared, complex global variable, you write code hoping that no one else messes up the state and that you don’t mess it up for anyone else.

Redux aside, I think we need to be clear about the difference between global application state and global data; it’s the latter that this is really about. It’s not all just “complex global variables.” Global application state is a bad thing, but global (singular, authoritative) data is a very good thing.

I was going to try to steelman the Redux argument a bit, but I think what one ends up with, after constraining Redux enough to mitigate most of your concerns, is just a really bloated and cumbersome cache.

How can we build a global data cache that isn’t so fraught?

How can we build a global data cache that isn’t so fraught?

It’s a cache for API state only, not for UI state. And you can’t modify the data, only read from it. It allows multiple microfrontends to write fetch/axios requests without actually incurring multiple API requests.

It’s not a state management tool, just a way to avoid duplicate network requests.

// As long as we're on the /patients/1 route, all microfrontends reuse the patient object
// Bust the cache once the frontend navigates away from the /patients/1 route
const routesToCache = [
  '/patients/1',
]

getWithCache('/api/patients/1', routesToCache)
  .then(data => {
    console.log(data.patient)
  })

This way works with React, Angular, Vue, or any other framework ^. Building a React-specific abstraction on top of it is an option for those using React. React Suspense is the perfect fit for interacting with the API state cache:

import {PatientResource} from './patient-resource'

function PatientDashboard(props) {
  // getPatient suspends (throws the in-flight promise) until the patient
  // has loaded, then returns the cached patient object
  const patient = PatientResource.getPatient(props.patientId)
  return <div>{patient.name}</div>
}
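A hedged sketch of what PatientResource could look like, built on the getWithCache helper above (the cache shape and names are illustrative). It follows the Suspense convention of throwing the in-flight promise until the data arrives, so PatientDashboard would need to render inside a <React.Suspense fallback={...}> boundary:

const patients = new Map()

export const PatientResource = {
  getPatient(patientId) {
    const entry = patients.get(patientId)
    if (entry && 'data' in entry) return entry.data // already loaded
    if (entry) throw entry.promise // still loading; Suspense waits and retries
    const promise = getWithCache(`/api/patients/${patientId}`).then(result => {
      patients.set(patientId, {data: result.patient})
    })
    patients.set(patientId, {promise})
    throw promise
  },
}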