Client-side rendering performance

(Brandon Istenes) #1

I raised a concern about client-side app performance in the discussion about Single-SPA on Monday. I appreciate that @joeldenning addressed it; however, I think it warrants further discussion.

For a tablet deployment, which this is in part intended to facilitate, I think it’s reasonable to expect that the server’s performance will greatly exceed the client’s. Even when the clients are computers, it’s not unlikely that the server machine will outperform the client machines, which, in at least one case that I’m very familiar with, are mostly old donated laptops.

Going with the industry flow toward client rendering does seem like the right choice, for the myriad reasons we’re familiar with by now. But I think we need to talk about what we can do to prevent performance from becoming a bigger problem.

There are a few dimensions to this question:

  • CPU/RAM: This will probably mostly depend on choice of frameworks, right? Should developers be thinking about some of the lesser-known-but-faster frameworks like Preact, Elm, and Svelte?
  • AJAX/Fetch: I’ve been viewing the shift toward a new UI as an opportunity to do smarter data management and reduce the number of independent AJAX requests, which presently is in the zillions. @joeldenning is against having a consolidated client-side data store, arguing that this would constrain apps too much and be fragile. How might we handle AJAX more efficiently than at present? Can a client-side / single-spa approach facilitate that? Are there deeper changes, such as switching to GraphQL, that might yield bigger improvements?
  • Initial page load: Probably not really a problem. Providers will load the thing once and then work for a while. We’re not worried about our bounce rate :slight_smile:
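On the “deeper changes” point in the AJAX bullet: the main thing GraphQL would buy us is collapsing several round trips into one. As a purely illustrative sketch against a schema we do not have (the field names here are invented, not an existing OpenMRS API), one query could replace three separate REST calls:

```graphql
# Hypothetical: one request instead of GET /patient/:id,
# GET /patient/:id/encounters, and GET /patient/:id/allergies.
query PatientSummary($id: ID!) {
  patient(id: $id) {
    name
    encounters(last: 10) {
      date
      type
    }
    allergies {
      substance
      severity
    }
  }
}
```

The client asks for exactly the fields it renders, so the number of requests and the payload size both shrink, at the cost of standing up and maintaining a GraphQL layer in front of the existing REST services.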

Looking forward to hearing what people think!

(Daniel Kayiwa) #2

My ideal is an architecture that takes care of both server side and client side rendering. Not just for the sake of providing an upgrade path for legacy applications, but also considering the fact that some developers may have circumstances where server side rendering is the desired approach, even for new functionality or applications.

(Joel Denning) #3

For a tablet deployment, which this is in part intended to facilitate, I think it’s reasonable to expect that the server’s performance will greatly exceed the client’s. Even when the clients are computers, it’s not unlikely that the server machine will outperform the client machines, which, in at least one case that I’m very familiar with, are mostly old donated laptops.

I think the first question to answer here is which kinds of devices and resolutions we support. This is a great topic to discuss - worthy of creating an RFC proposal for. I don’t know much about OpenMRS’ current approach to mobile/tablet. Here are some open questions that I think an RFC should discuss:

  • Which combination of phone, tablet, and desktop do we want to support? What is the minimum resolution we support?
  • Does feature parity for desktop and mobile make sense? Some features might be better suited for only one or the other.
  • A single codebase that’s fully responsive, or separate codebases for mobile and desktop?
  • Have there been any talks of a native phone/tablet app? If so, that might change the answers to the other questions.

Client rendering performance

The answers to the above questions would inform our decisions about what we should shoot for with our network perf, cpu, and memory usage.

Generally speaking, phones and tablets are 100% capable of doing client rendering very well and efficiently. They have been doing so ever since smartphones and tablets have existed. Unless we’re supporting very very old phones, client rendering by itself will work just fine. That said, we will want to make sure we don’t let browser memory and network usage get out of hand. One thing I’m proposing that will help with this is the creation and use of common dependencies, which prevents large shared libraries from having to be downloaded and executed more than once.
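One concrete way to implement the common-dependencies idea is a browser import map that points every microfrontend at a single copy of each large shared library. This is only a sketch; the paths and versions below are placeholders, not an actual OpenMRS configuration:

```json
{
  "imports": {
    "react": "/shared/react@16.8/umd/react.production.min.js",
    "react-dom": "/shared/react-dom@16.8/umd/react-dom.production.min.js",
    "single-spa": "/shared/single-spa@4/lib/single-spa.min.js"
  }
}
```

Each microfrontend would then mark these packages as externals in its bundler config, so the browser downloads and parses React once, no matter how many microfrontends use it.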

I’ve been viewing the shift toward a new UI as an opportunity to do smarter data management and reduce the number of independent AJAX requests, which presently is in the zillions.

:+1: Agreed

@joeldenning is against having a consolidated client-side data store, arguing that this would constrain apps too much and be fragile.

I’m against having a shared redux store, because of the implicit contracts that arise between the microfrontends, the actions, and the shape of the data in the store. But I am in favor of preventing duplicate network requests via other strategies, such as a global, in-memory cache of highly used data. Another option is to have a core frontend module “own” the data. Or even using global variables. Preventing lots of duplicate network requests is something I’m committed to.
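To make the “global, in-memory cache of highly used data” option concrete, here is a minimal sketch that assumes nothing about OpenMRS or single-spa APIs (`SharedCache`, `fetchSession`, and the cache keys are all invented for illustration). The key trick is caching the in-flight promise itself, so concurrent requests for the same resource are deduplicated, not just completed ones:

```typescript
// Sketch: a global, in-memory cache that dedupes requests for the same
// resource across microfrontends. All names here are illustrative.

type Fetcher<T> = () => Promise<T>;

class SharedCache {
  private entries = new Map<string, Promise<unknown>>();

  // Return the cached promise if one exists; otherwise start the fetch
  // and cache the in-flight promise so concurrent callers share it.
  get<T>(key: string, fetcher: Fetcher<T>): Promise<T> {
    if (!this.entries.has(key)) {
      const p = fetcher().catch((err) => {
        // Drop failed entries so a later call can retry.
        this.entries.delete(key);
        throw err;
      });
      this.entries.set(key, p);
    }
    return this.entries.get(key) as Promise<T>;
  }

  // Explicit invalidation when a microfrontend knows the data changed.
  invalidate(key: string): void {
    this.entries.delete(key);
  }
}

// Usage: two microfrontends asking for the same session info trigger
// only one underlying request.
const cache = new SharedCache();
let calls = 0;
const fetchSession = () => {
  calls++;
  return Promise.resolve({ user: "admin" });
};

Promise.all([
  cache.get("session", fetchSession),
  cache.get("session", fetchSession),
]).then(([a, b]) => {
  console.log(calls); // 1 — the second call reused the in-flight promise
  console.log(a.user === b.user); // true
});
```

Because the cache holds promises rather than values, it avoids the shared-redux-store problem of implicit contracts around actions and data shape: each microfrontend still owns how it interprets the response, and the cache only guarantees that the network request happens once.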

CPU/RAM: This will probably mostly depend on choice of frameworks, right? Should developers be thinking about some of the lesser-known-but-faster frameworks like Preact, Elm, and Svelte?

When talking about CPU/RAM, there are 1) the initial download of the JavaScript, 2) the parse and initial execution of the JS, 3) all the ongoing memory allocated as the JavaScript continues to do things (such as React elements being created), and 4) the JavaScript statements executed as the framework does its thing. I have used Preact, Svelte, and Elm in the past. My understanding is that Preact and Svelte are better than React/Vue/Angular at 1) and 2), but not necessarily at 3) and 4). As for Elm, I haven’t checked the byte size of its compiled runtime library, but I wasn’t under the impression that it was significantly smaller than React/Vue/Angular for 1) or 2), nor better at 3) and 4). Elm’s typechecking and compiler are cool, but in the browser it ultimately uses a virtual DOM fairly similar in concept to what React does. I haven’t understood Elm to be a performance revolution, but a developer experience revolution.

Summary of this ^^ is that the major frameworks (React, Vue, Angular) are all used on many existing mobile sites and do fine (or even great). The first step for us at OpenMRS is to figure out what types of devices and resolutions we want to support. I doubt we’ll run into a situation where the extra gzipped 100kb to download React (instead of Preact) would be so prohibitive that we couldn’t use React.

(Brandon Istenes) #4

I’m against having a shared redux store, because of the implicit contracts that arise between the microfrontends, the actions, and the shape of the data in the store. But I am in favor of preventing duplicate network requests via other strategies, such as a global, in-memory cache of highly used data. Another option is to have a core frontend module “own” the data. Or even using global variables. Preventing lots of duplicate network requests is something I’m committed to.

Interesting! Why would a shared data store be more restrictive than the REST API? Don’t they both present data in a certain shape, which the recipient is responsible for creating views into as necessary? And in what way would contracts between actions or microfrontends be created? Isn’t the point of an authoritative client-side data source that, no matter who does what to the data, everyone always agrees what the data is?

Thanks for all your thoughtful feedback. I’m on board on all other counts.

With respect to the “what should we support” question, just to say what I’ve already said on Slack over here as well: mobile first is a great design paradigm and a great developer experience. One codebase, one feature set. Orgs can fine-tune the grid widths and CSS corresponding to whatever screen widths they care about. I defer to designers and UX people about what those designs should be like, but I think this is the path that will be most painless, technology-wise.