Performance testing web pages

Performance of web apps is a big topic with a direct impact on user engagement. Users will simply not wait for, or use, slow apps (over 50% of users abandon a website that takes more than 3 seconds to load).

Performance is part of the user experience: what users experience, and what it costs them. Beyond the direct experience, large apps also eat into users' bandwidth plans and tax the CPU of their devices.

Performance (aka speed) is not just the time it takes to load an app. It includes loading feedback, the time until useful content appears and the user can interact with the page, and the ongoing experience of using the app. Google's web.dev identifies three aspects of this, which it calls "core web vitals": loading, interactivity and visual stability. Metrics such as First Paint, First Meaningful Paint and Time to Interactive can be used to measure them.

"core web vitals" are metrics for web sitees designed to bring focus to the user experience (not just page load) and have identified metrics related to these three key user events:
  • "loading"                 
  • "interactivity"         
    • Time to Interactive (TTI)
      • how long it takes a page to become fully interactive (page responds to user interactions within 50ms, main content loaded, handlers registered)
    • Total Blocking Time (TBT)
      • the total time between First Contentful Paint and Time to Interactive during which the main thread was blocked
      • "how much did I block the main thread after load?"
      • the main thread counts as blocked whenever a task runs for more than 50ms; Total Blocking Time is the sum of each long task's time beyond 50ms
      • should be 200ms or less
      • see web.dev for more, including remedies
  • "visual stability"    
    • Cumulative Layout Shift (CLS) 
      • measures page instability: unexpected movement of page controls
      • measured in units of viewport shifted; 1.0 means the entire viewport shifted
      • should be 0.1 or less
There are other metrics (e.g. Time to First Byte, % unused CSS, etc.), but these core web vitals are the ones Google considers the most important and most representative at this time.
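
You can also watch these metrics on a live page: Google's web-vitals npm library reports them, or you can read the raw entries with the standard PerformanceObserver API. A minimal sketch approximating CLS (the web-vitals library handles session windowing properly; this naive running sum is for illustration only):

    <script>
      // Sum layout-shift entries, skipping shifts caused by recent user input
      // (those are "expected" and excluded from CLS).
      let cls = 0;
      new PerformanceObserver((list) => {
        for (const entry of list.getEntries()) {
          if (!entry.hadRecentInput) cls += entry.value;
        }
        console.log('CLS so far:', cls.toFixed(4));
      }).observe({ type: 'layout-shift', buffered: true });
    </script>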

But there is more to app performance than just the loading experience. The actual in-app experience is also very important and needs to be tuned. Tuning it basically means reducing the time it takes users to complete actions (or user flows). This is broad because a flow may or may not involve API calls. So you need to understand your users, what their critical flows are, and how to optimize those flows.

Most apps involve CRUD operations, so we need to measure the speed and efficiency of those API calls and identify issues and mitigations (a sketch follows this list) such as
  • change blocking calls to non-blocking
  • combine APIs
  • change the order of requests
  • prefetching
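
For example, two independent reads issued one after the other cost the sum of their latencies, while issuing them together costs roughly the slower of the two. A minimal sketch of changing blocking to non-blocking (the /api endpoints are hypothetical):

    <script type="module">
      // Blocking: the second request doesn't start until the first completes.
      const user = await fetch('/api/user').then((r) => r.json());
      const orders = await fetch('/api/orders').then((r) => r.json());

      // Non-blocking: both requests are in flight at the same time.
      const [user2, orders2] = await Promise.all([
        fetch('/api/user').then((r) => r.json()),
        fetch('/api/orders').then((r) => r.json()),
      ]);
    </script>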

Google Search includes web vitals scores in its page ranking algorithm, rolled out for mobile in 2021 and for desktop in February 2022. All things being equal, pages with better scores will rank higher.


Your own assets

The size of your JavaScript is often a big determinant of your app's performance. Consider how you can make use of the PRPL pattern (a combined sketch follows this list):
  • P pre-load
    • a directive (not a hint) telling the browser to fetch something sooner than it otherwise would: <link rel="preload" href="..." as="...">
    • can be applied to js, css, fonts, even fetched json
    • can you identify the critical js and css needed for the initial load, separate them from the rest of your js and css, and preload them?
    • use sparingly (because browsers are already efficient) and measure outcomes
  • R render initial route asap
    • this improves perceived loading by showing the user feedback asap
  • P pre-cache
    • use of service workers to fetch assets from cache
  • L lazy load
    • split your app into what has to be loaded now and what can be loaded later when needed; webpack supports lazy loading React components and their code
    • you can & should also lazy load content for large lists (e.g. over 500 items)
    • lazy load images below the fold/viewport; libraries such as lazysizes help with this
    • Chromium browsers now have built-in support for lazy loading images (partial support in Safari). Add the loading="lazy" attribute to image tags not in the first viewport (note: img width & height values are recommended). It is important to only add lazy for images outside the first viewport.
      • the browser completely defers loading offscreen images that can be reached by scrolling, until they come within a calculated distance of the viewport
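
Here is a minimal page sketching the preload and lazy load pieces together (all file and element names are hypothetical):

    <head>
      <!-- Preload: fetch the critical assets for the initial route sooner. -->
      <link rel="preload" href="/css/critical.css" as="style">
      <link rel="preload" href="/js/app.js" as="script">
    </head>
    <body>
      <!-- Above the fold: load eagerly, with dimensions to avoid layout shift. -->
      <img src="/img/hero.jpg" width="1200" height="600" alt="Hero">

      <!-- Below the fold: let the browser lazy load it. -->
      <img src="/img/promo.jpg" width="600" height="300" alt="Promo" loading="lazy">

      <button id="report-btn">Show report</button>
      <script type="module">
        // Lazy load code: fetch a rarely used module only when it's needed.
        document.querySelector('#report-btn').addEventListener('click', async () => {
          const { renderReport } = await import('/js/report.js');
          renderReport();
        });
      </script>
    </body>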

Remove unused code

In addition to optimizing with PRPL, getting rid of code is a great idea (and it feels great too). Lighthouse provides tooling to better understand how much of your code is unused (warning: the results can be scary).
The Webpack Bundle Analyzer can show what libraries are in your app bundle and which modules take up the most space.
Can you remove a library? Do you really need both MobX and Redux? Or either? Can you replace Font Awesome with something smaller, or use Font Awesome kits?
Can you downsize a library you use? Instead of the default builds of 3rd party libs, you can create custom builds and then check the savings.
How you import libraries into your code can also have a significant impact on bundle size, as the sketch below shows.
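
For example, with lodash, a whole-library import drags everything into the bundle, while a per-module import keeps only what you use (a sketch; the save callback is a stand-in):

    const save = () => { /* persist something */ };

    // Pulls the entire lodash library into your bundle:
    import _ from 'lodash';
    const debouncedSave = _.debounce(save, 300);

    // Pulls in only the debounce module:
    import debounce from 'lodash/debounce';
    const debouncedSave2 = debounce(save, 300);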

Optimize use of third party scripts

The size of your 3rd party JavaScript can also have a big impact on your app's performance. You have control over your own code, but what about 3rd party scripts you don't control?

Many web apps carry lots of 3rd party scripts: trackers, analytics, payment gateways, alerting, social, a/b testing, etc. Sum them all up and they can be a sizable hit: download weight, time to establish non-origin connections, scripts which trigger more requests, insufficient caching and compression, and so on.

Some good guidelines (a budget-check sketch follows this list):

  • I think one of the most important things is to establish a 3rd party scripts performance budget. It could include goals for: total size, total requests, total impact on load time, cookie size & usage, etc. That way, if someone wants to add another 3rd party script, you can measure the cumulative impact. We did this at OpenTable: we identified 10 3rd party scripts being loaded which summed to almost 1MB!
    • key: it is important to explain the impact in terms of the user experience
  • prefer 3rd party scripts which send the least amount of code and make the fewest http requests
  • audit and remove unused scripts (don't just keep adding)
    • watch out too for zombie cookies: cookies created by scripts you no longer use but which still exist on your users' machines (I've seen it with Optimizely cookies)
  • try to keep 3rd party scripts off the critical rendering path
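
One way to keep the budget honest is the browser's standard Resource Timing API. A sketch that totals cross-origin transfer sizes against a hypothetical 300 KB budget (note: transferSize reports 0 for cross-origin responses that don't send a Timing-Allow-Origin header):

    <script>
      const BUDGET_BYTES = 300 * 1024; // hypothetical 3rd party budget
      const thirdParty = performance
        .getEntriesByType('resource')
        .filter((e) => new URL(e.name).origin !== location.origin);
      const totalBytes = thirdParty.reduce((sum, e) => sum + e.transferSize, 0);
      console.log(
        `3rd party: ${thirdParty.length} requests, ${(totalBytes / 1024).toFixed(1)} KB`,
        totalBytes > BUDGET_BYTES ? 'OVER budget' : 'within budget'
      );
    </script>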

You can either remove or optimize 3rd party scripts. If you can remove one, do it. Otherwise try these optimizations (a sketch of the first three follows the list):

  1. use async or defer on the <script> tag
    • a way to stop the script from blocking DOM construction and page rendering
      • tells the browser to keep parsing the HTML while fetching the script in the background, then execute the script once it has loaded
    • async - the script executes as soon as it has downloaded, before the window load event, so it can still interrupt DOM building
      • execution order is effectively random (whichever script downloads first runs first)
    • defer - the script executes only after HTML parsing has completely finished
      • scripts execute in the order they appear in the html
  2. use preconnect or dns-prefetch to preconnect to required origins
    • use preconnect for the most important connections; dns-prefetch is a cheaper fallback for less critical ones
    • e.g. <link rel="preconnect" href="https://cdn.mydependency.com">
  3.  lazy load third party resources
    • if not needed initially then why pay the upfront cost; load only when needed
  4. consider serving 3rd party scripts yourself instead of from their cdn
    • there is a cost and overhead to connecting to a different origin
      • the benefit of their cdn is that the script could already be cached in the browser, especially if it is popular (but then again, you don't control their cache rules)
    • instead you could self host, especially if the script is small
    • with self hosting you also have the option to create and fine tune your own bundle
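
A sketch of options 1-3 together (the third party URLs and the chat button are made up):

    <head>
      <!-- 2. Warm up DNS + TCP + TLS to an origin you know you'll need. -->
      <link rel="preconnect" href="https://cdn.mydependency.com">

      <!-- 1. defer: download in parallel, execute in document order after parsing. -->
      <script defer src="https://cdn.mydependency.com/widget.js"></script>

      <!-- 1. async: download in parallel, execute as soon as it arrives. -->
      <script async src="https://analytics.example.com/tracker.js"></script>
    </head>
    <body>
      <button id="chat-btn">Chat with us</button>
      <script>
        // 3. Lazy load: inject the chat widget only when the user asks for it.
        document.querySelector('#chat-btn').addEventListener('click', () => {
          const s = document.createElement('script');
          s.src = 'https://chat.example.com/embed.js';
          document.head.appendChild(s);
        });
      </script>
    </body>
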
When evaluating 3rd party scripts you need to understand: the size of the scripts downloaded (not just the initial one), the number of scripts and http requests they trigger, the cookies created and the sizes they grow to, the data captured and stored, GDPR deletion support, localization, monitoring/alerting, and the impact on window globals.

In Practice

Performance tuning an app involves methodically taking measurements, analyzing the data, identifying and trying improvements, and measuring outcomes. It's detail-oriented work: it can take time, and some changes may not provide the improvement you hoped for. But persist.

Here's a starter template to help with that; it includes core web vitals columns for your baseline and for outcomes after targeted improvements.


Often the biggest impact comes from the size of the assets downloaded and the time it takes to download and parse them (especially JavaScript).
When the browser is parsing HTML and encounters a script tag, it stops and blocks document parsing, loads the JavaScript, executes it, and only then resumes parsing. So a plain script tag is blocking: nothing else is happening in the DOM while script tags are processed.

The process involves trading off the cost of improvements against the value they provide (some improvements will require code changes). You usually start by looking for the biggest improvements at the least cost.

Chrome DevTools can block requests to specific URLs. This is a quick'n'dirty way to test the impact of not downloading and processing a given resource.



Tooling 

There are many possible changes which can improve an app's performance. The first step is taking a baseline to understand how your app performs now. Then you can look at targeted improvements.

Thankfully there is a lot of tooling available to help. Google's DevTools includes the Lighthouse, Performance and Network panels to audit apps, make recommendations and measure improvements.

Devtools Lighthouse

  • audits and scores your app against "core web vitals" metrics
  • provides recommendations for improvements
  • targets the bigger picture


Devtools Performance

  • record a flow and then drill down all the way to individual function level
  • includes fps measurements


Webpack bundle analyzer

Webpack bundle analyzer is a very cool tool which analyzes your webpack bundles and shows a breakdown of what's in them and their relative sizes. You can drill in, and it shows gzipped sizes as well. You definitely want an analyzer like this when tuning your app.

"canvg" why is that in my bundle? I don't use it. 
You may not directly bring in a package in package.json, but it may be a dependency on a package you do use. To understand dependencies you can look at your lock file to understand what brings in what packages. Then you a clearer idea of the true cost.

  • Add it to devDependencies: yarn add -D webpack-bundle-analyzer
  • Add helper scripts to package.json scripts like so
    • "webpack:view": "webpack-bundle-analyzer dist/stats.json",
    • "webpack:analyze": "webpack --config webpack.prod.js --json > dist/stats.json && yarn run webpack:view",
  • then run yarn webpack:analyze
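
Alternatively, the analyzer also ships as a webpack plugin you can wire into your production config; a sketch assuming the webpack.prod.js from the scripts above:

    // webpack.prod.js
    const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

    module.exports = {
      // ...your existing production config...
      plugins: [
        // 'static' writes a self-contained report.html instead of starting a server.
        new BundleAnalyzerPlugin({ analyzerMode: 'static' }),
      ],
    };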

bundlephobia

bundlephobia (love the name) gives information on packages you use, including raw and zipped sizes and download times.
If you scroll down, it also (usually) suggests more efficient alternatives to a package you use, e.g. date-fns, day.js or luxon instead of moment.js.


Performance budgets

Getting faster once is doable; staying fast over time is more challenging. Things change, more is added, and the app gets slower.
A recommendation is to establish "performance budgets" for metrics which are important to you, e.g. how quickly users see your site appearing, how long it takes to become usable, how large your JavaScript bundle is. You can configure CI to constantly measure and report against your budgets, as in the sketch below.
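
One low-cost way to enforce a bundle-size budget in CI is webpack's built-in performance hints, which fail the build when assets grow past a threshold (the numbers below are placeholders; pick ones that match your budget). Lighthouse CI can do the same for core web vitals scores.

    // webpack.prod.js
    module.exports = {
      // ...
      performance: {
        hints: 'error',            // fail the build instead of just warning
        maxEntrypointSize: 250000, // byte budget for assets needed on initial load
        maxAssetSize: 250000,      // byte budget for any single emitted asset
      },
    };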

Terms

  • critical rendering path (CRP) includes all resources that the browser needs to display the first screen's worth of content. "The document object model is created as the HTML is parsed. The HTML may request JavaScript, which may, in turn, alter the DOM. The HTML includes or makes requests for styles, which in turn builds the CSS object model. The browser engine combines the two to create the Render Tree. Layout determines the size and location of everything on the page. Once layout is determined, pixels are painted to the screen." from mdn
  • frames per second: target 60 fps; the slower it gets, the more the screen seems jumpy rather than smooth
  • FE optimization is mostly about optimizing the CRP, i.e. getting as few resources as possible, as quickly as possible (lazy load, defer, tree shake, optimize, cdn, caching, reduce size, reduce # of requests, etc.)

References

  • JavaScript loading priorities in Chrome: https://medium.com/dev-channel/javascript-loading-priorities-in-chrome-57c54cfa6672
  • js script defer, async etc
  • Budget calculator
  • Interesting article measuring performance
  • Lighthouse on twitter
  • General Paul Irish
  • Fast load times: https://web.dev/fast/
  • critical rendering path

