
Bugs I’ve filed on browsers

I think filing bugs on browsers is one of the most useful things a web developer can do.

When faced with a cross-browser compatibility problem, a lot of us are conditioned to just search for some quick workaround, or to keep cycling through alternatives until something works. And this is definitely what I did earlier in my career. But I think it’s too short-sighted.

Browser dev teams are just like web dev teams – they have priorities and backlogs, and they sometimes let bugs slip through. Also, a well-written bug report with clear steps-to-repro can often lead to a quick resolution – especially if you manage to nerd-snipe some bored or curious engineer.

As such, I’ve filed a lot of bugs on browsers over the years. For whatever reason – stubbornness, frustration, some highfalutin sense of serving the web at large – I’ve made a habit of nagging browser vendors about whatever roadblock I’m hitting that day. And they often fix it!

So I thought it might be interesting to do an analysis of the bugs I’ve filed on the major browser engines – Chromium, Firefox, and WebKit – over my roughly 10-year web development career. I’ve excluded older and lesser-known browser engines that I never filed bugs on, and I’ve also excluded Trident/EdgeHTML, since the original “Microsoft Connect” bug tracker seems to be offline. (Also, I was literally paid to file bugs on EdgeHTML for a good 2 years, so it’s kind of unfair.)

Some notes about this data set, before people start drawing conclusions:

  • Chromium is a bit over-represented, because I tend to use Chromedriver-related tools (e.g. Puppeteer) a lot more than other browser automation tools.
  • WebKit is kind of odd in that a lot of these bugs turned out to be in proprietary Apple systems (Safari, iOS, etc.) rather than WebKit proper. (At least, this is what I assume the enigmatic rdar:// response means. [1])
  • I excluded one bug from Firefox that was actually on MDN (which uses the same bug tracker).

The data

So without further ado, here is the data set:

Browser    Filed   Open   Fixed   Invalid   Fixed%
Chromium   27      4      14      9         77.78%
Firefox    18      3      8       7         72.73%
WebKit     25      6      12      7         66.67%
Total      70      13     34      23        72.34%

Notes: For “Invalid,” I’m being generous and including “duplicate,” “obsolete,” “wontfix,” etc. For “Fixed%,” I’m counting only the number fixed as a proportion of valid bugs.
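For example, Chromium works out to 14 fixed / (27 filed − 9 invalid) = 14/18 ≈ 77.78%.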

Some things that jump out at me from the data set:

  • The “Fixed%” is pretty similar for all three browsers, although WebKit’s is a bit lower. When I look at the unfixed WebKit bugs, 2 are related to iOS rather than WebKit, one is in WebSQL (RIP), and the remaining 3 are honestly pretty minor. So I can’t really blame the WebKit team. (And one of those minor issues wound up in Interop 2024, so it may get fixed soon.)
  • For the 3 open Firefox issues, 2 of them are quite old and have terrible steps-to-repro (mea culpa), and the remaining one is a minor CSS issue related to shadow DOM.
  • For the 4 open Chromium issues, one of them is actually obsolete (I pinged the thread), 2 are quite minor, and the remaining one is partially fixed (it works when BFCache is enabled).
  • I was surprised that the total number of bugs filed on Firefox wasn’t even lower. My hazy memory was that I had barely filed any bugs on Firefox, and when I did, it usually turned out that they were following the spec but the other browsers weren’t. (I learned to really triple-check my work before filing bugs on them!)
  • 6 of the bugs I filed on WebKit were for IndexedDB, which definitely matches my memory of hounding them with bug reports for IDB. (In comparison, I filed 3 IDB bugs on Chromium and 0 on Firefox.)
  • As expected, 5 issues I filed on Chromium were due to ChromeDriver, DevTools, etc.

If you’d like to peruse the raw data, it can be found below:

Chromium data
Status ID Title Date
Fixed 41495645 Chromium leaks Document/Window/etc. when navigating in multi-page site using CDP-based heapsnapshots Feb 22, 2024 01:02AM
New 40890306 Reflected ARIA properties should not treat setting undefined the same as null Mar 2, 2024 05:50PM
New 40885158 Setting outerHTML on child of DocumentFragment throws error Jan 8, 2024 10:52PM
Fixed 40872282 aria-label on a should be ignored for accessible name calculation Jan 8, 2024 11:24PM
Duplicate 40229331 Style calculation takes much longer for multiple s vs one big Jun 20, 2023 09:14AM
Fixed 40846966 customElements.whenDefined() resolves with undefined instead of constructor Jan 8, 2024 10:13PM
Fixed 40827056 ariaInvalid property not reflected to/from aria-invalid attribute Jan 8, 2024 08:33PM
Obsolete 40767620 Heap snapshot includes objects referenced by DevTools console Jan 8, 2024 11:53PM
New 40766136 Restoring selection ranges causes text input to ignore keypresses Jan 8, 2024 07:20PM
Fixed 40759641 Poor style calculation performance for attribute selectors compared to class selectors May 5, 2021 01:40PM
Obsolete 40149430 performance.measureMemory() disallowed in headless mode Feb 27, 2023 12:26AM
Obsolete 40704787 Add option to disable WeakMap keys and circular references in Retainers graph Jan 8, 2024 05:52PM
Fixed 40693859 Chrome crashes due to WASM file when DevTools are recording trace Jun 24, 2020 07:20AM
Fixed 40677812 npm install chrome-devtools-frontend fails due to preinstall script Jan 8, 2024 05:17PM
New 40656738 Navigating back does not restore focus to clicked element Jan 8, 2024 03:26PM
Obsolete 41477958 Compositor animations recalc style on main thread every frame with empty requestAnimationFrame Aug 29, 2019 04:06AM
Duplicate 41476815 OffscreenCanvas convertToBlob() is >300ms slower than Feb 18, 2020 11:51PM
Fixed 41475186 Insertion and removal of overflow:scroll element causes large style calculation regression Aug 26, 2019 01:34AM
Obsolete 41354172 IntersectionObserver uses root’s padding box rather than border box Nov 12, 2018 09:43AM
Obsolete 41329253 word-wrap:break-word with odd Unicode characters causes long layout Jul 11, 2017 01:50PM
Fixed 41327511 IntersectionObserver boundingClientRect has inaccurate width/height Jun 1, 2019 08:28PM
Fixed 41267419 Chrome 52 sends a CORS preflight request with an empty Access-Control-Request-Headers when all author headers are CORS-safelisted Mar 18, 2017 02:27PM
Fixed 41204713 IndexedDB blocks DOM rendering Jan 24, 2018 04:03PM
Fixed 41189720 chrome://inspect/#devices flashing “Pending authorization” for Android device Oct 5, 2015 03:59AM
Obsolete 41154786 Chrome for iOS: openDatabase causes DOM Exception 11 or 18 Feb 9, 2015 08:30AM
Fixed 41151574 Empty IndexedDB blob causes 404 when fetched with ajax Mar 16, 2015 11:37AM
Fixed 40400696 Blob stored in IndexedDB causes null result from FileReader Feb 9, 2015 10:34AM
Firefox data
ID Summary Resolution Updated
1704551 Poor style calculation performance for attribute selectors compared to class selectors FIXED 2021-09-02
1861201 Support ariaBrailleLabel and ariaBrailleRoleDescription reflection FIXED 2024-02-20
1762999 Intervening divs with ids reports incorrect listbox options count to NVDA FIXED 2023-10-10
1739154 delegatesFocus changes focused inner element when host is focused FIXED 2022-02-08
1707116 Replacing shadow DOM style results in inconsistent computed style FIXED 2021-05-10
1853209 ARIA reflection should treat setting null/undefined as removing the attribute FIXED 2023-10-20
1208840 IndexedDB blocks DOM rendering 2022-10-11
1531511 Service Worker fetch requests during ‘install’ phase block fetch requests from main thread 2022-10-11
1739682 Bare ::part(foo) CSS selector selects parts inside shadow roots 2024-02-20
1331135 Performance User Timing entry buffer restricted to 150 DUPLICATE 2019-03-13
1699154 :focus-visible – JS-based focus() on back nav treated as keyboard input FIXED 2021-03-19
1449770 position:sticky inside of position:fixed doesn’t async-scroll in Firefox for Android (and asserts in ActiveScrolledRoot::PickDescendant() in debug build) WORKSFORME 2023-02-23
1287221 WebDriver:Navigate results in slower performance.timing metrics DUPLICATE 2023-02-09
1536717 document.scrollingElement.scrollTop is incorrect DUPLICATE 2022-01-10
1253387 Safari does not support IndexedDB in a worker FIXED 2016-03-17
1062368 Ajax requests for blob URLs return 0 as .status even if the load succeeds DUPLICATE 2014-09-04
1081668 Blob URL returns xhr.status of 0 DUPLICATE 2015-02-25
1711057 :focus-visible does not match for programmatic keyboard focus after mouse click FIXED 2021-06-09
1471297 fetch() and importScripts() do not share HTTP cache WORKSFORME 2021-03-17
WebKit data
ID Resolution Summary Changed
225723 Restoring selection ranges causes text input to ignore keypresses 2023-02-26
241704 Preparser does not download stylesheets before running inline scripts 2022-06-23
263663 Support ariaBrailleLabel and ariaBrailleRoleDescription reflection 2023-11-01
260716 FIXED adoptedStyleSheets (ObservableArray) has non-writable length 2023-09-03
232261 FIXED :host::part(foo) selector does not select elements inside shadow roots 2021-11-04
249420 DUPLICATE :host(.foo, .bar) should be an invalid selector 2023-08-07
249737 FIXED Setting outerHTML on child of DocumentFragment throws error 2023-07-15
251383 INVALID Reflected ARIA properties should not treat setting undefined the same as null 2023-10-25
137637 Null character causes early string termination in Web SQL 2015-04-25
202655 iOS Safari: timestamps can be identical for consecutive rAF callbacks 2019-10-10
249943 Emoji character is horizontally misaligned when using COLR font 2023-01-04
136888 FIXED IndexedDB onupgradeneeded event has incorrect value for oldVersion 2019-07-04
137034 FIXED Completely remove all IDB properties/constructors when it is disabled at runtime 2015-06-08
149953 FIXED Modern IDB: WebWorker support 2016-05-11
151614 FIXED location.origin is undefined in a web worker 2015-11-30
156048 FIXED We sometimes fail to remove outdated entry from the disk cache after revalidation and when the resource is no longer cacheable 2016-04-05
137647 FIXED Fetching Blob URLs with XHR gives null content-type and content-length 2017-06-07
137756 INVALID WKWebView: JavaScript fails to load, apparently due to decoding error 2014-10-20
137760 DUPLICATE WKWebView: openDatabase results in DOM Exception 18 2016-04-27
144875 INVALID WKWebView does not persist IndexedDB data after app close 2015-05-28
149107 FIXED IndexedDB does not throw ConstraintErrors for unique keys 2016-03-21
149205 FIXED IndexedDB openKeyCursor() returns primaryKeys in wrong order 2016-03-30
149585 DUPLICATE Heavy LocalStorage use can cause page to freeze 2016-12-14
156125 INVALID Fetching blob URLs with query parameters results in 404 2022-05-31
169851 FIXED Safari sends empty “Access-Control-Request-Headers” in preflight request 2017-03-22

Conclusion

I think cross-browser compatibility has improved a lot over the past few years. We have projects like Interop and Web Platform Tests, which make it a lot easier for browser teams to figure out what’s broken and what they should prioritize.

So if you haven’t yet, there’s no better time to get started filing bugs on browsers! I’d recommend first searching for your issue in the right bug tracker (Chromium, Firefox, WebKit), then creating a minimal repro (CodePen, JSBin, plain HTML, etc.), and finally just including as much detail as you can (browser version, OS version, screenshots, etc.). I’d also recommend reading “How to file a good browser bug”.

Happy bug hunting!

Footnotes

1. Some folks have pointed out to me that rdar:// links can mean just about anything. I always assumed it meant that the bug got re-routed to some internal team, but I guess not.

Web component gotcha: constructor vs connectedCallback

A common mistake I see in web components is this:

class MyComponent extends HTMLElement {
  constructor() {
    super()
    setupLogic()
  }
  disconnectedCallback() {
    teardownLogic()
  }
}

This setupLogic() can be just about anything – subscribing to a store, setting up event listeners, etc. The teardownLogic() is designed to undo those things – unsubscribe from a store, remove event listeners, etc.

The problem is that the constructor is called only once, when the component is created, whereas disconnectedCallback can be called multiple times – whenever the element is removed from the DOM.

The correct solution is to use connectedCallback instead of constructor:

class MyComponent extends HTMLElement {
  connectedCallback() {
    setupLogic()
  }
  disconnectedCallback() {
    teardownLogic()
  }
}

Unfortunately it’s really easy to mess this up and to not realize that you’ve done anything wrong. A lot of the time, a component is created once, inserted once, and removed once. So the difference between constructor and connectedCallback never reveals itself.

However, as soon as your consumer starts doing something complicated with your component, the problem rears its ugly head:

const component = new MyComponent()  // constructor

document.body.appendChild(component) // connectedCallback
document.body.removeChild(component) // disconnectedCallback

document.body.appendChild(component) // connectedCallback again!
document.body.removeChild(component) // disconnectedCallback again!

This can be really subtle. A JavaScript framework’s diffing algorithm might remove an element from a list and insert it into a different position in the list. If so: congratulations! You’ve been disconnected and reconnected.

Or you might call appendChild() on an element that’s already appended somewhere else. Technically, the DOM considers this a disconnect and a reconnect:

// Calls connectedCallback
containerOne.appendChild(component)

// Calls disconnectedCallback and connectedCallback
containerTwo.appendChild(component)

The bottom line is: if you’re doing something in disconnectedCallback, you should do the mirror logic in connectedCallback. If not, then it’s a subtle bug just lying in wait for the right moment to strike.
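For example, here’s a minimal sketch of what that mirroring might look like for an event listener (a store subscription would follow the same shape – the handler name here is just illustrative):

class MyComponent extends HTMLElement {
  connectedCallback() {
    // (re)create the subscription every time we're attached to the DOM
    this._onClick = () => console.log('clicked')
    this.addEventListener('click', this._onClick)
  }
  disconnectedCallback() {
    // mirror image: undo exactly what connectedCallback did
    this.removeEventListener('click', this._onClick)
  }
}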

Note: See also “You’re (probably) using connectedCallback wrong” by Hawk Ticehurst, which offers similar advice.

Shadow DOM and the problem of encapsulation

Web components are kind of having a moment right now. And as part of that, shadow DOM is having a bit of a moment too. Or it would, except that much of the conversation seems to be about why you shouldn’t use shadow DOM.

For example, “HTML web components” are based on the idea that you should use most of the goodness of web components (custom elements, lifecycle hooks, etc.), while dropping shadow DOM like a bad habit. (Another name for this is “light DOM components.”)

This is a perfectly fine pattern for certain cases. But I also think some folks are confused about the tradeoffs with shadow DOM, because they don’t understand what shadow DOM is supposed to accomplish in the first place. In this post, I’d like to clear up some of the misconceptions by explaining what shadow DOM is supposed to do, while also weighing its success in actually achieving it.

What the heck is shadow DOM for

The main goal of shadow DOM is encapsulation. Encapsulation is a tricky concept to explain, because the benefits are not immediately obvious.

Let’s say you have a third-party component that you’ve decided to include on your website or webapp. Maybe you found it on npm, and it solved some use case very nicely. Let’s say it’s something simple, like a dropdown component.

Blue button that says click and has a downward-pointing chevron icon

You know what, though? You really don’t like that caret character – you’d rather have a 👇 emoji. And you’d really prefer rounded corners. And the theme color should be red instead of blue. So you hack together some CSS:

.dropdown {
  background: red;
  border-radius: 8px;
}
.dropdown .caret::before {
  content: '👇';
}

Red button that says click and has a downward-pointing index finger emoji icon

Great! You get the styling you want. Ship it.

Except that 6 months later, the component has an update. And it’s to fix a security vulnerability! Your boss is pressuring you to update the component as fast as possible, since otherwise the website won’t pass a security audit anymore. So you go to update, and…

Everything’s broken.

It turns out that the component changed their internal class name from dropdown to picklist. And they don’t use CSS content for the caret anymore. And they added a wrapper <div>, so the border-radius needs to be applied to something else now. Suddenly you’re in for a world of hurt, just to get the component back to the way it used to look.

Global control is great until it isn’t

CSS gives you an amazing superpower, which is that you can target any element on the page as long as you can think of the right selector. It’s incredibly easy to do this in DevTools today – a lot of people are trained to right-click, “Inspect Element,” and rummage around for any class or attribute to start targeting the element. And this works great in the short term, but it affects the long-term maintainability of the code, especially for components you don’t own.

This isn’t just a problem with CSS – JavaScript has this same flaw due to the DOM. Using document.querySelector (or equivalent APIs), you can traverse anywhere you want in the DOM, find an element, and apply some custom behavior to it – e.g. adding an event listener or changing its internal structure. I could tell the same story above using JavaScript rather than CSS.
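For example, here’s roughly what the JavaScript version of the dropdown story might look like (assuming the same internal class names as before):

// reach into the third-party dropdown's internals from page code
const caret = document.querySelector('.dropdown .caret')
caret.textContent = '👇'
caret.addEventListener('click', () => console.log('caret clicked'))

// This works today – and silently breaks the day the component renames
// .dropdown to .picklist or restructures its internal DOM.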

This openness can cause headaches for component authors as well as component consumers. In a system where the onus is on the component author to ship new versions (e.g. a monorepo, a platform, or even just a large codebase), component authors can effectively get frozen in time, unable to ship any internal refactors for fear of breaking their downstream consumers.

Shadow DOM attempts to solve these problems by providing encapsulation. If the third-party dropdown component were using shadow DOM, then you wouldn’t be able to target arbitrary content inside of it (except with elaborate workarounds that I don’t want to get into).

Of course, by closing off access to global styling and DOM traversal, shadow DOM also greatly limits a component’s customizability. Consumers can’t just decide they want a background to be red, or a border to be rounded – the component author has to provide an explicit styling API, using tools like CSS custom properties or parts. E.g.:

snazzy-dropdown {
  --dropdown-bg: red;
}

snazzy-dropdown::part(caret)::before {
  content: '👇';
}
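For that to work, the component author has to expose those hooks from inside the shadow root. Here’s a rough sketch of what the snazzy-dropdown internals might look like – the --dropdown-bg property and the caret part are assumptions for illustration, not a real component’s API:

class SnazzyDropdown extends HTMLElement {
  constructor() {
    super()
    this.attachShadow({ mode: 'open' }).innerHTML = `
      <style>
        /* falls back to blue unless the consumer sets --dropdown-bg */
        button { background: var(--dropdown-bg, blue); }
      </style>
      <!-- part="caret" is what makes ::part(caret) targetable from outside -->
      <button>Click <span part="caret"></span></button>
    `
  }
}
customElements.define('snazzy-dropdown', SnazzyDropdown)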

By exposing an explicit styling API, the risk of breakage across component upgrades is heavily reduced. The component author is effectively declaring an API surface that they intend to support, which limits what they need to keep stable over time. (This API can still break, as with a major version bump, but that’s another story.)

Tradeoffs

When people complain about shadow DOM, they seem to mostly be complaining about style encapsulation. They want to reach in and add a rounded corner on some component, and roll the dice that the component doesn’t change in the future. Depending on what kind of website you’re building, this can be a perfectly acceptable tradeoff. For example:

  • A portfolio site
  • A news article with interactive charts
  • A marketing site for a Super Bowl campaign
  • A landing page that will be rewritten in 2 years anyway

In all of these cases, long-term maintenance is not really a big concern. The page either has a limited shelf life, or it’s just not important to keep its dependencies up to date. So if the dropdown component breaks in a year or two, nobody cares.

Of course, there is also the opposite world where long-term maintenance matters a lot:

  • An interactive productivity app
  • A design system
  • A platform with its own app store for UI components
  • An online multiplayer game

I could go on, but the point is: the second group cares a lot more about long-term maintainability than the first group. If you’ve spent your entire career working on the first group, then you may indeed find shadow DOM to be baffling. You can’t possibly understand why you should be prevented from globally styling whatever you want.

Conversely, if you’ve spent your entire career in the second group, then you may be equally baffled by people who want global access to everything. (“Are they trying to shoot themselves in the foot?”) This is why I think people are often talking past each other about this stuff.

But does it work

So now that we’ve established the problem shadow DOM is trying to solve, there’s the inevitable question: does it actually solve it?

This is an important question, because I think it’s the source of the other major tension with shadow DOM. Even people who understand the problem are not in agreement that shadow DOM actually solves it.

If you want to get a good sense of people’s frustrations with shadow DOM, there are two massive GitHub threads you can check out.

There are a lot of potential solutions being tossed around in those threads (including by me), but I’m not really convinced that any one of them is the silver bullet that is going to solve people’s frustrations with shadow DOM. And the reason is that the core problem here is a coordination problem, not a technical problem.

For example, take “open-stylable shadow roots.” The idea is that a shadow root can inherit the styles from its parent context (exactly like light DOM). But then of course, we get into the coordination problem:

  • Will every web component on npm need to enable open-stylable shadow roots?
  • Or will page authors need a global mechanism to force every component into this mode?
  • What if a component author doesn’t want to be opted-in? What if they prefer the lower maintenance costs of a small API surface?

There’s no right answer here. And that’s because there’s an inherent conflict between the needs of the component author and the page author. The component author wants minimal maintenance costs and to avoid breaking their downstream consumers with every update, and the page author wants to style every component on the page to pixel-perfect precision, while also never being broken.

Stated that way, it sounds like an unsolvable problem. In practice, I think the problem gets solved by favoring one group over the other, which can make some sense depending on the context (largely based on whether your website is in group one or group two above).

A potential solution?

If there is one solution I find promising, it’s articulated by my colleague Caridy Patiño:

Build building blocks that encapsulate logic and UI elements that are “fully” customizable by using existing mechanisms (CSS properties, parts, slots, etc.). Everything must be customizable from outside the shadow.

If a building block is using another building block in its shadow, it must do it as part of the default content of a well-defined slot.

Essentially, what Caridy is saying is that instead of providing a dropdown component to be used like this:

<snazzy-dropdown></snazzy-dropdown>

… you instead provide one like this:

<snazzy-dropdown>
  <snazzy-trigger>
    <button>Click ▼</button>
  </snazzy-trigger>
  <snazzy-listbox>
    <snazzy-option>One</snazzy-option>
    <snazzy-option>Two</snazzy-option>
    <snazzy-option>Three</snazzy-option>
  </snazzy-listbox>
</snazzy-dropdown>

In other words, the component should expose its “guts” externally (using <slot>s in this example) so that everything is stylable. This way, anything the consumer may want to customize is fully exposed to light DOM.
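And per the second part of Caridy’s advice, the component’s own shadow root would use those inner building blocks only as the default content of a slot, so the consumer can swap any of them out. A rough sketch (again, the element names are illustrative):

class SnazzyDropdown extends HTMLElement {
  constructor() {
    super()
    this.attachShadow({ mode: 'open' }).innerHTML = `
      <slot>
        <!-- fallback "guts", used only when the consumer provides no children -->
        <snazzy-trigger><button>Click ▼</button></snazzy-trigger>
        <snazzy-listbox></snazzy-listbox>
      </slot>
    `
  }
}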

This is not a totally new idea. In fact, outside of the world of web components, plenty of component systems have run into similar problems and arrived at similar solutions. For example, so-called “headless” component systems (such as Radix UI, Headless UI, and Tanstack) have embraced this kind of design.

For comparison, here is an (abridged) example of the dropdown menu from the Radix docs:

<DropdownMenu.Root>
  <DropdownMenu.Trigger>
    <Button variant="soft">
      Options
      <CaretDownIcon />
    </Button>
  </DropdownMenu.Trigger>
  <DropdownMenu.Content>
    <DropdownMenu.Item shortcut="⌘ E">Edit</DropdownMenu.Item>
    <DropdownMenu.Item shortcut="⌘ D">Duplicate</DropdownMenu.Item>
    {/* ... */}
  </DropdownMenu.Content>
</DropdownMenu.Root>

This is pretty similar to the web component sketch above – the “guts” of the dropdown are on display for all to see, and anything in the UI is fully customizable.

To me, though, these solutions are clearly taking the burden of complexity and shifting it from the component author to the component consumer. Rather than starting with the simplest case and providing a bare-bones default, the component author is instead starting with the complex case, forcing the consumer to (likely) copy-paste a lot of boilerplate into their codebase before they can start tweaking.

Now, maybe this is the right solution! And maybe the long-term maintenance costs are worth it! But I think the tradeoff should still be acknowledged.

As I understand it, though, these kinds of “headless” solutions are still a bit novel, so we haven’t gotten a lot of real-world data to prove the long-term benefits. I have no doubt, though, that a lot of component authors see this approach as the necessary remedy to the problem of runaway configurability – i.e. component consumers ask for every little thing to be configurable, all those configuration options get shoved into one top-level API, and the overall experience starts to look like recursive Swiss Army Knives. (Tanner Linsley gives a great talk about this, reflecting on 5 years of building React Table.)

Personally, I’m intrigued by this technique, but I’m not fully convinced that exposing the “guts” of a component really reduces the overall maintenance cost. It’s kind of like, instead of selling a car with a predefined set of customizations (color, window tint, automatic vs manual, etc.), you’re selling a loose set of parts that the customer can mix-and-match into whatever kind of vehicle they want. Rather than a car off the assembly line, it reminds me of a jerry-rigged contraption from Minecraft or Zelda.

Screenshot from Zelda Tears of the Kingdom showing Link riding a four-wheeled board with a ball and a fan glued to it

In Tears of the Kingdom, you can glue together just about anything, and it will kind of work.

I haven’t worked on such a component system, but I’d worry that you’d get bugs along the lines of, “Well, when I put the slider on the left it works, but when I put it on the right, the scroll position gets messed up.” There is so much potential customizability that I’m not sure how you could even write tests to cover all the possible configurations. Although maybe that’s the point – there’s effectively no UI, so if the UI is messed up, then it’s the component consumer’s job to fix it.

Conclusion

I don’t have all the answers. At this point, I just want to make sure we’re asking the right questions.

To me, any proposed solution to the current problems with shadow DOM should be prefaced with:

  • What kind of website or webapp is the intended context?
  • Who stands to benefit from this change – the component author or page author?
  • Who needs to shift their behavior to make the whole thing work?

I’m also not convinced that any of this stuff is ripe enough for the standards discussion to begin. There are so many options that can be explored in userland right now (e.g. the “expose the guts” proposal, or a polyfill for open-stylable shadow roots), that it’s premature to start asking standards bodies to standardize anything.

I also think that the inherent conflict between the needs of component authors and component consumers has not really been acknowledged enough in the standards discussions. And the W3C’s priority of constituencies doesn’t help us much here:

User needs come before the needs of web page authors, which come before the needs of user agent implementors, which come before the needs of specification writers, which come before theoretical purity.

In the above formulation, there’s no distinction between component authors and component consumers – they are both just “web page authors.” I suppose conceptually, if we imagine the whole web platform as a “stack,” then we would place the needs of component consumers over component authors. But even that gets muddy sometimes, since component authors and component consumers can work on the same team or even be the same person.

Overall, what I would love to see is a thorough synopsis of the various groups involved in the web component ecosystem, how the existing solutions have worked in practice, what’s been tried and what hasn’t, and what needs to change to move forward. (This blog post is not it; this is just my feeble groping for a vocabulary to even start talking about the problem.)

In my mind, we are still chasing the holy grail of true component reusability. I often think back to this eloquent talk by Jan Miksovsky, where he explains how much has been standardized in the world of building construction (e.g. the size of windows and door frames), whereas us web developers are still stuck rebuilding the same thing over and over again. I don’t know if we’ll ever reach true component reusability (or if building construction is really as rosy as he describes – I can barely wield a hammer), but I do know that I still find the vision inspiring.

Rebuilding emoji-picker-element on a custom framework

In my last post, we went on a guided tour of building a JavaScript framework from scratch. This wasn’t just an intellectual exercise, though – I actually had a reason for wanting to build my own framework.

For a few years now, I’ve been maintaining emoji-picker-element, which is designed as a fast, simple, and lightweight web component that you can drop onto any page that needs an emoji picker.

Screenshot of an emoji picker showing a search box and a grid of emoji

Most of the maintenance work has been about simply keeping pace with the regular updates to the Unicode standard to add new emoji versions. (Shout-out to Emojibase for providing a stable foundation to build on!) But I’ve also worked to keep up with the latest Svelte versions, since emoji-picker-element is based on Svelte.

The project was originally written in Svelte v3, and the v4 upgrade was nothing special. The v5 upgrade was only slightly more involved, which is astounding given that the framework was essentially rewritten from scratch. (How Rich and the team managed to pull this off boggles my mind.)

I should mention at this point that I think Svelte is a great framework, and a pleasure to work with. It’s probably my favorite JavaScript framework (other than the one I work on!). That said, a few things bugged me about the Svelte v5 upgrade:

  • It grew emoji-picker-element’s bundle size by 7.1kB minified (it was originally a bit more, but Dominic Gannaway graciously made improvements to the tree-shaking).
  • It dropped support for older browsers due to syntax changes, in particular Safari 12 (which is 0.25-0.5% of browsers depending on who you ask).

Now, neither of these things really ought to be a dealbreaker. 7.1kB is not a huge amount for the average webapp, and an emoji picker should probably be lazy-loaded most of the time anyway. Also, Safari 12 might not be worth worrying about (and if it is, it won’t be in a couple years).

I also don’t think there’s anything wrong with building a standalone web component on top of a JavaScript framework – I’ve said so in the past. There are lots of fiddly bits that are hard to get right when you’re building a web component, and 99 times out of 100, you’re much better off using something like Svelte, or Lit, or Preact, or petite-vue, than trying to wing it yourself in vanilla JS and building a half-baked framework in the process.

That said… I enjoy building a half-baked framework. And I have a bit of a competitive streak that makes me want to trim the bundle size as much as possible. So I decided to take this as an opportunity to rebuild emoji-picker-element on top of my own custom framework.

The end result is more or less what you saw in the previous post: a bit of reactivity, a dash of tagged template literals, and poof! A new framework is born.

This new framework is honestly only slightly more complex than what I sketched out in that post – I ended up only needing 85 lines of code for the reactivity engine and 233 for the templating system (as measured by cloc).

Of course, to get this minuscule size, I had to take some shortcuts. If this were an actual framework I was releasing to the world, I would need to handle a long tail of edge cases, perf hazards, and gnarly tradeoffs. But since this framework only needs to support one component, I can afford to cut some corners.

So does this tiny framework actually cut the mustard? Here are the results:

  • The bundle size is 6.1kB smaller than the current implementation (and ~13.2kB smaller than the Svelte 5 version).
  • Safari 12 is still supported (without needing code transforms).
  • There is no regression in runtime performance (as measured by Tachometer).
  • Initial memory usage is reduced by 140kB.

Here are the stats:

Metric                  Svelte v4   Svelte v5       Custom
Bundle size (min)       42.6kB      49.7kB          36.5kB
  ↳ Delta                           +7.1kB (+17%)   -6.1kB (-14%)
Bundle size (min+gz)    14.9kB      18.8kB          12.6kB
  ↳ Delta                           +3.9kB (+26%)   -2.3kB (-15%)
Initial memory usage    1.23MB      1.5MB           1.09MB
  ↳ Delta                           +270kB (+22%)   -140kB (-11%)

Note: I’m not trying to say that Svelte 5 is bad, or that I’m smarter than the Svelte developers. As mentioned above, the only way I can get these fantastic numbers is by seriously cutting a lot of corners. And I actually really like the new features in Svelte v5 (snippets in particular are amazing, and the benchmark performance is truly impressive). I also can’t fault Svelte for focusing on their most important consumers, who are probably building entire apps out of Svelte components, and don’t care much about a higher baseline bundle size.

So was it worth it? I dunno. Maybe I will get a flood of bug reports after I ship this, and I will come crawling back to Svelte. Or maybe I will find that it’s too hard to add new features without the flexibility of a full framework. But I doubt it. I enjoyed building my own framework, and so I think I’ll keep it around just for the fun of it.

Side projects for me are always about three things: 1) learning, 2) sharing something with the world, and 3) having fun while doing so. emoji-picker-element ticks all three boxes for me, so I’m going to stick with the current design for the time being.

Let’s learn how modern JavaScript frameworks work by building one


In my day job, I work on a JavaScript framework (LWC). And although I’ve been working on it for almost three years, I still feel like a dilettante. When I read about what’s going on in the larger framework world, I often feel overwhelmed by all the things I don’t know.

One of the best ways to learn how something works, though, is to build it yourself. And plus, we gotta keep those “days since last JavaScript framework” memes going. So let’s write our own modern JavaScript framework!

What is a “modern JavaScript framework”?

React is a great framework, and I’m not here to dunk on it. But for the purposes of this post, “modern JavaScript framework” means “a framework from the post-React era” – i.e. Lit, Solid, Svelte, Vue, etc.

React has dominated the frontend landscape for so long that every newer framework has grown up in its shadow. These frameworks were all heavily inspired by React, but they’ve evolved away from it in surprisingly similar ways. And although React itself has continued innovating, I find that the post-React frameworks are more similar to each other than to React nowadays.

To keep things simple, I’m also going to avoid talking about server-first frameworks like Astro, Marko, and Qwik. These frameworks are excellent in their own way, but they come from a slightly different intellectual tradition compared to the client-focused frameworks. So for this post, let’s only talk about client-side rendering.

What sets modern frameworks apart?

From my perspective, the post-React frameworks have all converged on the same foundational ideas:

  1. Using reactivity (e.g. signals) for DOM updates.
  2. Using cloned templates for DOM rendering.
  3. Using modern web APIs like <template> and Proxy, which make all of the above easier.

Now to be clear, these frameworks differ a lot at the micro level, and in how they handle things like web components, compilation, and user-facing APIs. Not all frameworks even use Proxies. But broadly speaking, most framework authors seem to agree on the above ideas, or they’re moving in that direction.

So for our own framework, let’s try to do the bare minimum to implement these ideas, starting with reactivity.

Reactivity

It’s often said that “React is not reactive”. What this means is that React has a pull-based rather than a push-based model. To grossly oversimplify things: in the worst case, React assumes that your entire virtual DOM tree needs to be rebuilt from scratch, and the only way to prevent these updates is to implement React.memo (or in the old days, shouldComponentUpdate).

Using a virtual DOM mitigates some of the cost of the “blow everything away and start from scratch” strategy, but it doesn’t fully solve it. And asking developers to write the correct memo code is a losing battle. (See React Forget for an ongoing attempt to solve this.)

Instead, modern frameworks use a push-based reactive model. In this model, individual parts of the component tree subscribe to state updates and only update the DOM when the relevant state changes. This prioritizes a “performant by default” design in exchange for some upfront bookkeeping cost (especially in terms of memory) to keep track of which parts of the state are tied to which parts of the UI.

Note that this technique is not necessarily incompatible with the virtual DOM approach: tools like Preact Signals and Million show that you can have a hybrid system. This is useful if your goal is to keep your existing virtual DOM framework (e.g. React) but to selectively apply the push-based model for more performance-sensitive scenarios.

For this post, I’m not going to rehash the details of signals themselves, or subtler topics like fine-grained reactivity, but I am going to assume that we’ll use a reactive system.

Note: there are lots of nuances when talking about what qualifies as “reactive.” My goal here is to contrast React with the post-React frameworks, especially Solid, Svelte v5 in “runes” mode, and Vue Vapor.

Cloning DOM trees

For a long time, the collective wisdom in JavaScript frameworks was that the fastest way to render the DOM is to create and mount each DOM node individually. In other words, you use APIs like createElement, setAttribute, and textContent to build the DOM piece-by-piece:

const div = document.createElement('div')
div.setAttribute('class', 'blue')
div.textContent = 'Blue!'

One alternative is to just shove a big ol’ HTML string into innerHTML and let the browser parse it for you:

const container = document.createElement('div')
container.innerHTML = `
  <div class="blue">Blue!</div>
`

This naïve approach has a big downside: if there is any dynamic content in your HTML (for instance, red instead of blue), then you would need to parse HTML strings over and over again. Plus, you are blowing away the DOM with every update, which would reset state such as the value of <input>s.

Note: using innerHTML also has security implications. But for the purposes of this post, let’s assume that the HTML content is trusted. [1]

At some point, though, folks figured out that parsing the HTML once and then calling cloneNode(true) on the whole thing is pretty danged fast:

const template = document.createElement('template')
template.innerHTML = `
  <div class="blue">Blue!</div>
`
template.content.cloneNode(true) // this is fast!

Here I’m using a <template> tag, which has the advantage of creating “inert” DOM. In other words, things like <img> or <video autoplay> don’t automatically start downloading anything.

How fast is this compared to manual DOM APIs? To demonstrate, here’s a small benchmark. Tachometer reports that the cloning technique is about 50% faster in Chrome, 15% faster in Firefox, and 10% faster in Safari. (This will vary based on DOM size and number of iterations, but you get the gist.)
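If you want to eyeball this yourself without a benchmark harness, a rough (and unscientific) comparison might look like the following – as noted above, the real numbers vary a lot with DOM size and iteration count:

const ITERATIONS = 10000

function buildManually() {
  const div = document.createElement('div')
  div.setAttribute('class', 'blue')
  div.textContent = 'Blue!'
  return div
}

const template = document.createElement('template')
template.innerHTML = '<div class="blue">Blue!</div>'

function buildByCloning() {
  return template.content.cloneNode(true)
}

function time(label, build) {
  const start = performance.now()
  for (let i = 0; i < ITERATIONS; i++) {
    build()
  }
  console.log(label, (performance.now() - start).toFixed(1), 'ms')
}

time('manual DOM APIs', buildManually)
time('template cloning', buildByCloning)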

What’s interesting is that <template> is a new-ish browser API, not available in IE11, and originally designed for web components. Somewhat ironically, this technique is now used in a variety of JavaScript frameworks, regardless of whether they use web components or not.

Note: for reference, here is the use of cloneNode on <template>s in Solid, Vue Vapor, and Svelte v5.

There is one major challenge with this technique, which is how to efficiently update dynamic content without blowing away DOM state. We’ll cover this later when we build our toy framework.

Modern JavaScript APIs

We’ve already encountered one new API that helps a lot, which is <template>. Another one that’s steadily gaining traction is Proxy, which can make building a reactivity system much simpler.

When we build our toy example, we’ll also use tagged template literals to create an API like this:

const dom = html`
  <div>Hello ${ name }!</div>
`

Not all frameworks use this tool, but notable ones include Lit, HyperHTML, and ArrowJS. Tagged template literals can make it much simpler to build ergonomic HTML templating APIs without needing a compiler.

Step 1: building reactivity

Reactivity is the foundation upon which we'll build the rest of the framework. Reactivity will define how state is managed, and how the DOM updates when state changes.

Let's start with some "dream code" to illustrate what we want:

const state = {}

state.a = 1
state.b = 2

createEffect(() => {
  state.sum = state.a + state.b
})

Basically, we want a “magic object” called state, with two props: a and b. And whenever those props change, we want to set sum to be the sum of the two.

Assuming we don’t know the props in advance (or have a compiler to determine them), a plain object will not suffice for this. So let’s use a Proxy, which can react whenever a new value is set:

const state = new Proxy({}, {
  get(obj, prop) {
    onGet(prop)
    return obj[prop]
  },
  set(obj, prop, value) {
    obj[prop] = value
    onSet(prop, value)
    return true
  }
})

Right now, our Proxy doesn’t do anything interesting, except give us some onGet and onSet hooks. So let’s make it flush updates after a microtask:

let queued = false

function onSet(prop, value) {
  if (!queued) {
    queued = true
    queueMicrotask(() => {
      queued = false
      flush()
    })
  }
}

Note: if you’re not familiar with queueMicrotask, it’s a newer DOM API that’s basically the same as Promise.resolve().then(...), but with less typing.

Why flush updates? Mostly because we don’t want to run too many computations. If we update whenever both a and b change, then we’ll uselessly compute the sum twice. By coalescing the flush into a single microtask, we can be much more efficient.

Next, let’s make flush update the sum:

function flush() {
  state.sum = state.a + state.b
}
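To make the coalescing concrete, here’s roughly how we’d expect this to behave:

state.a = 3
state.b = 4
// Both sets happen in the same synchronous task, so onSet only queues
// one microtask. After this task finishes, flush() runs once and
// recomputes state.sum a single time (7), instead of twice.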

This is great, but it’s not yet our “dream code.” We’ll need to implement createEffect so that the sum is computed only when a and b change (and not when something else changes!).

To do this, let’s use an object to keep track of which effects need to be run for which props:

const propsToEffects = {}

Next comes the crucial part! We need to make sure that our effects can subscribe to the right props. To do so, we’ll run the effect, note any get calls it makes, and create a mapping between the prop and the effect.

To break it down, remember our “dream code” is:

createEffect(() => {
  state.sum = state.a + state.b
})

When this function runs, it calls two getters: state.a and state.b. These getters should trigger the reactive system to notice that the function relies on the two props.

To make this happen, we’ll start with a simple global to keep track of what the “current” effect is:

let currentEffect

Then, the createEffect function will set this global before calling the function:

function createEffect(effect) {
  currentEffect = effect
  effect()
  currentEffect = undefined
}

The important thing here is that the effect is immediately invoked, with the global currentEffect being set in advance. This is how we can track whatever getters it might be calling.

Now, we can implement the onGet in our Proxy, which will set up the mapping between the global currentEffect and the property:

function onGet(prop) {
  const effects = propsToEffects[prop] ?? 
      (propsToEffects[prop] = [])
  effects.push(currentEffect)
}

After this runs once, propsToEffects should look like this:

{
  "a": [theEffect],
  "b": [theEffect]
}

…where theEffect is the “sum” function we want to run.

Next, our onSet should add any effects that need to be run to a dirtyEffects array:

const dirtyEffects = []

function onSet(prop, value) {
  if (propsToEffects[prop]) {
    dirtyEffects.push(...propsToEffects[prop])
    // ...
  }
}

At this point, we have all the pieces in place for flush to call all the dirtyEffects:

function flush() {
  while (dirtyEffects.length) {
    dirtyEffects.shift()()
  }
}

Putting it all together, we now have a fully functional reactivity system! You can play around with it yourself and try setting state.a and state.b in the DevTools console – the state.sum will update whenever either one changes.
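Since the pieces above are spread across several snippets (and a couple have parts elided), here’s one way to stitch them together into a single runnable sketch. Note the small guard in onGet: without it, re-running an effect during flush (when currentEffect is undefined) would register a bogus subscription.

const propsToEffects = {}
const dirtyEffects = []
let currentEffect
let queued = false

function onGet(prop) {
  if (currentEffect) { // guard: only track while an effect is running
    const effects = propsToEffects[prop] ?? (propsToEffects[prop] = [])
    effects.push(currentEffect)
  }
}

function onSet(prop, value) {
  if (propsToEffects[prop]) {
    dirtyEffects.push(...propsToEffects[prop])
    if (!queued) {
      queued = true
      queueMicrotask(() => {
        queued = false
        flush()
      })
    }
  }
}

function flush() {
  while (dirtyEffects.length) {
    dirtyEffects.shift()()
  }
}

function createEffect(effect) {
  currentEffect = effect
  effect()
  currentEffect = undefined
}

const state = new Proxy({}, {
  get(obj, prop) {
    onGet(prop)
    return obj[prop]
  },
  set(obj, prop, value) {
    obj[prop] = value
    onSet(prop, value)
    return true
  }
})

state.a = 1
state.b = 2

createEffect(() => {
  state.sum = state.a + state.b
})

// state.sum is 3 immediately; setting state.a = 5 will update it to 7
// after the next microtask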

Now, there are plenty of advanced cases that we’re not covering here:

  1. Using try/catch in case an effect throws an error
  2. Avoiding running the same effect twice
  3. Preventing infinite cycles
  4. Subscribing effects to new props on subsequent runs (e.g. if certain getters are only called in an if block)

However, this is more than enough for our toy example. Let’s move on to DOM rendering.

Step 2: DOM rendering

We now have a functional reactivity system, but it’s essentially “headless.” It can track changes and compute effects, but that’s about it.

At some point, though, our JavaScript framework needs to actually render some DOM to the screen. (That’s kind of the whole point.)

For this section, let’s forget about reactivity for a moment and imagine we’re just trying to build a function that can 1) build a DOM tree, and 2) update it efficiently.

Once again, let’s start off with some dream code:

function render(state) {
  return html`
    <div class="${state.color}">${state.text}</div>
  `
}

As I mentioned, I’m using tagged template literals, à la Lit, because I found them to be a nice way to write HTML templates without needing a compiler. (We’ll see in a moment why we might actually want a compiler instead.)

We’re re-using our state object from before, this time with a color and text property. Maybe the state is something like:

state.color = 'blue'
state.text = 'Blue!'

When we pass this state into render, it should return the DOM tree with the state applied:

<div class="blue">Blue!</div>

Before we go any further, though, we need a quick primer on tagged template literals. Our html tag is just a function that receives two arguments: the tokens (array of static HTML strings) and expressions (the evaluated dynamic expressions):

function html(tokens, ...expressions) {
}

In this case, the tokens are (whitespace removed):

[
  "<div class=\"",
  "\">",
  "</div>"
]

And the expressions are:

[
  "blue",
  "Blue!"
]

The tokens array will always be exactly 1 longer than the expressions array, so we can trivially zip them up together:

const allTokens = tokens
    .map((token, i) => (expressions[i - 1] ?? '') + token)

This will give us an array of strings:

[
  "<div class=\"",
  "blue\">",
  "Blue!</div>"
]

We can join these strings together to make our HTML:

const htmlString = allTokens.join('')

And then we can use innerHTML to parse it into a <template>:

function parseTemplate(htmlString) {
  const template = document.createElement('template')
  template.innerHTML = htmlString
  return template
}

This template contains our inert DOM (technically a DocumentFragment), which we can clone at will:

const cloned = template.content.cloneNode(true)

Of course, parsing the full HTML whenever the html function is called would not be great for performance. Luckily, tagged template literals have a built-in feature that will help out a lot here.

For every unique usage of a tagged template literal, the tokens array is always the same whenever the function is called – in fact, it’s the exact same object!

For example, consider this case:

function sayHello(name) {
  return html`<div>Hello ${name}</div>`
}

Whenever sayHello is called, the tokens array will always be identical:

[
  "<div>Hello ",
  "</div>"
]

The only time tokens will be different is for completely different locations of the tagged template:

html`<div></div>`
html`<span></span>` // Different from above
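You can verify this identity yourself with a tag function that just returns its tokens array:

const getTokens = (tokens) => tokens

function greet(name) {
  return getTokens`<div>Hello ${name}</div>`
}

console.log(greet('Alice') === greet('Bob')) // true – same call site, same array
console.log(greet('Alice') === getTokens`<div>Hello ${'Alice'}</div>`) // false – different call site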

We can use this to our advantage by using a WeakMap to keep a mapping of the tokens array to the resulting template:

const tokensToTemplate = new WeakMap()

function html(tokens, ...expressions) {
  let template = tokensToTemplate.get(tokens)
  if (!template) {
    // ...
    template = parseTemplate(htmlString)
    tokensToTemplate.set(tokens, template)
  }
  return template
}

This is kind of a mind-blowing concept, but the uniqueness of the tokens array means that each unique html`...` template only needs to be parsed once, no matter how many times it’s called.

Next, we just need a way to update the cloned DOM node with the expressions array (which is likely to be different every time, unlike tokens).

To keep things simple, let’s just replace the expressions array with a placeholder for each index:

const stubs = expressions.map((_, i) => `__stub-${i}__`)

If we zip this up like before, it will create this HTML:

<div class="__stub-0__">
  __stub-1__
</div>

We can write a simple string replacement function to replace the stubs:

function replaceStubs (string) {
  return string.replaceAll(/__stub-(\d+)__/g, (_, i) => (
    expressions[i]
  ))
}

And now whenever the html function is called, we can clone the template and update the placeholders:

const element = cloned.firstElementChild
for (const { name, value } of element.attributes) {
  element.setAttribute(name, replaceStubs(value))
}
element.textContent = replaceStubs(element.textContent)

Note: we are using firstElementChild to grab the first top-level element in the template. For our toy framework, we’re assuming there’s only one.

Now, this is still not terribly efficient – notably, we are updating textContent and attributes that don’t necessarily need to be updated. But for our toy framework, this is good enough.

We can test it out by rendering with different state:

document.body.appendChild(render({ color: 'blue', text: 'Blue!' }))
document.body.appendChild(render({ color: 'red', text: 'Red!' }))

This works!

Step 3: combining reactivity and DOM rendering

Since we already have a createEffect from the rendering system above, we can now combine the two to update the DOM based on the state:

const container = document.getElementById('container')

createEffect(() => {
  const dom = render(state)
  if (container.firstElementChild) {
    container.firstElementChild.replaceWith(dom)
  } else {
    container.appendChild(dom)
  }
})

This actually works! We can combine this with the “sum” example from the reactivity section by merely creating another effect to set the text:

createEffect(() => {
  state.text = `Sum is: ${state.sum}`
})

This renders “Sum is: 3”.

You can play around with this toy example. If you set state.a = 5, then the text will automatically update to say “Sum is: 7”.

Next steps

There are lots of improvements we could make to this system, especially the DOM rendering bit.

Most notably, we are missing a way to update content for elements inside a deep DOM tree, e.g.:

<div class="${color}">
  <span>${text}</span>
</div>

For this, we would need a way to uniquely identify every element inside of the template. There are lots of ways to do this:

  1. Lit, when parsing HTML, uses a system of regexes and character matching to determine whether a placeholder is within an attribute or text content, plus the index of the target element (in depth-first TreeWalker order).
  2. Frameworks like Svelte and Solid have the luxury of parsing the entire HTML template during compilation, which provides the same information. They also generate code that calls firstChild and nextSibling to traverse the DOM to find the element to update.

Note: traversing with firstChild and nextSibling is similar to the TreeWalker approach, but more efficient than element.children. This is because browsers use linked lists under the hood to represent the DOM.

Whether we decide to do Lit-style client-side parsing or Svelte/Solid-style compile-time parsing, what we want is some kind of mapping like this:

[
  {
    elementIndex: 0, // <div> above
    attributeName: 'class',
    stubIndex: 0 // index in expressions array
  },
  {
    elementIndex: 1, // <span> above
    textContent: true,
    stubIndex: 1 // index in expressions array
  }
]

These bindings would tell us exactly which elements need to be updated, which attribute (or textContent) needs to be set, and where to find the expression to replace the stub.
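As a sketch of how such bindings might be applied (assuming we already have the bindings array and a fresh clone – this isn’t what Lit or Solid literally do, just one way to wire it up):

function applyBindings(fragment, bindings, expressions) {
  // walk the clone's elements in depth-first order, which is the same
  // order used to assign elementIndex when the bindings were created
  const walker = document.createTreeWalker(fragment, NodeFilter.SHOW_ELEMENT)
  const elements = []
  while (walker.nextNode()) {
    elements.push(walker.currentNode)
  }
  for (const { elementIndex, attributeName, textContent, stubIndex } of bindings) {
    const element = elements[elementIndex]
    const value = expressions[stubIndex]
    if (attributeName) {
      element.setAttribute(attributeName, value)
    } else if (textContent) {
      element.textContent = value
    }
  }
}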

The next step would be to avoid cloning the template every time, and to just directly update the DOM based on the expressions. In other words, we not only want to parse once – we want to only clone and set up the bindings once. This would reduce each subsequent update to the bare minimum of setAttribute and textContent calls.

Note: you may wonder what the point of template-cloning is, if we end up needing to call setAttribute and textContent anyway. The answer is that most HTML templates are largely static content with a few dynamic “holes.” By using template-cloning, we clone the vast majority of the DOM, while only doing extra work for the “holes.” This is the key insight that makes this system work so well.

Another interesting pattern to implement would be iterations (or repeaters), which come with their own set of challenges, like reconciling lists between updates and handling “keys” for efficient replacement.

I’m tired, though, and this blog post has gone on long enough. So I leave the rest as an exercise to the reader!

Conclusion

So there you have it. In the span of one (lengthy) blog post, we’ve implemented our very own JavaScript framework. Feel free to use this as the foundation for your brand-new JavaScript framework, to release to the world and enrage the Hacker News crowd.

Personally I found this project very educational, which is partly why I did it in the first place. I was also looking to replace the current framework for my emoji picker component with a smaller, more custom-built solution. In the process, I managed to write a tiny framework that passes all the existing tests and is ~6kB smaller than the current implementation, which I’m pretty proud of.

In the future, I think it would be neat if browser APIs were full-featured enough to make it even easier to build a custom framework. For example, the DOM Part API proposal would take out a lot of the drudgery of the DOM parsing-and-replacement system we built above, while also opening the door to potential browser performance optimizations. I could also imagine (with some wild gesticulation) that an extension to Proxy could make it easier to build a full reactivity system without worrying about details like flushing, batching, or cycle detection.

If all those things were in place, then you could imagine effectively having a “Lit in the browser,” or at least a way to quickly build your own “Lit in the browser.” In the meantime, I hope that this small exercise helped to illustrate some of the things framework authors think about, and some of the machinery under the hood of your favorite JavaScript framework.

Thanks to Pierre-Marie Dartus for feedback on a draft of this post.

Footnotes

1. Now that we’ve built the framework, you can see why the content passed to innerHTML can be considered trusted. All HTML tokens either come from tagged template literals (in which case they’re fully static and authored by the developer) or are placeholders (which are also written by the developer). User content is only set using setAttribute or textContent, which means that no HTML sanitization is required to avoid XSS attacks. Although you should probably just use CSP anyway!

Catching errors thrown from connectedCallback

Here’s a deep-in-the-weeds thing about web components that I ran into recently.

Let’s say you have a humble component:

class Hello extends HTMLElement {}
customElements.define('hello-world', Hello);

And let’s say that this component throws an error in its connectedCallback:

class Hello extends HTMLElement {
  connectedCallback() {
    throw new Error('haha!');
  }
}

Why would it do that? I dunno, maybe it needs to validate its props or something. Or maybe it’s just having a bad day.

In any case, you might wonder: how could you test this functionality? You might naïvely try a try/catch:

const element = document.createElement('hello-world');
try {
  document.body.appendChild(element);
} catch (error) {
  console.log('Caught?', error);
}

Unfortunately, this doesn’t work. In the DevTools console, you’ll see:

Uncaught Error: haha!

Our elusive error is uncaught. So… how can you catch it? In the end, it’s fairly simple:

window.addEventListener('error', event => {
  console.log('Caught!', event.error);
});
document.body.appendChild(element);

This will actually catch the error.

As it turns out, connectedCallback errors bubble up directly to the window, rather than locally to where you called appendChild. (Even though appendChild is what caused connectedCallback to fire in the first place. For the spec nerds out there, this is apparently called a custom element callback reaction.)

Our addEventListener solution works, but it’s a little janky and error-prone. In short:

  • You need to remember to call event.preventDefault() so that nobody else (like your persnickety test runner) catches the error and fails your tests.
  • You need to remember to call removeEventListener (or AbortSignal if you’re fancy).

A full-featured utility might look like this:

function catchConnectedError(callback) {
  let error;
  const listener = event => {
    event.preventDefault();
    error = event.error;
  };
  window.addEventListener('error', listener);
  try {
    callback();
  } finally {
    window.removeEventListener('error', listener);
  }
  return error;
}

…which you could use like so:

const error = catchConnectedError(() => {
  document.body.appendChild(element);
});
console.log('Caught!', error);

If this comes in handy for you, you might add it to your testing library of choice. For instance, here’s a variant I wrote recently for Jest.

Hope this quick tip was helpful, and keep connectin’ and errorin’!

Update: This is also true of any other “callback reactions” such as disconnectedCallback, attributeChangedCallback, form-associated custom element lifecycle callbacks, etc. I’ve just found that, most commonly, you want to catch errors from connectedCallback.

Use web components for what they’re good at

Web components logo of two wrenches together

Dave Rupert recently made a bit of a stir with his post “If Web Components are so great, why am I not using them?”. I’ve been working with web components for a few years now, so I thought I’d weigh in on this.

At the risk of giving the most senior-engineer-y “It depends” answer ever: I think web components have strengths and weaknesses, and you have to understand the tradeoffs before deciding when to use them. So let’s explore some cases where web components really shine, before moving on to where they might fall flat.

Client-rendered leaf components

To me, this is the most unambiguously slam-dunk use case for web components. You have some component at the leaf of the DOM tree, it doesn’t need to be rendered server-side, and it doesn’t <slot> any content inside of it. Examples include: a rich text editor, a calendar widget, a color picker, etc.

At this point, you’ve already bypassed a bunch of tricky bits of web components, such as Server-Side Rendering (SSR), hydration, slotting, maybe even shadow DOM. If you’re not using a framework, or you’re using one that supports web components, you can just plop the <fancy-component> tag into your template or JSX and call it a day.

For instance, take my emoji-picker-element. It’s one line of HTML to import it:

<script type="module" src="https://cdn.jsdelivr.net/npm/emoji-picker-element@1/index.js"></script>

And one line to use it:

<emoji-picker></emoji-picker>

No bundler, no transpiler, no framework integration, just copy-paste. It’s almost like ye olde days of jQuery plugins. And yet, I’ve also seen it used in complex SPA projects – web components can run the gamut.

This is about as close as you can get to the original vision for web components, which is that using <fancy-element> should be as easy as using built-in HTML elements.

Glue code, or avoiding the big rewrite

Picture this: you’ve got a big React codebase, it’s served you well for a while, but now your team wants to move to Svelte. Time to rewrite the whole thing, right? Including finding new Svelte versions of every third-party component you’re using?

This is the way a lot of frontend devs think about frameworks, with all these huge switching costs when moving from one to the other. The biggest misconception I’ve seen about web components is that they’re just another flavor of the same story.

They’re not. The whole point of web components is to liberate us from this churn. If you decide to switch from Vue to Lit, or from Angular to Stencil (or whatever), and if you’re rewriting all your components in one go, then you’re signing up for a lot of unnecessary pain.

Just let your old code and your new code live side-by-side. Use web components as the interoperability layer to glue the two together. You don’t need to rewrite everything all at once:

<old-component>
  <new-component>
  </new-component>
</old-component>

Web components can pass props/attributes down, and send events back up. (That’s kind of their whole thing.) If your framework supports web components, then this works out of the box. (And if not, you can write some light glue code.)
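
As a rough sketch of what that glue can look like, here’s a bare-bones custom element that takes an attribute down and sends an event back up (the attribute, event name, and notifyChange method are made up for illustration):

// Hypothetical <new-component> that accepts a "label" attribute
// and reports changes back up via a plain CustomEvent
class NewComponent extends HTMLElement {
  static get observedAttributes() {
    return ['label'];
  }
  attributeChangedCallback(name, oldValue, newValue) {
    // "Props down": re-render whenever the label attribute changes
    this.textContent = newValue;
  }
  notifyChange(value) {
    // "Events up": anything listening on the element (or an ancestor) can react
    this.dispatchEvent(new CustomEvent('change', { detail: value, bubbles: true }));
  }
}
customElements.define('new-component', NewComponent);

// The consuming code doesn't need to know (or care) what rendered the element:
const el = document.createElement('new-component');
el.setAttribute('label', 'Hello');
el.addEventListener('change', event => console.log(event.detail));
document.body.appendChild(el);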

Now, some people get squeamish at the idea of two frameworks living on the same page, but I think this is more gut-based than evidence-based. And to be fair, if you’re using a meta-framework to do SSR/hydration, then this partial migration may be easier said than done. But if web components are good at anything, it’s defining a high-level contract for composing two components together, on the client side anyway.

So if you’re tired of rewriting your whole UI every year (or your boss is tired of you doing it), then maybe web components are worth considering.

Design systems and enterprise

If you watch Cassondra Roberts’ talk from CSS Day, there’s a nice slide with a lot of corporate logos attesting to web components’ penetration:

CSS Day talk screenshot showing Cassondra Roberts alongside a slide showing company logos of Adobe, RedHat, Microsoft, IBM, Google, Apple, ING, GitHub, Netlify, Salesforce, GitLab

If this isn’t enough, you could also look at Oracle, SAP, ServiceNow… the list goes on and on.

What you’ll notice is that a lot of big companies (like the one I work for) are quietly using web components, especially in their design systems and component libraries. If you spend a lot of time on webdev social media, this might surprise you. It might also surprise you to learn that, by some measures, React is used on roughly 8% of page loads, whereas web components are used on 20%.

The thing is, a lot of big companies are not on social media (Twitter/X, Reddit, etc.) trying to sell you on web components or teach you how to use them. On the other hand, there are plenty of tech influencers on Twitter busily keeping up to date with every minor version of React and what’s new in that ecosystem. The reason for this is pretty simple: big companies tend to talk a lot internally, but not so much externally, whereas small companies (agencies, startups, freelancers, etc.) tend to be more active on social media relative to their company size. So if web components are more popular inside the enterprise than outside of it, you’d never know it from browsing Twitter all day.

So why are big enterprises so gaga for web components? For one thing, design systems based on web components work across a variety of environments. A big company might have frontends written in React, Angular, Ember, and static HTML, and they all have to play nicely with the company’s theming and branding. The big rewrite (as described above) may be a fun exercise for your average startup, but it’s just not practical in the enterprise world.

Having a lot of consumers of your codebase, and having to think on longer timescales, just leads to different technical decisions. And to me, this points to the main reason enterprises love web components: stability and longevity.

Think about your average React codebase, and how updating any dependency (React Router, Redux, React itself, etc.) can lead to a weeks-long effort of rewriting your code to accommodate all the breaking changes. Cue the inevitable appearance of Hyrum’s Law at enterprise scale, where even a tiny change can cause a butterfly effect that breaks thousands of components, and even bumping a minor version can lead to weeks of testing, validating, and documenting. In this world, your average React minor version bump is an ordeal – a major version bump is a disaster.

Compare this to the backwards-compatibility guarantees of the web platform, where the venerable Space Jam website from 1996 still works to this day. Web components hook into this stability story, which is a huge plus for any company that doesn’t have the luxury of rewriting their frontend every couple years.

When you use a web component, connectedCallback is just connectedCallback – it’s not going to change. And shadow DOM style scoping, with all of its subtleties, is not going to change either. Whatever code you can delegate to the browser, that’s code you’re not having to maintain or validate over the years; you’ve effectively outsourced that responsibility to Google, Apple, and Mozilla.

Enterprises are slow, cautious, and risk-averse – just like the web platform. No wonder web components have taken the enterprise world by storm.

Downsides of web components

All of the pluses of web components should be weighed against their weaknesses. And web components have their fair share:

  • Server-side rendering (SSR). I would argue that this is still not a solved problem in web-components-land. Sure, we have Declarative Shadow DOM, but that’s just one part of the puzzle. There’s no standard for rendering web components on the server, so every framework does it a bit differently. The fact that Lit SSR is still under “Lit Labs” should tell you something. Maybe in the future, when you can render 3 different web component frameworks on the server, and they compose together and hydrate nicely, then I’ll consider this solved. But I think we’re a few years away from that, at least.
  • Accessibility. You can’t have ARIA references that easily cross shadow boundaries, and dialogs and focus can be tricky. At the very least, if you don’t want to mess up accessibility, then you have to think really carefully about your component structure from day one. There’s a lot of ongoing work to solve this, but I’d say it’s definitely rough out there in 2023.

Aside from that, there are also problems of lock-in (e.g. meta-frameworks, bundlers, test runners), the ongoing legacy of IE11 (some folks are scarred for life; the last thing they want to do is #useThePlatform), and overall platform exhaustion (“I learned React, it works, I don’t want to learn something else”). Not everyone is going to be sold on web components, and I’m fine with that. The web is a big tent, and everybody is using it for different things; that’s part of what makes it so amazing.

Conclusion

Use web components. Or don’t use them. Or come back and check in again in a few years, when the features and web standards are a bit more fleshed out.

I think web components are cool, but I understand that not everyone feels the same way. I don’t feel like I need to go around evangelizing for their adoption. They’re just another tool in the toolbelt; the trick is leveraging their strengths while avoiding their pitfalls.

The thing I like about web components, and web standards in general, is that I get to outsource a bunch of boring problems to the browser. How do I compose components? How do I scope styles? How do I pass data around? Who cares – just take whatever the browser gives you. That way, I can spend more time on the problems that actually matter to my end-users, like performance, accessibility, security, etc.

Too often, in web development, I feel like I’m wrestling with incidental complexity that has nothing to do with the actual problem at hand. I’m wrangling npm dependencies, or debugging my state manager, or trying to figure out why my test runner isn’t playing nicely with my linter. Some people really enjoy this kind of stuff, and I find myself getting sucked into it sometimes too. But I think ultimately it’s a kind of fake-work that feels good but doesn’t accomplish much, because your end-user doesn’t care if your bundler is up-to-date with your TypeScript transpiler.

That said, in 2023, choosing web components comes with its own baggage of incidental complexity, such as the aforementioned problems of SSR and accessibility. Compromising on either of those things could actively harm your end-users in ways that actually matter to them, so the tradeoff may not be worth it to you.

I think the tradeoff is often worth it, but again, there are nuances here. “Use web components for what they’re good at” isn’t catchy, but it’s a good way to think about it in 2023.

Thanks to Westbrook Johnson for feedback on a draft of this blog post.

My talk on CSS runtime performance

A few months ago, I gave a talk on CSS performance at performance.now in Amsterdam. The recording is available online:

(You can also read the slides.)

This is one of my favorite talks I’ve ever given. It was the product of months (honestly, years) of research, prompted by a couple questions:

  1. What is the fastest way to implement scoped styles? (Surprisingly few people seem to ask this question.)
  2. Does using shadow DOM improve style performance? (Short answer: yes.)

To answer these questions (and more), I did a bunch of research into how browsers work under the hood. This included combing through old Servo discussions from 2013, reaching out to browser developers like Manuel Rego Casasnovas and Emilio Cobos Álvarez, reading browser PRs, and writing lots of benchmarks.

In the end, I’m pretty satisfied with the talk. My main goal was to shine a light on all the heroic work that browser vendors have done over the years to make CSS so performant. Much of this stuff is intricate and arcane (like Bloom filters), but I hoped that with some simple diagrams and animations, I could bring this work to life.

The two outcomes I’d love to see from this talk are:

  1. Web developers spend more time thinking about and measuring CSS performance. (Pssst, check out the SelectorStats section of my guide to Chrome tracing!)
  2. Browser vendors provide better DevTools to understand CSS performance. (In the talk, I pitch this as a SQL EXPLAIN for CSS.)

What I didn’t want to do in this talk was rain on anybody’s parade who is trying to do sophisticated things with CSS. More and more, I am seeing ambitious usage of new CSS features like :has and container queries, and I don’t want people to feel like they should avoid these techniques and limit themselves to classes and IDs. I just want web developers to consider the cost of CSS, and to get more comfortable with using the DevTools to understand which kinds of CSS patterns may be slowing down their website.

I also got some good feedback from browser DevTools folks after my talk, so I’m hopeful for the future of CSS performance. As techniques like shadow DOM and native CSS scoping become more widespread, it may even mitigate a lot of my worries about CSS perf. In any case, it was a fascinating topic to research, and I hope that folks were intrigued and entertained by my talk.

A beginner’s guide to Chrome tracing

I’ve been doing web performance for a while, so I’ve spent a lot of time in the Performance tab of the Chrome DevTools. But sometimes when you’re debugging a tricky perf problem, you have to go deeper. That’s where Chrome tracing comes in.

Chrome tracing (aka Chromium tracing) lets you record a performance trace that captures low-level details of what the browser is doing. It’s mostly used by Chromium engineers themselves, but it can also be helpful for web developers when a DevTools trace is not enough.

This post is a short guide on how to use this tool, from a web developer’s point of view. I’m not going to cover everything – just the bare minimum to get up and running.

Setup

First off, as described in this helpful post, you’re going to want a clean browser window. The tracing tool measures everything going on in the browser, including background tabs and extensions, which just adds unnecessary noise.

You can launch a fresh Chrome window using this command (on Linux):

google-chrome \
  --user-data-dir="$(mktemp -d)" --disable-extensions

Or on macOS:

/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome \
  --user-data-dir="$(mktemp -d)" --disable-extensions

Or if you’re lazy (like me), you can install a standalone browser like Chrome Canary and run that.

Record

Next, go to about:tracing in the URL bar. (chrome:tracing or edge:tracing will also work, depending on your browser.) You’ll see a screen like this:

Screenshot of tracing tool with arrow pointing at Record

Click “Record.”

Next, you’ll be given a bunch of options. Here’s where it gets interesting.

Screenshot of tracing tools showing Edit categories with an arrow pointing at it

Usually “Web developer” is a fine default. But sometimes you want extra information, which you can get by clicking “Edit categories.” Here are some of the “cheat codes” I’ve discovered:

  • Check blink.user_timing to show user timings (i.e. performance.measures) in the trace. This is incredibly helpful for orienting yourself in a complex trace.
  • Check blink.debug to get SelectorStats, i.e. stats on slow CSS selectors during style calculation.
  • Check v8.runtime_stats for low-level details on what V8 is doing.

Note that you probably don’t want to go in here and check boxes with wild abandon. That will just make the trace slower to load, and could crash the tab. Only check things you think you’ll actually be using.

Next, click “Record.”

Now, switch over to another tab and do whatever action you want to record – loading a page, clicking a button, etc. Note that if you’re loading a page, it’s a good idea to start from about:blank to avoid measuring the unload of the previous page.

When you’re done recording, switch back and click “Stop.”

Analyze

Screenshot of tracing tool showing arrows pointing at Processes, None, and the Renderer process

In the tracing UI, the first thing you’ll want to do is remove the noise. Click “Processes,” then “None,” then select only the process you’re interested in. It should say “Renderer” plus the title of the tab where you ran your test.

Moving around the UI can be surprisingly tricky. Here is what I usually do:

  • Use the WASD keys to move left, right, or zoom in and out. (If you’ve played a lot of first-person shooters, you should feel right at home.)
  • Click-and-drag on any empty space to pan around.
  • Use the mousewheel to scroll up and down. Use ⌥/Alt + mousewheel to zoom in and out.

You’ll want to locate the CrRendererMain thread. This is the main thread of the renderer process. Under “Ungrouped Measure,” you should see any user timings (i.e. performance.measures) that you took in the trace.

In this example, I’ve located the Document::updateStyle slice (i.e. style calculation), as well as the SelectorStats right afterward. Below, I have a detailed table that I can click to sort by various columns. (E.g. you can sort by the longest elapsed time.)

Screenshot of tracing tool with arrows pointing to CrRendererMain, UpdateStyle, SelectorStats, and table of selectors

Note that I have a performance.measure called “total” in the above trace. (You can name it whatever you want.)
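
For reference, a user timing like that is nothing fancy – it’s just a standard performance.mark/performance.measure pair wrapped around whatever you want to show up in the trace (the names here are arbitrary):

performance.mark('total-start');
// ... do the thing you want to trace here (a render, a query, a click handler) ...
performance.measure('total', 'total-start');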

General strategy

I mostly use Chrome tracing when there’s an unexplained span of time in the DevTools. Here are some cases where I’ve seen it come in handy:

  • Time spent in IndexedDB (the IndexedDB flag can be helpful here).
  • Time spent in internal subsystems, such as accessibility or spellchecking.
  • Understanding which CSS selectors are slowest (see SelectorStats above).

My general strategy is to first run the tool with the default settings (plus blink.user_timing, which I almost always enable). This alone will often tell you more than the DevTools would.

If that doesn’t provide enough detail, I try to guess which subsystem of the browser has a performance problem, and tick flags related to that subsystem when recording. (For instance, skia is related to rendering, blink_style and blink.invalidation are probably related to style invalidation, etc.) Unfortunately this requires some knowledge of Chromium’s internals, along with a lot of guesswork.

When in doubt, you can always file a bug on Chromium. As long as you have a consistent repro, and you can demonstrate that it’s a Chromium-only perf problem, then the Chromium engineers should be able to route it to the right team.

Conclusion

The Chrome tracing tool is incredibly complex, and it’s mostly designed for browser engineers. It can be daunting for a web developer to pick up and use. But with a little practice, it can be surprisingly helpful, especially in odd perf edge cases.

There is also a new UI called Perfetto that some may find easier to use. I’m a bit old-school, though, so I still prefer the old UI for now.

I hope this short guide was helpful if you ever find yourself stuck with a performance problem in Chrome and need more insight into what’s going on!

See also: “Chrome Tracing for Fun and Profit” by Jeremy Rose.

Style performance and concurrent rendering

I was fascinated recently by “Why we’re breaking up with CSS-in-JS” by Sam Magura. It’s a great overview of some of the benefits and downsides of the “CSS-in-JS” pattern, as implemented by various libraries in the React ecosystem.

What really piqued my curiosity, though, was a link to this guide by Sebastian Markbåge on potential performance problems with CSS-in-JS when using concurrent rendering, a new feature in React 18.

Here is the relevant passage:

In concurrent rendering, React can yield to the browser between renders. If you insert a new rule in a component, then React yields, the browser then have to see if those rules would apply to the existing tree. So it recalculates the style rules. Then React renders the next component, and then that component discovers a new rule and it happens again.

This effectively causes a recalculation of all CSS rules against all DOM nodes every frame while React is rendering. This is VERY slow.

This concept was new and confusing to me, so I did what I often do in these cases: I wrote a benchmark.

Let’s benchmark it!

This benchmark is similar to my previous shadow DOM vs style scoping benchmark, with one twist: instead of rendering all “components” in one go, we render each one in its own requestAnimationFrame. This is to simulate a worst-case scenario for React concurrent rendering – where React yields between each component render, allowing the browser to recalculate style and layout.

In this benchmark, I’m rendering 200 “components,” with three kinds of stylesheets: unscoped (i.e. the most unperformant CSS I can think of), scoped-ala-Svelte (i.e. adding classes to every selector), and shadow DOM.
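
The basic shape of the “pay-as-you-go” scenarios looks roughly like this – a simplified sketch, not the actual benchmark code (the CSS rule and markup are placeholders):

const NUM_COMPONENTS = 200;

function renderComponent(i) {
  // Inject a new <style> for this "component"...
  const style = document.createElement('style');
  // (placeholder rule – the real benchmark varies the selectors per scenario)
  style.textContent = `div.component-${i} > span { color: red; }`;
  document.head.appendChild(style);

  // ...and then its DOM nodes
  const component = document.createElement('div');
  component.className = `component-${i}`;
  component.innerHTML = '<span>hello</span>';
  document.body.appendChild(component);
}

function renderNext(i) {
  if (i === NUM_COMPONENTS) {
    performance.measure('total', 'start');
    return;
  }
  renderComponent(i);
  // Yield to the browser between components, forcing a fresh style/layout
  // pass on every frame – the worst case for concurrent rendering
  requestAnimationFrame(() => renderNext(i + 1));
}

performance.mark('start');
renderNext(0);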

The “unscoped” CSS tells the clearest story:

Screenshot of Chrome DevTools showing style/layout calculation costs steadily increasing over time

In this Chrome trace, you can see that the style calculation costs steadily increase as each component is rendered. This seems to be exactly what Markbåge is talking about:

When you add or remove any CSS rules, you more or less have to reapply all rules that already existed to all nodes that already existed. Not just the changed ones. There are optimizations in browsers but at the end of the day, they don’t really avoid this problem.

In other words: not only are we paying style costs as every component renders, but those costs actually increase over time.

If we batch all of our style insertions before the components render, though, then we pay much lower style costs on each subsequent render:

Screenshot of Chrome DevTools, showing low and roughly consistent style/layout calculation costs over time

To me, this is similar to layout thrashing. The main difference is that, with “classic” layout thrashing, you’re forcing a style/layout recalculation by calling some explicit API like getBoundingClientRect or offsetLeft. Whereas in this case, you’re not explicitly invoking a recalc, but instead implicitly forcing a recalc by yielding to the browser’s normal style/layout rendering loop.
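
For comparison, “classic” layout thrashing looks something like this hypothetical loop, where every read of offsetHeight forces the browser to synchronously flush the layout work invalidated by the previous write:

for (const element of document.querySelectorAll('.item')) {
  element.style.width = '200px';     // write – invalidates layout
  console.log(element.offsetHeight); // read – forces a synchronous layout
}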

I’ll also note that the second scenario could still be considered “layout thrashing” – the browser is still doing style/layout work on each frame. It’s just doing much less, because we’ve only invalidated the DOM elements and not the CSS rules.

Update: This benchmark does not perfectly simulate how React renders DOM nodes – see below for a slightly tweaked benchmark. The conclusion is still largely the same.

Here are the benchmark results for multiple browsers (200 components, median of 25 samples, 2014 Mac Mini):

Chart data, see table below

Scenario Chrome 106 (ms) Firefox 106 (ms) Safari 16 (ms)
Unscoped 20807.3 13589 14958
Unscoped – styles in advance 3392.5 3357 3406
Scoped 3330 3321 3330
Scoped – styles in advance 3358.9 3333 3339
Shadow DOM 3366.4 3326 3327

As you can see, injecting the styles in advance is much faster than the pay-as-you-go system: 20.8s vs 3.4s in Chrome (and similar for other browsers).

It also turns out that using scoped CSS mitigates the problem – there is little difference between upfront and per-component style injection. And shadow DOM doesn’t have a concept of “upfront styles” (the styles are naturally scoped and attached to each component), so it benefits accordingly.

Is scoping a panacea?

Note though, that scoping only mitigates the problem. If we increase the number of components, we start to see the same performance degradation:

Screenshot of Chrome DevTools showing style/layout calculation costs steadily getting worse over time, although not as bad as in the other screenshot

Here are the benchmark results for 500 components (skipping “unscoped” this time around – I didn’t want to wait!):

Chart data, see table below

Scenario Chrome 106 (ms) Firefox 106 (ms) Safari 16 (ms)
Scoped 12490.6 8972 11059
Scoped – styles in advance 8413.4 8824 8561
Shadow DOM 8441.6 8949 8695

So even with style scoping, we’re better off injecting the styles in advance. And shadow DOM also performs better than “pay-as-you-go” scoped styles, presumably because it’s a browser-native scoping mechanism (as opposed to relying on the browser’s optimizations for class selectors). The exception is Firefox, which (in a recurring theme) seems to have some impressive optimizations in this area.

Is this something browsers could optimize more? Possibly. I do know that Chromium already weighs some tradeoffs with optimizing for upfront rendering vs re-rendering when stylesheets change. And Firefox seems to perform admirably with whatever CSS we throw at it.

So if this “inject and yield” pattern were prevalent enough on the web, then browsers might be incentivized to target it. But given that React concurrent rendering is somewhat new-ish, and given that the advice from React maintainers is already to batch style insertions, this seems somewhat unlikely to me.

Considering concurrent rendering

Unmentioned in either of the above posts is that this problem largely goes away if you’re not using concurrent rendering. If you do all of your DOM writes in one go, then you can’t layout thrash unless you’re explicitly calling APIs like getBoundingClientRect – which would be something for component authors to avoid, not for the framework to manage.

(Of course, in a long-lived web app, you could still have steadily increasing style costs as new CSS is injected and new components are rendered. But it seems unlikely to be quite as severe as the “rAF-based thrashing” above.)

I assume this, among other reasons, is why many non-React framework authors are skeptical of concurrent rendering. For instance, here’s Evan You (maintainer of Vue):

The pitfall here is not realizing that time slicing can only slice “slow render” induced by the framework – it can’t speed up DOM insertions or CSS layout. Also, staying responsive != fast. The user could end up waiting longer overall due to heavy scheduling overhead.

(Note that “time slicing” was the original name for concurrent rendering.)

Or for another example, here’s Rich Harris (maintainer of Svelte):

It’s not clear to me that [time slicing] is better than just having a framework that doesn’t have these bottlenecks in the first place. The best way to deliver a good user experience is to be extremely fast.

I feel a bit torn on this topic. I’ve seen the benefits of a “time slicing” or “debouncing” approach even when building Svelte components – for instance, both emoji-picker-element and Pinafore use requestIdleCallback (as described in this post) to improve responsiveness when typing into the text inputs. I found this improved the “feel” when typing, especially on a slower device (e.g. using Chrome DevTools’ 6x CPU throttling), even though both were written in Svelte. Svelte’s JavaScript may be fast, but the fastest JavaScript is no JavaScript at all!

That said, I’m not sure if this is something that should be handled by the framework rather than the component author. Yielding to the browser’s rendering loop is very useful in certain perf-sensitive scenarios (like typing into a text input), but in other cases it can worsen the overall performance (as we see with rendering components and their styles).

Is it worth it for the framework to make everything concurrent-capable and try to get the best of both worlds? I’m not so sure. Although I have to admire React for being bold enough to try.

Afterword

After this post was published, Mark Erikson wrote a helpful comment pointing out that inserting DOM nodes is not really something React does during “renders” (at least, in the context of concurrent rendering). So the benchmark would be more accurate if it inserted <style> nodes (as a “misbehaving” CSS-in-JS library would), but not component nodes, before yielding to the browser.

So I modified the benchmark to have a separate mode that delays inserting component DOM nodes until all components have “rendered.” To make it a bit fairer, I also pre-inserted the same number of initial components (but without style) – otherwise, the injected CSS rules wouldn’t have many DOM nodes to match against, so it wouldn’t be terribly representative of a real-world website.

As it turns out, this doesn’t really change the conclusion – we still see gradually increasing style costs in a “layout thrashing” pattern, even when we’re only inserting <style>s between rAFs:

Chrome DevTools screenshot showing gradually increasing style costs over time

The main difference is that, when we front-load the style injections, the layout thrashing goes away entirely, because each rAF tick is neither reading from nor writing to the DOM. Instead, we have one big style cost at the start (when injecting the styles) and another at the end (when injecting the DOM nodes):

Chrome DevTools screenshot showing large purple style blocks at the beginning and end and little JavaScript slices in the middle

(In the above screenshot, the occasional purple slices in the middle are “Hit testing” and “Pre-paint,” not style or layout calculation.)

Note that this is still a teensy bit inaccurate, because now our rAF ticks aren’t doing anything, since this benchmark isn’t actually using React or virtual DOM. In a real-world example, there would be some JavaScript cost to running a React component’s render() function.

Still, we can run the modified benchmark against the various browsers, and see that the overall conclusion has not changed much (200 components, median of 25 samples, 2014 Mac Mini):

Chart data, see table below

Scenario Chrome 106 (ms) Firefox 106 (ms) Safari 16 (ms)
Unscoped 26180 17622 17349
Unscoped – styles in advance 3958.3 3663 3945
Scoped 3394.6 3370 3358
Scoped – styles in advance 3476.7 3374 3368
Shadow DOM 3378 3370 3408

So the lesson still seems to be: invalidating global CSS rules frequently is a performance anti-pattern. (Even more so than inserting DOM nodes frequently!)

Afterword 2

I asked Emilio Cobos Álvarez about this, and he gave some great insights from the Firefox perspective:

We definitely have optimizations for that […] but the worst case is indeed “we restyle the whole document again”.

Some of the optimizations Firefox has are quite clever. For example, they optimize appending stylesheets (i.e. appending a new <style> to the <head>) more heavily than inserting (i.e. injecting a <style> between other <style>s) or deleting (i.e. removing a <style>).

Emilio explains why:

Since CSS is source-order dependent, insertions (and removals) cause us to rebuild all the relevant data structures to preserve ordering, while appends can be processed more easily.

Some of this work was apparently done as part of optimizations for Facebook.com back in 2017. I assume Facebook was appending a lot of <style>s, but not inserting or deleting (which makes sense – this is the dominant pattern I see in JavaScript frameworks today).
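
In code terms, the difference is roughly between these two ways of adding a stylesheet – an illustration of the pattern, not of Firefox internals:

function makeStyle(css) {
  const style = document.createElement('style');
  style.textContent = css;
  return style;
}

// Appending after all existing stylesheets – the optimized path
document.head.appendChild(makeStyle('.foo { color: red; }'));

// Inserting before an existing <style> – the source order changes, so the
// ordered data structures have to be rebuilt (per Emilio's explanation above)
const firstStyle = document.head.querySelector('style');
document.head.insertBefore(makeStyle('.bar { color: blue; }'), firstStyle);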

Firefox also has some specific optimizations for classes, IDs, and tag names (aka “local names”). But despite their best efforts, there are cases where everything needs to be marked as invalid.

So as a web developer, keeping a mental model of “when styles change, everything must be recalculated” is still accurate, at least for the worst case.