Archive for June, 2022

SPAs: theory versus practice

I’ve been thinking a lot recently about Single-Page Apps (SPAs) and Multi-Page Apps (MPAs). I’ve been thinking about how MPAs have improved over the years, and where SPAs still have an edge. I’ve been thinking about how complexity creeps into software, and why a developer may choose a more complex but powerful technology at the expense of a simpler but less capable technology.

I think this core dilemma – complexity vs simplicity, capability vs maintainability – is at the heart of a lot of the debates about web app architecture. Unfortunately, these debates are so often tied up in other factors (a kind of web dev culture war, Twitter-stoked conflicts, maybe even a generational gap) that it can be hard to see clearly what the debate is even about.

At the risk of grossly oversimplifying things, I propose that the core of the debate can be summed up by these truisms:

  1. The best SPA is better than the best MPA.
  2. The average SPA is worse than the average MPA.

The first statement should be clear to most seasoned web developers. Show me an MPA, and I can show you how to make it better with JavaScript. Added too much JavaScript? I can show you some clever ways to minimize, defer, and multi-thread that JavaScript. Ran into some bugs, because now you’ve deviated from the browser’s built-in behavior? There are always ways to fix it! You’ve got JavaScript.

Whereas with an MPA, you are delegating some responsibility to the browser. Want to animate navigations between pages? You can’t (yet). Want to avoid the flash of white? You can’t, until Chrome fixes it (and it’s not perfect yet). Want to avoid re-rendering the whole page, when there’s only a small subset that actually needs to change? You can’t; it’s a “full page refresh.”

My second truism may be more controversial than the first. But I think time and experience have shown that, whatever the promises of SPAs, the reality has been less convincing. It’s not hard to find examples of poorly-built SPAs that score badly on a variety of metrics (performance, accessibility, reliability), and which could have been built better and more cheaply as a bog-standard MPA.

Example: subsequent navigations

To illustrate, let’s consider one of the main value propositions of an SPA: making subsequent navigations faster.

Rich Harris recently offered a comparison between the SvelteKit website (an SPA) and the Astro website (an MPA), showing that page navigations on the Svelte site were faster.

Now, to be clear, this is a bit of an unfair comparison: the Svelte site is preloading content when you hover over links, so there’s no network call by the time you click. (Nice optimization!) Whereas the Astro site is not using a Service Worker or other offlining – if you throttle to 3G, it’s even slower relative to the Svelte site.

But I totally believe Rich is right! Even with a Service Worker, Astro would have a hard time beating SvelteKit. The amount of DOM being updated here is small and static, and doing the minimal updates in JavaScript should be faster than asking the browser to re-render the full HTML. It’s hard to beat element.innerHTML = '...'.
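
To make this concrete, here is a rough sketch of what a minimal client-side navigation can look like: intercept the click, fetch the new page, and swap in only the content that changed. (This is illustrative only, not how SvelteKit actually implements it, and a real router would also handle popstate, scroll, and focus.)

// Minimal sketch of a client-side navigation, assuming same-origin links
// and a single <main> region that differs between pages.
document.addEventListener('click', async (event) => {
  const link = event.target.closest?.('a');
  if (!link || link.origin !== location.origin) return; // let the browser handle it
  event.preventDefault();

  const response = await fetch(link.href);
  const html = await response.text();
  const newDocument = new DOMParser().parseFromString(html, 'text/html');

  // Swap only the part of the page that actually changed
  document.querySelector('main').innerHTML = newDocument.querySelector('main').innerHTML;
  document.title = newDocument.title;
  history.pushState({}, '', link.href);
});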

However, in many ways this site represents the ideal conditions for an SPA navigation: it’s small, it’s lightweight, it’s built by the kind of experts who build their own JavaScript framework, and those experts are also keen to get performance right – since this website is, in part, a showcase for the framework they’re offering. What about real-world websites that aren’t built by JavaScript framework authors?

Anthony Ricaud recently gave a talk (in French – apologies to non-Francophones) where he analyzed the performance of real-world SPAs. In the talk, he asks: What if these sites used standard MPA navigations?

To answer this, he built a proxy that strips the site of its first-party JavaScript (leaving the kinds of ads and trackers that, sadly, many teams are not allowed to forgo), as well as another version of the proxy that doesn’t strip any JavaScript. Then, he scripted WebPageTest to click an internal link, measuring the load times for both versions (on throttled 4G).

So which was faster? Well, out of the three sites he tested, on both mobile (Moto G4) and desktop, the MPA was either just as fast or faster, every time. In some cases, the WebPageTest filmstrips even showed that the MPA version was faster by several seconds. (Note again: these are subsequent navigations.)

On top of that, the MPA sites gave immediate feedback to the user when clicking – showing a loading indicator in the browser chrome. Whereas some of the SPAs didn’t even manage to show a “skeleton” screen before the MPA had already finished loading.

Screenshot from conference talk showing a speaker on the left and a WebPageTest filmstrip on the right. The filmstrip compares two sites: the first takes 5.5 seconds and the second takes 2.5 seconds

Screenshot from Anthony Ricaud’s talk. The SPA version is on top (5.5s), and the MPA version is on bottom (2.5s).

Now, I don’t think this experiment is perfect. As Anthony admits, removing inline <script>s removes some third-party JavaScript as well (the kind that injects itself into the DOM). Also, removing first-party JavaScript removes some non-SPA-related JavaScript that you’d need to make the site interactive, and removing any render-blocking inline <script>s would inherently improve the visual completeness time.

Even with a perfect experiment, there are a lot of variables that could change the outcome for other sites:

  • How fast is the SSR?
  • Is the HTML streamed?
  • How much of the DOM needs to be updated?
  • Is a network request required at all?
  • What JavaScript framework is being used?
  • How fast is the client CPU?
  • Etc.

Still, it’s pretty gobsmacking that JavaScript was slowing these sites down, even in the one case (subsequent navigations) where JavaScript should be making things faster.

Exhausted developers and clever developers

Now, let’s return to my truisms from the start of the post:

  1. The best SPA is better than the best MPA.
  2. The average SPA is worse than the average MPA.

The cause of so much debate, I think, is that two groups of developers may look at this situation, agree on the facts on the ground, but come to two different conclusions:

“The average SPA sucks? Well okay, I should stop building SPAs then. Problem solved.” – Exhausted developer

 

“The average SPA sucks? That’s just because people haven’t tried hard enough! I can think of 10 ways to fix it.” – Clever developer

Let’s call these two archetypes the exhausted developer and the clever developer.

The exhausted developer has had enough of managing the complexity of “modern” websites and web applications. Too many build tools, too many code paths, too much to think about and maintain. They have JavaScript fatigue. Throw it all away and simplify!

The clever developer is similarly frustrated by the state of modern web development. But they also deeply understand how the web works. So when a tool breaks or a framework does something in a sub-optimal way, it irks them, because they can think of a better way. Why can’t a framework or a tool fix this problem? So they set out to find a new tool, or to build it themselves.

The thing is, I think both of these perspectives are right. Clever developers can always improve upon the status quo. Exhausted developers can always save time and effort by simplifying. And one group can even help the other: for instance, maybe Parcel is approachable for those exhausted by Webpack, but a clever developer had to go and build Parcel first.

Conclusion

The disparity between the best and the average SPA has been around since the birth of SPAs. In the mid-2000s, people wanted to build SPAs because they saw how amazing GMail was. What they didn’t consider is that Google had a crack team of experts monitoring every possible problem with SPAs, right down to esoteric topics like memory leaks. (Do you have a team like that?)

Ever since then, JavaScript framework and tooling authors have been trying to democratize SPA tooling, bringing us the kinds of optimizations previously only available to the Googles and the Facebooks of the world. Their intentions have been admirable (I would put my own fuite on that pile), but I think it’s fair to say the results have been mixed.

An expert developer can stand up on a conference stage and show off the amazing scores for their site (perfect performance! perfect accessibility! perfect SEO!), and then an excited conference-goer returns to their team, convinces them to use the same tooling, and two years later they’ve built a monstrosity. When this happens enough times, the same conference-goer may start to distrust the next dazzling demo they see.

And yet… the web dev community marches forward. Today I can grab any number of “starter” app toolkits and build something that comes out-of-the-box with code-splitting, Service Workers, tree-shaking, a thousand different little micro-optimizations that I don’t even have to know the names of, because someone else has already thought of it and gift-wrapped it for me. That is a miracle, and we should be grateful for it.

Given enough innovation in this space, it is possible that, someday, the average SPA could be pretty great. If it came batteries-included with proper scroll, focus, and screen reader announcements, tooling to identify performance problems (including memory leaks), progressive DOM rendering (e.g. Jake Archibald’s hack), and a bunch of other optimizations, it’s possible that developers would fall into the “pit of success” and consistently make SPAs that outclass the equivalent MPA. I remain skeptical that we’ll get there, and even the best SPA would still have problems (complexity, performance on slow clients, etc.), but I can’t fault people for trying.

At the same time, browsers never stop taking the lessons from userland and upstreaming them into the browser itself, giving us more lines of code we can potentially delete. This is why it’s important to periodically re-evaluate the assumptions baked into our tooling.

Today, I think the core dilemma between SPAs and MPAs remains unresolved, and will maybe never be resolved. Both SPAs and MPAs have their strengths and weaknesses, and the right tool for the job will vary with the size and skills of the team and the product they’re trying to build. It will also vary over time, as browsers evolve. The important thing, I think, is to remain open-minded, skeptical, and analytical, and to accept that everything in software development has tradeoffs, and none of those tradeoffs are set in stone.

Style scoping versus shadow DOM: which is fastest?

Update: this post was updated with some new benchmark numbers in October 2022.

Last year, I asked the question: Does shadow DOM improve style performance? I didn’t give a clear answer, so perhaps it’s no surprise that some folks weren’t sure what conclusion to draw.

In this post, I’d like to present a new benchmark that hopefully provides a more solid answer.

TL;DR: My new benchmark largely confirmed my previous research, and shadow DOM comes out as the most consistently performant option. Class-based style scoping slightly beats shadow DOM in some scenarios, but in others it’s much less performant. Firefox, thanks to its multi-threaded style engine, is much faster than Chrome or Safari.

Shadow DOM and style performance

To recap: shadow DOM has some theoretical benefits to style calculation, because it allows the browser to work with a smaller DOM size and smaller CSS rule set. Rather than needing to compare every CSS rule against every DOM node on the page, the browser can work with smaller “sub-DOMs” when calculating style.

However, browsers have a lot of clever optimizations in this area, and userland “style scoping” solutions have emerged (e.g. Vue, Svelte, and CSS Modules) that effectively hook into these optimizations. The way they typically do this is by adding a class or an attribute to the CSS selector: e.g. * { color: red } becomes *.xxx { color: red }, where xxx is a randomly-generated token unique to each component.
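
To make the difference concrete, here is a minimal sketch of the same “component” styled both ways (the class name and markup are invented for illustration):

// Userland scoping: the selector is rewritten to include a per-component class.
const scopedStyle = document.createElement('style');
scopedStyle.textContent = '.button.xxx { color: red }'; // was: .button { color: red }
document.head.appendChild(scopedStyle);

const scopedComponent = document.createElement('div');
scopedComponent.innerHTML = '<button class="button xxx">Click me</button>';
document.body.appendChild(scopedComponent);

// Shadow DOM scoping: the selector stays as-is, but only applies inside this shadow root.
const shadowComponent = document.createElement('div');
const shadowRoot = shadowComponent.attachShadow({ mode: 'open' });
shadowRoot.innerHTML =
  '<style>.button { color: red }</style>' +
  '<button class="button">Click me</button>';
document.body.appendChild(shadowComponent);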

After crunching the numbers, my post showed that class-based style scoping was actually the overall winner. But shadow DOM wasn’t far behind, and it was the more consistently fast option.

These nuances led to a somewhat mixed reaction. For instance, here’s one common response I saw (paraphrasing):

The fastest option overall is class-based scoped styles, à la Svelte or CSS Modules. So shadow DOM isn’t really that great.

But looking at the same data, you could reach another, totally reasonable, conclusion:

With shadow DOM, the performance stays constant instead of scaling with the size of the DOM or the complexity of the CSS. Shadow DOM allows you to use whatever CSS selectors you want and not worry about performance.

Part of it may have been people reading into the data what they wanted to believe. If you already dislike shadow DOM (or web components in general), then you can read my post and conclude, “Wow, shadow DOM is even more useless than I thought.” Or if you’re a web components fan, then you can read my post and think, “Neat, shadow DOM can improve performance too!” Data is in the eye of the beholder.

To drive this point home, here’s the same data from my post, but presented in a slightly different way:

Chart image, see table below for the same data

This is 1,000 components, 10 rules per component.

Selector performance (ms)        | Chrome | Firefox | Safari
Class selectors                  | 58.5   | 22      | 56
Attribute selectors              | 597.1  | 143     | 710
Class selectors – shadow DOM     | 70.6   | 30      | 61
Attribute selectors – shadow DOM | 71.1   | 30      | 81

As you can see, the case you really want to avoid is the second one – bare attribute selectors. Inside of the shadow DOM, though, they’re fine. Class selectors do beat shadow DOM overall, but only by a rounding error.

My post also showed that more complex selectors are consistently fast inside of the shadow DOM, even if they’re much slower at the global level. This is exactly what you would expect, given how shadow DOM works – the real surprise is just that shadow DOM doesn’t handily win every category.

Re-benchmarking

It didn’t sit well with me that my post didn’t draw a firm conclusion one way or the other. So I decided to benchmark it again.

This time, I tried to write a benchmark to simulate a more representative web app. Rather than focusing on individual selectors (ID, class, attribute, etc.), I tried to compare a userland “scoped styles” implementation against shadow DOM.

My new benchmark generates a DOM tree based on the following inputs:

  • Number of “components” (web components are not used, since this benchmark is about shadow DOM exclusively)
  • Elements per component (with a random DOM structure, with some nesting)
  • CSS rules per component (randomly generated, with a mix of tag, class, attribute, :not(), and :nth-child() selectors, and some descendant and compound selectors)
  • Classes per component
  • Attributes per component

To find a good representative for “scoped styles,” I chose Vue 3’s implementation. My previous post showed that Vue’s implementation is not as fast as that of Svelte or CSS Modules, since it uses attributes instead of classes, but I found Vue’s code to be easier to integrate. To make things a bit fairer, I added the option to use classes rather than attributes.

One subtlety of Vue’s style scoping is that it does not scope ancestor selectors. For instance:

/* Input */
div div {}

/* Output - Vue */
div div[data-v-xxx] {}

/* Output - Svelte */
div.svelte-xxx div.svelte-xxx {}

(Here is a demo in Vue and a demo in Svelte.)

Technically, Svelte’s implementation is more optimal, not only because it uses classes rather than attributes, but because it can rely on the Bloom filter optimization for ancestor lookups (e.g. div.svelte-xxx div.svelte-xxx, with .svelte-xxx on the ancestor). However, I kept the Vue implementation because 1) this analysis is relevant to Vue users at least, and 2) I didn’t want to test every possible permutation of “scoped styles.” Adding the “class” optimization is enough for this blog post – perhaps the “ancestor” optimization can come in future work. (Update: this is now covered below.)

Note: In benchmark after benchmark, I’ve seen that class selectors are typically faster than attribute selectors – sometimes by a lot, sometimes by a little. From the web developer’s perspective, it may not be obvious why. Part of it is just browser vendor priorities: for instance, WebKit invented the Bloom filter optimization in 2011, but originally it only applied to tags, classes, and IDs. They expanded it to attributes in 2018, and Chrome and Firefox followed suit in 2021 when I filed these bugs on them. Perhaps something about attributes also makes them intrinsically harder to optimize than classes, but I’m not a browser developer, so I won’t speculate.

Methodology

I ran this benchmark on a 2021 MacBook Pro (M1), running macOS Monterey 12.4. The M1 is perhaps not ideal for this, since it’s a very fast computer, but I used it because it’s the device I had, and it can run all three of Chrome, Firefox, and Safari. This way, I can get comparable numbers on the same hardware.

In the test, I used the following parameters:

Parameter               | Value
Number of components    | 1000
Elements per component  | 10
CSS rules per component | 10
Classes per element     | 2
Attributes per element  | 2

I chose these values to try to generate a reasonable “real-world” app, while also making the app large enough and interesting enough that we’d actually get some useful data out of the benchmark. My target is less of a “static blog” and more of a “heavyweight SPA.”

There are certainly more inputs I could have added to the benchmark: for instance, DOM depth. As configured, the benchmark generates a DOM with a maximum depth of 29 (measured using this snippet). Incidentally, this is a decent approximation of a real-world app – YouTube measures 28, Reddit 29, and Wikipedia 17. But you could certainly imagine more heavyweight sites with deeper DOM structures, which would tend to spend more time in descendant selectors (outside of shadow DOM, of course – descendant selectors cannot cross shadow boundaries).
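
For reference, something along these lines can be used to measure maximum DOM depth (this is a sketch, not necessarily the exact snippet linked above, and it does not descend into shadow roots):

// Walk the DOM and report the depth of the deepest element.
function getMaxDomDepth(root = document.documentElement) {
  let maxDepth = 0;
  const walk = (element, depth) => {
    maxDepth = Math.max(maxDepth, depth);
    for (const child of element.children) {
      walk(child, depth + 1);
    }
  };
  walk(root, 1);
  return maxDepth;
}

console.log(getMaxDomDepth()); // e.g. 29 for the generated benchmark DOM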

For each measurement, I took the median of 5 runs. I didn’t bother to refresh the page between each run, because it didn’t seem to make a big difference. (The relevant DOM was being blown away every time.) I also didn’t randomize the stylesheets, because the browsers didn’t seem to be doing any caching that would require randomization. (Browsers have a cache for stylesheet parsing, as I discussed in this post, but not for style calculation, insofar as it matters for this benchmark anyway.)

Update: I realized this comment was a bit blasé, so I re-ran the benchmark with a fresh browser session between each sample, just to make sure the browser cache wasn’t affecting the numbers. You can find those numbers at the end of the post. (Spoiler: no big change.)

Although the benchmark has some randomness, I used random-seedable with a consistent seed to ensure reproducible results. (Not that the randomness was enough to really change the numbers much, but I’m a stickler for details.)

The benchmark uses a requestPostAnimationFrame polyfill to measure style/layout/paint performance (see this post for details). To focus on style performance only, a DOM structure with only absolute positioning is used, which minimizes the time spent in layout and paint.
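
The basic idea of the polyfill, in simplified form, is to queue a task right after the next frame’s rendering work, so the measured time includes style calculation (plus layout and paint). Here is a rough sketch, where buildBenchmarkDom() is a stand-in for the benchmark’s own setup code:

// Simplified requestPostAnimationFrame: the callback runs in a task queued
// after the next frame has been rendered.
function requestPostAnimationFrame(callback) {
  requestAnimationFrame(() => {
    setTimeout(callback, 0);
  });
}

const start = performance.now();
buildBenchmarkDom(); // hypothetical: inserts the generated components and styles
requestPostAnimationFrame(() => {
  performance.measure('total', { start, end: performance.now() });
  console.log('total', performance.now() - start);
});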

And just to prove that the benchmark is actually measuring what I think it’s measuring, here’s a screenshot of the Chrome DevTools Performance tab:

Screenshot of Chrome DevTools showing a large amount of time taken up by the User Timing called "total" with most of that containing a time slice called "Recalculate style"

Note that the measured time (“total”) is mostly taken up by “Recalculate Style.”

Results

When discussing the results, it’s much simpler to go browser-by-browser, because each one has different quirks.

One of the things I like about analyzing style performance is that I see massive differences between browsers. It’s one of those areas of browser performance that seems really unsettled, with lots of work left to do.

That is… unless you’re Firefox. I’m going to start off with Firefox, because it’s the biggest outlier out of the three major browser engines.

Firefox

Firefox’s Stylo engine is fast. Like, really fast. Like, so fast that, if every browser were like Firefox, there would be little point in discussing style performance, because it would be a bit like arguing over the fastest kind of for-loop in JavaScript. (I.e., interesting minutia, but irrelevant except for the most extreme cases.)

In almost every style calculation benchmark I’ve seen over the past five years, Firefox smokes every other browser engine to the point where it’s really in a class of its own. Whereas other browsers may take over 1,000ms in a given scenario, Firefox will take ~100ms for the same scenario on the same hardware.

So keep in mind that, with Firefox, we’re going to be talking about really small numbers. And the differences between them are going to be even smaller. But here they are:

Chart data, see details in table below

Scenario             | Firefox 101
Scoping – classes    | 30
Scoping – attributes | 38
Shadow DOM           | 26
Unscoped             | 114

Note that, in this benchmark, the first three bars are measuring roughly the same thing – you end up with the same DOM with the same styles. The fourth case is a bit different – all the styles are purely global, with no scoping via classes or attributes. It’s mostly there as a comparison point.

My takeaway from the Firefox data is that scoping with either classes, attributes, or shadow DOM is fine – they’re all pretty fast. And as I mentioned, Firefox is quite fast overall. As we move on to other browsers, you’ll see how the performance numbers get much more varied.

Chrome

The first thing you should notice about Chrome’s data is how much higher the y-axis is compared to Firefox. With Firefox, we were talking about ~100ms at the worst, whereas now with Chrome, we’re talking about an order of magnitude higher: ~1,000ms. (Don’t feel bad for Chrome – the Safari numbers will look pretty similar.)

Chart data, see details in table below

Scenario             | Chrome 102
Scoping – classes    | 357
Scoping – attributes | 614
Shadow DOM           | 49
Unscoped             | 1022

Initially, the Chrome data tells a pretty simple story: shadow DOM is clearly the fastest, followed by style scoping with classes, followed by style scoping with attributes, followed by unscoped CSS. So the message is simple: use shadow DOM, and if you can’t, use classes instead of attributes for scoping.

I noticed something interesting with Chrome, though: the performance numbers are vastly different for these two cases:

  • 1,000 components: insert 1,000 different <style>s into the <head>
  • 1,000 components: concatenate those styles into one big <style>
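
In code, the difference between the two cases is roughly this (allComponentCss is a stand-in for the 1,000 generated CSS strings):

// Case 1: one <style> per component – 1,000 separate style elements
for (const componentCss of allComponentCss) {
  const style = document.createElement('style');
  style.textContent = componentCss;
  document.head.appendChild(style);
}

// Case 2: the same CSS concatenated into a single <style>
const combinedStyle = document.createElement('style');
combinedStyle.textContent = allComponentCss.join('\n');
document.head.appendChild(combinedStyle);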

As it turns out, this simple optimization greatly improves the Chrome numbers:

Chart data, see details in table below

Scenario   | Chrome 102 – separate styles | Chrome 102 – concatenated
Classes    | 357                          | 48
Attributes | 614                          | 43

When I first saw these numbers, I was confused. I could understand this optimization in terms of reducing the cost of DOM insertions. But we’re talking about style calculation – not DOM API performance. In theory, it shouldn’t matter whether there are 1,000 stylesheets or one big stylesheet. And indeed, Firefox and Safari show no difference between the two:

Chart data, see details in table below

Scenario   | Firefox 101 – separate styles | Firefox 101 – concatenated
Classes    | 30                            | 29
Attributes | 38                            | 38

Chart data, see details in table below

Scenario   | Safari 15.5 – separate styles | Safari 15.5 – concatenated
Classes    | 75                            | 73
Attributes | 812                           | 820

This behavior was curious enough that I filed a bug on Chromium. According to the Chromium engineer who responded (thank you!), this is because of a design decision to trade off some initial performance in favor of incremental performance when stylesheets are modified or added. (My benchmark is a bit unfair to Chrome, since it only measures the initial calculation. A good idea for a future benchmark!)

This is actually a pretty interesting data point for JavaScript framework and bundler authors. It seems that, for Chromium anyway, the ideal technique is to concatenate stylesheets similarly to how JavaScript bundlers do code-splitting – i.e. trying to concatenate as much as possible, while still splitting in some cases to optimize for caching across routes. (Or you could go full inline and just put one big <style> on every page.) Keep in mind, though, that this is a peculiarity of Chromium’s current implementation, and it could go away at any moment if Chromium decides to change it.

In terms of the benchmark, though, it’s not clear to me what to do with this data. You might imagine that it’s a simple optimization for a JavaScript framework (or meta-framework) to just concatenate all the styles together, but it’s not always so straightforward. When a component is mounted, it may call getComputedStyle() on its own DOM nodes, so batching up all the style insertions until after a microtask is not really feasible. Some meta-frameworks (such as Nuxt and SvelteKit) leverage a bundler to concatenate the styles and insert them before the component is mounted, but it feels a bit unfair to depend on that for the benchmark.

To me, this is one of the core advantages of shadow DOM – you don’t have to worry if your bundler is configured correctly or if your JavaScript framework uses the right kind of style scoping. Shadow DOM is just performant, all of the time, full stop. That said, here is the Chrome comparison data with the concatenation optimization applied:

Chart data, see details in table below

Scenario             | Chrome 102 (with concatenation optimization)
Scoping – classes    | 48
Scoping – attributes | 43
Shadow DOM           | 49
Unscoped             | 1022

The first three are close enough that I think it’s fair to say that all of the three scoping methods (class, attribute, and shadow DOM) are fast enough.

Note: You may wonder if Constructable Stylesheets would have an impact here. I tried a modified version of the benchmark that uses these, and didn’t observe any difference – Chrome showed the same behavior for concatenation vs splitting. This makes sense, as none of the styles are duplicated, which is the main use case Constructable Stylesheets are designed for. I have found elsewhere, though, that Constructable Stylesheets are more performant than <style> tags in terms of DOM API performance, if not style calculation performance (e.g. see here, here, and here).
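
For reference, the Constructable Stylesheets variant looks roughly like this (the CSS string is just a placeholder):

// Parse the CSS once, then adopt the same sheet wherever it's needed.
const sheet = new CSSStyleSheet();
sheet.replaceSync('.button.xxx { color: red }');

// Adopt at the document level...
document.adoptedStyleSheets = [...document.adoptedStyleSheets, sheet];

// ...or inside a shadow root, where one sheet object can be shared across
// many components without re-parsing (the deduplication case they're designed for).
const host = document.createElement('div');
const shadowRoot = host.attachShadow({ mode: 'open' });
shadowRoot.adoptedStyleSheets = [sheet];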

Safari

In our final tour of browsers, we arrive at Safari:

Chart data, see details in table below

Scenario             | Safari 15.5
Scoping – classes    | 75
Scoping – attributes | 812
Shadow DOM           | 94
Unscoped             | 840

To me, the Safari data is the easiest to reason about. Class scoping is fast, shadow DOM is fast, and unscoped CSS is slow. The one surprise is just how slow attribute selectors are compared to class selectors. Maybe WebKit has some more optimizations to do in this space – compared to Chrome and Firefox, attributes are just a much bigger performance cliff relative to classes.

This is another good example of why class scoping is superior to attribute scoping. It’s faster in all the engines, but the difference is especially stark in Safari. (Or you could use shadow DOM and not worry about it at all.)

Update: shortly after this post was published, WebKit made an optimization to attribute selectors. This seems to eliminate the perf cliff: in Safari Technology Preview 152 (Safari 16.0, WebKit 17615.1.2.3), the benchmark time for attributes drops to 77ms, which is only marginally slower than classes at 74ms (taking the median of 15 samples).

Conclusion

Performance shouldn’t be the main reason you choose a technology like scoped styles or shadow DOM. You should choose it because it fits well with your development paradigm, it works with your framework of choice, etc. Style performance usually isn’t the biggest bottleneck in a web application, although if you have a lot of CSS or a large DOM size, then you may be surprised by the amount of “Recalculate Style” costs in your next performance trace.

One can also hope that someday browsers will advance enough that style calculation becomes less of a concern. As I mentioned before, Stylo exists, it’s very good, and other browsers are free to borrow its ideas for their own engines. If every browser were as fast as Firefox, I wouldn’t have a lot of material for this blog post.

Chart data, see details in table below

This is the same data presented in this post, but on a single chart. Just notice how much Firefox stands out from the other browsers.

Scenario                            | Chrome 102 | Firefox 101 | Safari 15.5
Scoping – classes                   | 357        | 30          | 75
Scoping – attributes                | 614        | 38          | 812
Shadow DOM                          | 49         | 26          | 94
Unscoped                            | 1022       | 114         | 840
Scoping – classes – concatenated    | 48         | 29          | 73
Scoping – attributes – concatenated | 43         | 38          | 820

For those who dislike shadow DOM, there is also a burgeoning proposal in the CSS Working Group for style scoping. If this proposal were adopted, it could provide a less intrusive browser-native scoping mechanism than shadow DOM, similar to the abandoned <style scoped> proposal. I’m not a browser developer, but based on my reading of the spec, I don’t see why it couldn’t offer the same performance benefits we see with shadow DOM.

In any case, I hope this blog post was interesting, and helped shine light on an odd and somewhat under-explored space in web performance. Here is the benchmark source code and a live demo in case you’d like to poke around.

Thanks to Alex Russell and Thomas Steiner for feedback on a draft of this blog post.

Afterword – more data

Updated June 23, 2022

After writing this post, I realized I should take my own advice and automate the benchmark so that I could have more confidence in the numbers (and make it easier for others to reproduce).

So, using Tachometer, I re-ran the benchmark, taking the median of 25 samples, where each sample uses a fresh browser session. Here are the results:

Chart data, see details in table below

Scenario                            | Chrome 102 | Firefox 101 | Safari 15.5
Scoping – classes                   | 277.1      | 45          | 80
Scoping – attributes                | 418.8      | 54          | 802
Shadow DOM                          | 56.8       | 67          | 82
Unscoped                            | 820.4      | 190         | 857
Scoping – classes – concatenated    | 44.3       | 42          | 80
Scoping – attributes – concatenated | 44.5       | 51          | 802
Unscoped – concatenated             | 251.3      | 167         | 865

As you can see, the overall conclusion of my blog post doesn’t change, although the numbers have shifted slightly in absolute terms.

I also added “Unscoped – concatenated” as a category, because I realized that the “Unscoped” scenario would benefit from the concatenation optimization as well (in Chrome, at least). It’s interesting to see how much of the perf win is coming from concatenation, and how much is coming from scoping.

If you’d like to see the raw numbers from this benchmark, you can download them here.

Second afterword – even more data

Updated June 25, 2022

You may wonder how much Firefox’s Stylo engine is benefiting from the 10 cores in that 2021 MacBook Pro. So I unearthed my old 2014 Mac Mini, which has only 2 cores but (surprisingly) can still run macOS Monterey. Here are the results:

Chart data, see details in table below

Scenario                            | Chrome 102 | Firefox 101 | Safari 15.5
Scoping – classes                   | 717.4      | 107         | 187
Scoping – attributes                | 1069.5     | 162         | 2853
Shadow DOM                          | 227.7      | 117         | 233
Unscoped                            | 2674.5     | 452         | 3132
Scoping – classes – concatenated    | 189.3      | 104         | 188
Scoping – attributes – concatenated | 191.9      | 159         | 2826
Unscoped – concatenated             | 865.8      | 422         | 3148

(Again, this is the median of 25 samples. Raw data.)

Amazingly, Firefox seems to be doing even better here relative to the other browsers. For “Unscoped,” it’s 14.4% of the Safari number (vs 22.2% on the MacBook), and 16.9% of the Chrome number (vs 23.2% on the MacBook). Whatever Stylo is doing, it’s certainly impressive.

Third update – scoping strategies

Updated October 8, 2022

I was curious about which kind of scoping strategy (e.g. Svelte-style or Vue-style) performed best in the benchmark. So I updated the benchmark to generate three “scoped selector” styles:

  1. Full selector: à la Svelte, every part of the selector has a class or attribute added (e.g. div div becomes div.xyz div.xyz)
  2. Right-hand side (RHS): à la Vue, only the right-hand side of the selector is scoped with a class or attribute (e.g. div div becomes div div.xyz)
  3. Tag prefix: à la Enhance, the tag name of the component is prefixed (e.g. div div becomes my-component div div)

Here are the results, taking the median of 25 iterations on a 2014 Mac Mini (raw data):

Chart data, see table below

Same chart with “Unscoped” removed

Chart data, see table below

Scenario                    | Chrome 106 | Firefox 105 | Safari 16
Shadow DOM                  | 237.1      | 120         | 249
Scoping – classes – RHS     | 643.1      | 110         | 190
Scoping – classes – full    | 644.1      | 111         | 193
Scoping – attributes – RHS  | 954.3      | 152         | 200
Scoping – attributes – full | 964        | 146         | 204
Scoping – tag prefix        | 1667.9     | 163         | 316
Unscoped                    | 9767.5     | 3436        | 6829

Note that this version of the benchmark is slightly different from the previous one – I wanted to cover more selector styles, so I changed how the source CSS is generated to include more pseudo-classes in the ancestor position (e.g. :nth-child(2) div). This is why the “unscoped” numbers are higher than before.

My first takeaway is that Safari 16 has largely fixed the problem with attribute selectors – they are now roughly the same as class selectors. (This optimization seems to be the reason.)

In Firefox, classes are still slightly faster than attributes. I actually reached out to Emilio Cobos Álvarez about this, and he explained that, although Firefox did make an optimization to attribute selectors last year (prompted by my previous blog post), class selectors still have “a more micro-optimized code path.” To be fair, though, the difference is not enormous.

In Chrome, class selectors comfortably outperform attribute selectors, and the tag prefix is further behind. Note, though, that these are the “unconcatenated” numbers – when applying the concatenation optimization, all the numbers decrease for Chrome:

Chart data, see table below

Same chart with “Unscoped” removed

Chart data, see table below

Scenario                                   | Chrome 106 | Firefox 105 | Safari 16
Shadow DOM                                 | 237.1      | 120         | 249
Scoping – classes – RHS – concatenated     | 182        | 107         | 192
Scoping – classes – full – concatenated    | 183.6      | 107         | 190
Scoping – attributes – RHS – concatenated  | 185.8      | 148         | 198
Scoping – attributes – full – concatenated | 187.1      | 142         | 204
Scoping – tag prefix – concatenated        | 288.7      | 159         | 315
Unscoped – concatenated                    | 6476.3     | 3526        | 6882

With concatenation, the difference between classes and attributes is largely erased in Chrome. As before, concatenation has little to no impact on Firefox or Safari.

In terms of which scoping strategy is fastest, overall the tag prefix seems to be the slowest, and classes are faster than attributes. Between “full” selector scoping and RHS scoping, there does not seem to be a huge difference. And overall, any scoping strategy is better than unscoped styles. (Although do keep in mind this is a microbenchmark, and some of the selectors it generates are a bit tortured and elaborate, e.g. :not(.foo) :nth-child(2):not(.bar). In a real website, the difference would probably be less pronounced.)

I’ll also note that the more work I do in this space, the less my work seems to matter – which is a good thing! Between blogging and filing bugs on browsers, I seem to have racked up a decent body count of browser optimizations. (Not that I can really take credit; all I did was nerd-snipe the relevant browser engineers.) Assuming Chromium fixes the concatenation perf cliff, there won’t be much to say except “use some kind of CSS scoping strategy; they’re all pretty good.”

Dialogs and shadow DOM: can we make it accessible?

Last year, I wrote about managing focus in the shadow DOM, and in particular about modal dialogs. Since the <dialog> element has now shipped in all browsers, and the inert attribute is starting to land too, I figured it would be a good time to take another look at getting dialogs to play nicely with shadow DOM.

This post is going to get pretty technical, especially when it comes to the nitty-gritty details of accessibility and web standards. If you’re into that, then buckle up! The ride may be a bit bumpy.

Quick recap

Shadow DOM is weird. On paper, it doesn’t actually change what you can do in the DOM – with open mode, at least, you can access any element on the page that you want. In practice, though, shadow DOM upends a lot of web developer expectations about how the DOM works, and makes things much harder.

Image of Lisa Simpson in front of a sign saying "Keep out. Or enter, I'm a sign not a cop."

I credit Brian Kardell for this description of open shadow DOM, which is maybe the most perfect distillation of how it actually works.

Note: Shadow DOM has two modes: open and closed. Closed mode is a lot more restrictive, but it’s less common – the majority of web component frameworks use open by default (e.g. Angular, Fast, Lit, LWC, Remount, Stencil, Svelte, Vue). Somewhat surprisingly, though, open mode is only 3 times as popular as closed mode, according to Chrome Platform Status (9.9% vs 3.5%).

For accessibility reasons, modal dialogs need to implement a focus trap. However, the DOM doesn’t have an API for “give me all the elements on the page that the user can Tab through.” So web developers came up with creative solutions, most of which amount to:

dialog.querySelectorAll('button, input, a[href], ...')

Unfortunately this is the exact thing that doesn’t work in the shadow DOM. querySelectorAll only grabs elements in the current document or shadow root; it doesn’t deeply traverse.

Like a lot of things with shadow DOM, there is a workaround, but it requires some gymnastics. These gymnastics are hard, and have a complexity and (probably) performance cost. So a lot of off-the-shelf modal dialogs don’t handle shadow DOM properly (e.g. a11y-dialog does not).
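
The gymnastics look roughly like this: a simplified sketch that recurses into open shadow roots (and still can’t see into closed or user-agent ones). A real implementation also has to deal with <slot>s, tab order, visibility, and disabled elements.

// Collect candidate tabbable elements, descending into open shadow roots.
const TABBABLE = 'button, input, select, textarea, a[href], [tabindex]';

function getTabbableElements(root) {
  const results = [];
  for (const element of root.querySelectorAll('*')) {
    if (element.matches(TABBABLE)) {
      results.push(element);
    }
    if (element.shadowRoot) { // null for closed and user-agent shadow roots
      results.push(...getTabbableElements(element.shadowRoot));
    }
  }
  return results;
}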

Note: My goal here isn’t to criticize a11y-dialog. I think it’s one of the best dialog implementations out there. So if even a11y-dialog doesn’t support shadow DOM, you can imagine a lot of other dialog implementations probably don’t, either.

A constructive dialog

“But what about <dialog>?”, you might ask. “The dang thing is called <dialog>; can’t we just use that?”

If you had asked me a few years ago, I would have pointed you to Scott O’Hara’s extensive blog post on the subject, and said that <dialog> had too many accessibility gotchas to be a practical solution.

If you asked me today, I would again point you to the same blog post. But this time, there is a very helpful 2022 update, where Scott basically says that <dialog> has come a long way, so maybe it’s time to give it a second chance. (For instance, the issue with returning focus to the previously-focused element is now fixed, and the need for a polyfill is much reduced.)

Note: One potential issue with <dialog>, mentioned in Rob Levin’s recent post on the topic, is that clicking outside of the dialog should close it. This has been proposed for the <dialog> element, but the WAI ARIA Authoring Practices Guide doesn’t actually stipulate this, so it seems like optional behavior to me.

To be clear: <dialog> still doesn’t give you 100% of what you’d need to implement a dialog (e.g. you’d need to lock the background scroll), and there are still some lingering discussions about how to handle initial focus. For that reason, Scott still recommends just using a battle-tested library like a11y-dialog.

As always, though, shadow DOM makes things more complicated. And in this case, <dialog> actually has some compelling superpowers:

  1. It automatically limits focus to the dialog, with correct Tab order, even in shadow DOM.
  2. It works with closed shadow roots as well, which is impossible in userland solutions.
  3. It also works with user-agent shadow roots. (E.g. you can Tab through the buttons in a <video controls> or <audio controls>.) This is also impossible in userland, since these elements function effectively like closed shadow roots.
  4. It correctly returns focus to the previously-focused element, even if that element is in a closed shadow root. (This is possible in userland, but you’d need an API contract with the closed-shadow component.)
  5. The Esc key correctly closes the modal, even if the focus is in a user-agent shadow root (e.g. the pause button is focused when you press Esc). This is also not possible in userland.
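
For illustration, the basic usage is just a few lines. This is a minimal sketch, where #open-dialog is a hypothetical trigger button:

const dialog = document.querySelector('dialog');
const openButton = document.querySelector('#open-dialog');

openButton.addEventListener('click', () => {
  dialog.showModal(); // focus is contained within the dialog, even across shadow boundaries
});

dialog.addEventListener('close', () => {
  // focus is automatically returned to the previously-focused element
  console.log('closed with returnValue:', dialog.returnValue);
});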

Here is a demo:

Note: Eagle-eyed readers may wonder: what if the first tabbable element in the dialog is in a shadow root? Does it correctly get focus? The short answer is: yes in Chrome, no in Firefox or Safari (demo). Let’s hope those browsers fix it soon.

So should everybody just switch over to <dialog>? Not so fast: it actually doesn’t perfectly handle focus, per the WAI ARIA Authoring Practices Guide (APG), because it allows focus to escape to the browser chrome. Here’s what I mean:

  • You reach the last tabbable element in the dialog and press Tab.
    • Correct: focus moves to the first tabbable element in the dialog.
    • Incorrect (<dialog>): focus goes to the URL bar or somewhere else in the browser chrome.
  • You reach the first tabbable element in the dialog and press Shift+Tab.
    • Correct: focus moves to the last tabbable element in the dialog.
    • Incorrect (<dialog>): focus goes to the URL bar or somewhere else in the browser chrome.

This may seem like a really subtle difference, but the consensus of accessibility experts seems to be that the WAI ARIA APG is correct, and <dialog> is wrong.

Note: I say “consensus,” but… there isn’t perfect consensus. You can read this comment from James Teh or Scott O’Hara’s aforementioned post (“This is good behavior, not a bug”) for dissenting opinions. In any case, the “leaky” focus trap conflicts with the WAI ARIA APG and the way userland dialogs have traditionally worked.

So we’ve reached (yet another!) tough decision with <dialog>. Do we accept <dialog>, because at least it gets shadow DOM right, even though it gets some other stuff wrong? Do we try to build our own thing? Do we quit web development entirely and go live the bucolic life of a potato farmer?

Inert matter

While I was puzzling over this recently, it occurred to me that inert may be a step forward to solving this problem. For those unfamiliar, inert is an attribute that can be used to mark sections of the DOM as “inert,” i.e. untabbable and invisible to screen readers:

<main inert></main>
<div role="dialog"></div>
<footer inert></footer>

In this way, you could mark everything except the dialog as inert, and focus would be trapped inside the dialog.
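
Here is a sketch of how you might apply this from JavaScript when the dialog opens, assuming the dialog is a direct child of <body>:

// Make everything except the dialog inert while it's open.
function makeBackgroundInert(dialog) {
  const madeInert = [];
  for (const sibling of document.body.children) {
    if (sibling !== dialog && !sibling.inert) {
      sibling.inert = true;
      madeInert.push(sibling);
    }
  }
  // Returns a cleanup function to call when the dialog closes.
  return () => {
    for (const sibling of madeInert) {
      sibling.inert = false;
    }
  };
}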

Here is a demo:

As it turns out, this works perfectly for tabbing through elements in the shadow DOM, just like <dialog>! Unfortunately, it has exactly the same problem with focus escaping to the browser chrome. This is no accident: the behavior of <dialog> is defined in terms of inert.

Can we still solve this, though? Unfortunately, I’m not sure it’s possible. I tried a few different techniques, such as listening for Tab events and checking if the activeElement has moved outside of the modal, but the problem is that you still, at some point, need to figure out what the “first” and “last” tabbable elements in the dialog are. To do this, you need to traverse the DOM, which means (at the very least) traversing open shadow roots, which doesn’t work for closed or user-agent shadow roots. And furthermore, it involves a lot of extra work for the web developer, who has probably lost focus at this point and is daydreaming about that nice, quiet potato farm.

Note: inert also, sadly, does not help with the Esc key in user-agent shadow roots, or returning focus to closed shadow roots when the dialog is closed, or setting initial focus on an element in a closed shadow root. These are <dialog>-only superpowers. Not that you needed any extra convincing.

Conclusion

Until the spec and browser issues have been ironed out (e.g. browsers change their behavior so that focus doesn’t escape to the browser chrome, or they give us some entirely different “focus trap” primitive), I can see two reasonable options:

  1. Use something like a11y-dialog, and don’t use shadow DOM or user-agent shadow components like <video controls> or <audio controls>. (Or do some nasty hacks to make it partially work.)
  2. Use shadow DOM, but don’t bother solving the “focus escapes to the browser chrome” problem. Use <dialog> (or a library built on top of it) and leave it at that.

For my readers who were hoping that I’d drop some triumphant “just npm install nolans-cool-dialog and it will work,” I’m sorry to disappoint you. Browsers are still rough around the edges in this area, and there aren’t a lot of great options. Maybe there is some mad-science way to actually solve this, but even that would likely involve a lot of complexity, so it wouldn’t be ideal.

Alternatively, maybe some of you are thinking that I’m focusing too much on closed and user-agent shadow roots. As long as you’re only using open shadow DOM (which, recall, is like the sign that says “I’m a sign, not a cop”), you can do whatever you want. So there’s no problem, right?

Personally, though, I like using <video controls> and <audio controls> (why ship a bunch of JavaScript to do something the browser already does?). And furthermore, I find it odd that if you put a <video controls> inside a <dialog>, you end up with something that’s impossible to make accessible per the WAI ARIA APG. (Is it too much to ask for a little internal consistency in the web platform?)

In any case, I hope this blog post was helpful for others tinkering around with the same problems. I’ll keep an eye on the browsers and standards space, and update this post if anything promising emerges.

The collapse of complex software

In 1988, the anthropologist Joseph Tainter published a book called The Collapse of Complex Societies. In it, he described the rise and fall of great civilizations such as the Romans, the Mayans, and the Chacoans. His goal was to answer a question that had vexed thinkers over the centuries: why did such mighty societies collapse?

In his analysis, Tainter found the primary enemy of these societies to be complexity. As civilizations grow, they add more and more complexity: more hierarchies, more bureaucracies, deeper intertwinings of social structures. Early on, this makes sense: each new level of complexity brings rewards, in terms of increased economic output, tax revenue, etc. But at a certain point, the law of diminishing returns sets in, and each new level of complexity brings fewer and fewer net benefits, dwindling down to zero and beyond.

But since complexity has worked so well for so long, societies are unable to adapt. Even when each new layer of complexity starts to bring zero or even negative returns on investment, people continue trying to do what worked in the past. At some point, the morass they’ve built becomes so dysfunctional and unwieldy that the only solution is collapse: i.e., a rapid decrease in complexity, usually by abolishing the old system and starting from scratch.

What I find fascinating about this (besides the obvious implications for modern civilization) is that Tainter could have been writing about software.

Anyone who’s worked in the tech industry for long enough, especially at larger organizations, has seen it before. A legacy system exists: it’s big, it’s complex, and no one fully understands how it works. Architects are brought in to “fix” the system. They might wheel out a big whiteboard showing a lot of boxes and arrows pointing at other boxes, and inevitably, their solution is… to add more boxes and arrows. Nobody can subtract from the system; everyone just adds.

Photo of a man standing in front of a whiteboard with a lot of boxes and arrows and text on the boxes

“EKS is being deprecated at the end of the month for Omega Star, but Omega Star still doesn’t support ISO timestamps.” We’ve all been there. (Via Krazam)

This might go on for several years. At some point, though, an organizational shakeup probably occurs – a merger, a reorg, the polite release of some senior executive to go focus on their painting hobby for a while. A new band of architects is brought in, and their solution to the “big diagram of boxes and arrows” problem is much simpler: draw a big red X through the whole thing. The old system is sunset or deprecated, the haggard veterans who worked on it either leave or are reshuffled to other projects, and a fresh-faced team is brought in to, blessedly, design a new system from scratch.

As disappointing as it may be for those of us who might aspire to write the kind of software that is timeless and enduring, you have to admit that this system works. For all its wastefulness, inefficiency, and pure mendacity (“The old code works fine!” “No wait, the old code is terrible!”), this is the model that has sustained a lot of software companies over the past few decades.

Will this cycle go on forever, though? I’m not so sure. Right now, the software industry has been in a nearly two-decade economic boom (with some fits and starts), but the one sure thing in economics is that booms eventually turn to busts. During the boom, software companies can keep hiring new headcount to manage their existing software (i.e. more engineers to understand more boxes and arrows), but if their labor force is forced to contract, then that same system may become unmaintainable. A rapid and permanent reduction in complexity may be the only long-term solution.

One thing working in complexity’s favor, though, is that engineers like complexity. Admit it: as much as we complain about other people’s complexity, we love our own. We love sitting around and dreaming up new architectural diagrams that can comfortably sit inside our own heads – it’s only when these diagrams leave our heads, take shape in the real world, and outgrow the size of any one person’s head that the problems begin.

It takes a lot of discipline to resist complexity, to say “no” to new boxes and arrows. To say, “No, we won’t solve that problem, because that will just introduce 10 new problems that we haven’t imagined yet.” Or to say, “Let’s go with a much simpler design, even if it seems amateurish, because at least we can understand it.” Or to just say, “Let’s do less instead of more.”

Simplicity of design sounds great in theory, but it might not win you many plaudits from your peers. A complex design means more teams to manage more parts of the system, more for the engineers to do, more meetings and planning sessions, maybe some more patents to file. A simple design might make it seem like you’re not really doing your job. “That’s it? We’re done? We can clock out?” And when promotion season comes around, it might be easier to make a case for yourself with a dazzling new design than a boring, well-understood solution.

Ultimately, I think whether software follows the boom-and-bust model, or a more sustainable model, will depend on the economic pressures of the organization that is producing the software. A software company that values growth at all cost, like the Romans eagerly gobbling up more and more of Gaul, will likely fall into the “add-complexity-and-collapse” cycle. A software company with more modest aims, that has a stable customer base and doesn’t change much over time (does such a thing exist?) will be more like the humble tribe that follows the yearly migration of the antelope and focuses on sustainable, tried-and-true techniques. (Whether such companies will end up like the hapless Gauls, overrun by Caesar and his armies, is another question.)

Personally, I try to maintain a good sense of humor about this situation, and to avoid giving in to cynicism or despair. Software is fun to write, but it’s also very impermanent in the current industry. If the code you wrote 10 years ago is still in use, then you have a lot to crow about. If not, then hey, at least you’re in good company with the rest of us, who probably make up the majority of software developers. Just keep doing the best you can, and try to have a healthy degree of skepticism when some wild-eyed architect wheels out a big diagram with a lot of boxes and arrows.