My talk on CSS runtime performance

A few months ago, I gave a talk on CSS performance at performance.now() in Amsterdam. The recording is available online:

(You can also read the slides.)

This is one of my favorite talks I’ve ever given. It was the product of months (honestly, years) of research, prompted by a couple questions:

  1. What is the fastest way to implement scoped styles? (Surprisingly few people seem to ask this question.)
  2. Does using shadow DOM improve style performance? (Short answer: yes.)

To answer these questions (and more), I did a bunch of research into how browsers work under the hood. This included combing through old Servo discussions from 2013, reaching out to browser developers like Manuel Rego Casasnovas and Emilio Cobos Álvarez, reading browser PRs, and writing lots of benchmarks.

In the end, I’m pretty satisfied with the talk. My main goal was to shine a light on all the heroic work that browser vendors have done over the years to make CSS so performant. Much of this stuff is intricate and arcane (like Bloom filters), but I hoped that with some simple diagrams and animations, I could bring this work to life.

The two outcomes I’d love to see from this talk are:

  1. Web developers spend more time thinking about and measuring CSS performance. (Pssst, check out the SelectorStats section of my guide to Chrome tracing!)
  2. Browser vendors provide better DevTools to understand CSS performance. (In the talk, I pitch this as a SQL EXPLAIN for CSS.)

What I didn’t want to do in this talk was rain on anybody’s parade who is trying to do sophisticated things with CSS. More and more, I am seeing ambitious usage of new CSS features like :has and container queries, and I don’t want people to feel like they should avoid these techniques and limit themselves to classes and IDs. I just want web developers to consider the cost of CSS, and to get more comfortable with using the DevTools to understand which kinds of CSS patterns may be slowing down their website.

I also got some good feedback from browser DevTools folks after my talk, so I’m hopeful for the future of CSS performance. As techniques like shadow DOM and native CSS scoping become more widespread, it may even mitigate a lot of my worries about CSS perf. In any case, it was a fascinating topic to research, and I hope that folks were intrigued and entertained by my talk.

Retiring Pinafore

Five years ago, I started a journey to build a better Mastodon client – one focused on performance and simplicity. And I did! Pinafore is the main Mastodon client I’ve used myself since I first released it.

After five years, though, my relationship with social media has changed, and it’s time for me to put Pinafore out to pasture. The pinafore.social website will still work, but I’ve marked the repo as unmaintained.

Why retire Pinafore?

I don’t have the energy to do this anymore. Pinafore has gone from being a fun side project to being a source of dread for me. There is a constant stream of bug reports, feature requests, and pull requests to manage, and I just don’t want to spend my free time doing this anymore.

By the way, this is not my first rodeo. Read this post on my breakup with another open-source project.

Why not pass it off to a new maintainer?

Running a fediverse client requires trust. People who use Pinafore are trusting me to handle their data securely. As such, I’ve been meticulous about using good security headers and making pro-privacy decisions. A new maintainer (through malice or ignorance) could add new functionality that compromises on security or privacy, essentially trading on my good name while harming users.

Over the years, I have had lots of feature requests that would inadvertently cause a privacy or security leak, and I’ve pushed back on every single one. (E.g. “Why not contact third-party servers to show the full favorite/boost count?” Well, because users may trust their home server, but that doesn’t mean they trust random third-party servers.)

Rather than trust that a new maintainer will keep these high standards in place, I’d rather put Pinafore in a frozen state.

Why not shut it down entirely?

Thanks to Vercel’s generous free tier, Pinafore costs me $0 per month to run. It’s just static HTML/CSS/JS files, after all.

Why are you the sole maintainer?

I’m not – there have been tons of contributions through the years. But for the most part, these have been “drive-by” in nature (nothing wrong with that!), rather than someone deeply learning the codebase end-to-end.

I suspect one of the reasons for this is that Pinafore is written in Svelte v2 and Sapper – both of which are deprecated in favor of Svelte v3 and SvelteKit. Not only is there no migration path from Svelte v2 to v3, but there isn’t one from Sapper to SvelteKit either. (And on top of that, I had to fork Sapper pretty heavily.) Anyone making a bet on learning Pinafore’s tech stack is investing in a dead framework, so it’s not very attractive for new maintainers.

So why didn’t I bother updating it? Well, it’s a lot of work to manually migrate 200+ components to what is essentially a new framework. And plus, as far as I could tell, it would be a pure DX (Developer Experience) improvement, not a UX (User Experience) improvement. (I just wouldn’t be using any of SvelteKit’s new features, and Svelte v3 doesn’t seem to have massive UX improvements over Svelte v2.)

What did you learn while writing Pinafore?

Now here’s an interesting question! And one that may be useful for those building their own Mastodon (or fediverse) clients. It is my sincerest wish that Pinafore inspires other developers to build their own (and better!) clients.

API and offline

First off, ActivityPub does have a client-to-server API, but as far as I can tell, it’s not really worth implementing. Mastodon is the 800-pound gorilla in the fediverse, and it doesn’t implement this API; other servers (such as Pleroma and Misskey) implement their own flavor of Mastodon’s API instead. And plus, Mastodon’s REST API is pretty sensible and doesn’t change too frequently. (And when it does, they add a /v2 endpoint while still maintaining the /v1 version.)

However, the fact that Mastodon has a fairly bog-standard REST API makes it pretty difficult to implement offline support, as I did in Pinafore. Essentially, I implemented a full mirror of Mastodon’s PostgreSQL database structure, but on top of IndexedDB. On top of that, I had to implement a variety of strategies to synchronize data between the client and server:

  • As new statuses stream in, how do you backfill ones you may have missed if the user went offline? Well, you have to just keep fetching statuses to fill the gap.
  • How do you deal with deleted statuses? Well, you have to remove them from the in-memory store, and the database, and then also go ahead and delete any statuses that boosted them or notifications that reference them… It’s a lot. (And don’t get me started on editing statuses! I didn’t even get around to that.)
  • How to deal with slow servers? Well, you can implement an optimistic UI that shows (for instance) a “favorited” animation while still waiting for the server to respond. (And also cancels if the server responds with an error or times out.)
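
To make that last point concrete, here’s a rough sketch of the optimistic-favorite pattern. This is not Pinafore’s actual code – store and api are hypothetical stand-ins – although the endpoint is Mastodon’s real favourite route:

// Sketch only: `store` and `api` are hypothetical stand-ins for the client's internals
async function favoriteStatus(store, api, statusId) {
  store.setFavorited(statusId, true) // update the UI immediately
  try {
    // Mastodon's REST endpoint for favoriting a status
    await api.post(`/api/v1/statuses/${statusId}/favourite`)
  } catch (err) {
    store.setFavorited(statusId, false) // roll back on error or timeout
    throw err
  }
}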

From my years working on PouchDB, I know that it’s a fool’s errand to try to implement proper client-server synchronization without a holistic plan for managing revisions, conflicts, and offline states… and yet, I did it. The end result is pretty impressive in my opinion, even if arguably it doesn’t add a lot to the user experience. (There’s not much you can do in a social media app when you’re offline, and I’m sure people still frequently have to refresh when stuff gets out-of-date.)

Performance

Speaking of which, refreshes should be fast! And I believe Pinafore is pretty good at this. (I can’t find the link, but someone did a recent analysis showing that Pinafore uses less CPU and memory than the default Mastodon frontend.)

In short, I’d say it’s entirely possible to build a performant SPA (despite some of my misgivings about SPAs). But it helps if:

  • You have a browser perf background (like me).
  • You’re only one developer. (Much harder to implement tricky perf optimizations if you have to explain them to your colleagues!)
  • You use a perf-focused framework like Svelte.
  • You don’t do much! Pinafore has a fraction of the features of the main Mastodon frontend.
  • You’re merciless about removing dependencies, or writing your own dependencies when the existing ones are too slow or bloated.
  • You’re meticulous about little micro-optimizations (e.g. debouncing, event delegation, or page splitting) that improve the user experience, especially on low-end devices, but make the developer experience a lot worse.

Not all of this is necessary to make a fast, fluid app, but it certainly helps. And the fact that I ended up building something that can run on feature phones gives me a lot of satisfaction.

Accessibility

I didn’t set out to write “the accessible Mastodon client,” but I’ve heard from a lot of folks that Pinafore is one of the better ones out there, especially for blind users.

For this, I mostly have to thank Marco Zehe and James Teh (among others), who provided tons of feedback and really helped with the polish of the screen reader experience. Accessibility isn’t always black-and-white – like anything in design, sometimes there are tradeoffs and differing opinions on what the best option is. Leaning on the expertise of actual blind users gave me insights that I couldn’t have had otherwise.

Another thing that helps is just giving a damn. When I started on Pinafore, I didn’t really know much about accessibility, but I decided it was time to finally learn. I started off with a basic intro to screen readers from Rob Dodson, played around with VoiceOver and NVDA, and tried to read and understand as much as I could. I wouldn’t call myself an accessibility expert, but I’ve made a lot of progress in the past five years, and now I wince when I look back at some of the code I wrote in the past.

In the end, I found accessibility to be quite rewarding. Rather than feeling like a chore or a box-ticking exercise, it feels like a fun challenge. In some cases it’s just about leaning on existing web standards, but in other cases it feels like you’re building a parallel semantic UI to the visual one. Sometimes I found that this even influenced the overall architecture of my code – which goes to show that it’s better to consider accessibility upfront rather than as an afterthought.

That said, I definitely messed up some stuff when it comes to accessibility – color contrast in particular is something I did a poor job on. (Luckily Nick Colley has put a bunch of work into Pinafore to improve this!)

Conclusion

Pinafore was a fun project. I learned a lot about web development while working on it. Often, when a new feature landed in browsers – e.g. color-scheme, maskable icons, or various Intl APIs – I would eagerly integrate it into Pinafore, which helped me learn more about how browsers work.

In another case, I went a bit overboard on building my own emoji picker for Pinafore, and in the process learned way more than I ever wanted to know about fonts and emoji rendering.

I also think that Pinafore accomplished many of the goals I had in mind when I originally wrote it. At the time, Mastodon only had a multi-column UI, which many users found overwhelming and confusing. Pinafore demonstrated that a single-column layout could be a viable alternative, and since then, Mastodon has defaulted to a single-column layout.

Back then, there was also only one web-based Mastodon client (Halcyon), and it didn’t support logging in to more than one instance at a time. Pinafore proved it was possible for a web-based client to do this (not obvious given CORS constraints!), and nowadays there are lots of web-based clients, such as Sengi, Cuckoo+, and Elk, and many of them support multi-instance logins.

Pinafore isn’t going anywhere – like I mentioned, the site is still up and running. I also think it could serve as an interesting point of comparison for other Mastodon clients. (Try to beat Pinafore on performance and accessibility! I think that would be a great outcome.)

I also want to thank everyone who followed along with me on this journey over the years, and who either used Pinafore, filed a bug, or contributed to it. Thank you for giving me one of my career-defining projects over the last half-decade. It wouldn’t have been possible without your help.

2022 book review

Once again, here are the books I read this year, and especially the ones I’d recommend.

One interesting thing I noticed about this year: in years past, I mentioned trying to read more books written by women. Well this year, without consciously trying, 9 out of the 13 books I read were written by women. I’d pat myself on the back, but if I did a full accounting of all the books I’ve read in my lifetime, I probably have a huge deficit to make up.

In any case, bring on the books!

Fiction

The MaddAddam trilogy by Margaret Atwood (2003-2013)

Probably my favorite books I read all year. I’m a sucker for good sci-fi, and Atwood’s feels especially prescient. I can’t believe the first one (Oryx and Crake) was written almost two decades ago – the concerns about genetic manipulation and climate change are very top-of-mind nowadays.

The first two books are equally compelling, and they feel like separate, self-contained novels. Whereas the third one (while less thrilling to me) does a good job of bringing the two storylines together and tying up loose ends.

Also, this trilogy is just begging to be made into a prestige TV series – I wouldn’t be surprised if we get one in the next few years.

The Cage by Audrey Schulman (1994)

Audrey Schulman is quickly becoming one of my favorite writers. She writes true science fiction – with an emphasis on the “science” part. In each of her books you can tell she really does her research. In this one in particular, there are so many little touches (like the details on how cameras react to the extreme cold, or how frostbite feels on the skin) that you know she must have dug deep to bring this story to life.

Add on the vivid characters – she’s especially good at communicating what it feels like to be a woman in a male-dominated environment, in a “Jodie Foster in Silence of the Lambs” kind of way – and you end up with an amazing first novel.

The Dolphin House by Audrey Schulman (2022)

The latest book from Audrey Schulman, and equally as good as her first one. It’s best to read it without reading any blurbs about what it’s about. I’ll just say that, once again, the amount of research she does (especially into animal behavior) and the depth of her characters, make for an absorbing and satisfying read.

Their Eyes Were Watching God by Zora Neale Hurston (1937)

A moving story about love and loss. At first it reminded me a bit of Madame Bovary (the boredom of a cooped-up housewife), but the story moves at such a brisk pace with so many different subplots that you can’t really compare it to one single thing. Somber at times, funny at others, ultimately uplifting.

After Dark by Haruki Murakami (2004)

A strange book (aren’t all Murakami books strange?) but a compelling one. What Stephen King is to horror, Murakami seems to be to these kinds of uncanny situations that defy explanation. A short, fun read.

The Concubine by Norah Lofts (1963)

There are plenty of books, movies, and TV miniseries about the Anne Boleyn story, but this one was recommended to me as one of the best characterizations. Here we see Anne mostly as a tragic figure – a teenage girl who tries to master her own destiny but gets in way over her head. Meanwhile, Henry mostly comes off as a lecherous boor, quick to invent whatever moral authority he needs in the moment to justify his whims. A great but somewhat depressing book.

The Bell Jar by Sylvia Plath (1963)

I really enjoyed the first half of this book – the story of an ambitious but detached teenager is something I can personally identify with. The second half is where I lost interest, maybe because mental health and depression aren’t something I’ve had to deal with much. I imagine this book must have been a jaw-dropper when it was first released, at a time when mental health issues weren’t discussed with this much candor. I also think I would have enjoyed this book more when I was a mopey teenager.

The Alchemist by Paulo Coelho (1988)

A strange little book, somewhat reminiscent of The Little Prince. I can’t say I really loved it, but it’s a nice short story.

Go Tell It on the Mountain by James Baldwin (1953)

The family drama and meditations on faith and hypocrisy stuck with me the most, although I can’t say this was my favorite book I read this year. Worth a read, though.

No One Is Talking About This by Patricia Lockwood (2021)

The first half is a pretty good approximation of what it feels like to mainline Twitter directly into your veins. I’m ashamed of how many of the references I got. However, it feels like the book runs out of steam about halfway through, like it was trying to make a point about how real-life events can tear you away from “the portal,” but it didn’t quite land for me.

Nonfiction

Midnight in Chernobyl by Adam Higginbotham (2019)

I was enthralled by the HBO miniseries Chernobyl, so I picked up this book. It’s equally riveting, and I found it hard to put down. Something about the horror of the event, combined with the banality of the bureaucratic fumbling around it, fills me with awe and fascination. And honestly, some of the descriptions of toadying, cover-ups, and fudging of the truth that were rife in the Soviet Union remind me more than a little bit of working in big companies (although thankfully the stakes are substantially lower than Chernobyl).

Out of the Software Crisis by Baldur Bjarnason (2022)

A great book on software development, and one I might need to re-read. So much of our industry feels like it’s driven by hearsay, hunches, and charisma (what Bjarnason calls “the pop culture”), and as an antidote to that, this book is like a breath of fresh air.

Shadow DOM and accessibility: the trouble with ARIA

Shadow DOM is a kind of retcon for the web. As I’ve written in the past, shadow DOM upends a lot of developer expectations and invalidates many tried-and-true techniques that worked fine in the pre-shadow DOM world. One potentially surprising example is ARIA.

Quick recap: shadow DOM allows you to isolate parts of your DOM into encapsulated chunks – typically one per component. Meanwhile, ARIA is an accessibility primitive, which defines attributes like aria-labelledby and aria-describedby that can reference other elements by their IDs.

Do you see the problem yet? If not, I don’t blame you – this is a tricky intersection of various web technologies. Unfortunately though, if you want to use shadow DOM without breaking accessibility, then this is one of the things you will have to grapple with. So let’s dive in.

Sketch of the problem

In shadow DOM, element IDs are scoped to their shadow root. For instance, here are two components:

<custom-label>
  #shadow-root
    <label id="foo">Hello world</label>
</custom-label>

<custom-input>
  #shadow-root
    <input type="text" id="foo">
</custom-input>

In this case, the two elements have the same ID of "foo". And yet, this is perfectly valid in the world of shadow DOM. If I do:

document.getElementById('foo')

…it will actually return null, because these IDs are not globally scoped – they are locally scoped to the shadow root of each component:

document.querySelector('custom-label')
  .shadowRoot.getElementById('foo') // returns the <label>

document.querySelector('custom-input')
  .shadowRoot.getElementById('foo') // returns the <input>

So far, so good. But now, what if I want to use aria-labelledby to connect the two elements? I could try this:

<!-- NOTE: THIS DOES NOT WORK -->
<custom-input>
  #shadow-root
    <input type="text" aria-labelledby="foo">
</custom-input>

Why does this fail? Well, because the "foo" ID is locally scoped. This means that the <input> cannot reach outside its shadow DOM to reference the <label> from the other component. (Feel free to try this example in any browser or screen reader – it will not work!)

So how can we solve this problem?

Solution 1: janky workarounds

The first thing you might reach for is a janky workaround. For instance, you could simply copy the text content from the <label> and slam it into the <input>, replacing aria-labelledby with aria-label:

<custom-input>
  #shadow-root
    <input type="text" aria-label="Hello world">
</custom-input>

Now, though, you’ve introduced several problems:

  1. You need to set up a MutationObserver or similar technique to observe whenever the <label> changes (see the sketch after this list).
  2. You need to accurately calculate the accessible name of the <label>, and many off-the-shelf JavaScript libraries do not themselves support shadow DOM. So you have to hope that the contents of the <label> are simple enough for the calculation to work.
  3. This works for aria-labelledby because of the corresponding aria-label, but it doesn’t work for other attributes like aria-controls, aria-activedescendant, or aria-describedby. (Yes there is aria-description, but it doesn’t have full browser support.)
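
For the first problem, a minimal sketch might look something like this – assuming the <label>’s contents are simple enough that textContent is a reasonable stand-in for its accessible name:

// Sketch only: naive label syncing for the <custom-label>/<custom-input> example above
const label = document.querySelector('custom-label')
  .shadowRoot.querySelector('label')
const input = document.querySelector('custom-input')
  .shadowRoot.querySelector('input')

const syncLabel = () => {
  input.setAttribute('aria-label', label.textContent.trim())
}
syncLabel()

// re-sync whenever the label's contents change
new MutationObserver(syncLabel).observe(label, {
  childList: true,
  characterData: true,
  subtree: true
})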

Another workaround is to avoid using the <input> directly, and to instead expose semantics on the custom element itself. For instance:

<custom-input 
  role="textbox" 
  contenteditable="true"
  aria-labelledby="foo"
></custom-input>
<custom-label id="foo"></custom-label>

(ElementInternals, once it’s supported in all browsers, could also help here.)

At this point, though, you’re basically building everything from scratch out of <div>s, including styles, keyboard events, and ARIA states. (Imagine doing this for a radio button, with all of its various keyboard interactions.) And plus, it wouldn’t work with any kind of nesting – forget about having any wrapper components with their own shadow roots.

I’ve also experimented with even jankier workarounds that involve copying entire DOM trees around between shadow roots. It kinda works, but it introduces a lot of footguns, and we’re already well on our way to wild contortions just to replace a simple aria-labelledby attribute. So let’s explore some better techniques.

Solution 2: ARIA reflection

As it turns out, some smart folks at browser vendors and standards bodies have been hard at work on this problem for a while. A lot of this effort is captured in the Accessibility Object Model (AOM) specification.

And thanks to AOM, we have a (partial) solution by way of IDREF attribute reflection. If that sounds like gibberish, let me explain what it means.

In ARIA, there are a bunch of attributes that refer to other elements by their IDs (i.e. “IDREFs”). These are:

  • aria-activedescendant
  • aria-controls
  • aria-describedby
  • aria-details
  • aria-errormessage
  • aria-flowto
  • aria-labelledby
  • aria-owns

Historically, you could only use these as HTML attributes. But that carries with it the problem of shadow DOM and ID scoping.

So to solve that, we now have the concept of the ARIA mixin, which basically states that for every aria-* attribute, there is a corresponding aria* property on DOM elements, available via JavaScript. In the case of the IDREF attributes above, these would be:

  • ariaActiveDescendantElement
  • ariaControlsElements
  • ariaDescribedByElements
  • ariaDetailsElements
  • ariaErrorMessageElement
  • ariaFlowToElements
  • ariaLabelledByElements
  • ariaOwnsElements

This means that instead of:

input.setAttribute('aria-labelledby', 'foo')

… you can now do:

input.ariaLabelledByElements = [label]

… where label is the actual <label> element. Note that we don’t have to deal with the ID ("foo") at all, so there is no more issue with IDs being scoped to shadow roots. (Also note it accepts an array, because you can actually have multiple labels.)

Now, this spec is very new (the change was only merged in June 2022), so for now, these properties are not supported in all browsers. The patches have just started to land in WebKit and Chromium. (Work has not yet begun in Firefox.) As of today, these can only be used in WebKit Nightly and Chrome Canary (with the “experimental web platform features” flag turned on). So if you’re hoping to ship it into production tomorrow: sorry, it’s not ready yet.

The even more unfortunate news, though, is that this spec does not fully solve the issue. As it turns out, you cannot just link any two elements you want – you can only link elements where the containing shadow roots are in an ancestor-descendant relationship (and the relationship can only go in one direction). In other words:

element1.ariaLabelledByElements = [element2]

In the above example, the linkage will only work if at least one of the following is true – otherwise, the browser will treat it as a no-op:

  • element2 is in the same shadow root as element1
  • element2 is in a parent, grandparent, or ancestor shadow root of element1

This restriction may seem confusing, but the intention is to avoid accidental leakage, especially in the case of closed shadow DOM. ariaLabelledByElements is a setter, but it’s also a getter, and that means that anyone with access to element1 can get access to element2. Now normally, you can freely traverse up the tree in shadow DOM, even if you can’t traverse down – which means that, even with closed shadow roots, an element can always access anything in its ancestor hierarchy. So the goal of this restriction is to prevent you from leaking anything that wasn’t already leaked.
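
To make the allowed direction concrete, here’s a sketch of a wiring that satisfies the restriction: the <label> lives in the light DOM (an ancestor tree of the input’s shadow root), and the custom element links the two once the experimental property is available:

// Sketch only – ariaLabelledByElements is still experimental (see above).
// Assumes <label id="foo">Hello world</label> exists in the light DOM.
class CustomInput extends HTMLElement {
  connectedCallback() {
    const shadow = this.attachShadow({ mode: 'open' })
    shadow.innerHTML = '<input type="text">'
    const input = shadow.querySelector('input')
    const label = document.getElementById('foo')
    if (label && 'ariaLabelledByElements' in input) {
      input.ariaLabelledByElements = [label] // allowed: the label is in an ancestor tree
    }
  }
}
customElements.define('custom-input', CustomInput)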

Another problem with this spec is that it doesn’t work with declarative shadow DOM, i.e. server-side rendering (SSR). So your elements will remain inaccessible until you can use JavaScript to wire them up. (Which, for many people, is a dealbreaker.)

Solution 3: cross-root ARIA

The above solutions are what work today, at least in terms of the latest HTML spec and bleeding-edge versions of browsers. Since the problem is not fully solved, though, there is still active work being done in this space. The most promising spec right now is cross-root ARIA (originally authored by my colleague Leo Balter), which defines a fully-flexible and SSR-compatible API for linking any two elements you want, regardless of their shadow relationships.

The spec is rapidly changing, but here is a sketch of how the proposal looks today:

<!-- NOTE: DRAFT SYNTAX -->

<custom-label id="foo">
  <template shadowroot="open" 
            shadowrootreflectsariaattributes="aria-labelledby">
    <label reflectedariaattributes="aria-labelledby">
      Hello world
    </label>
  </template>
</custom-label>

<custom-input aria-labelledby="foo">
  <template shadowroot="open" 
            shadowrootdelegatesariaattributes="aria-labelledby">
    <input type="text" delegatedariaattributes="aria-labelledby">
  </template>
</custom-input>

A few things to notice:

  1. The spec works with Declarative Shadow DOM (hence I’ve used that format to illustrate).
  2. There are no restrictions on the relationship between elements.
  3. ARIA attributes can be exported (or “delegated”) out of shadow roots, as well as imported (or “reflected”) into shadow roots.

This gives web authors full flexibility to wire up elements however they like, regardless of shadow boundaries, and without requiring JavaScript. (Hooray!)

This spec is still in its early days, and doesn’t have any browser implementations yet. However, for those of us using web components and shadow DOM, it’s vitally important. Westbrook Johnson put it succinctly in this year’s Web Components Community Group meeting at TPAC:

“Accessibility with shadow roots is broken.”

Westbrook Johnson

Given all the problems I’ve outlined above, it’s hard for me to quibble with this statement.

What works today?

With the specs still landing in browsers or still being drafted, the situation can seem dire. It’s hard for me to give a simple “just use this API” recommendation.

So what is a web developer with a deadline to do? Well, for now, you have a few options:

  1. Don’t use shadow DOM. (Many developers have come to this conclusion!)
  2. Use elaborate workarounds, as described above.
  3. If you’re building something sophisticated that relies on several aria-* attributes, such as a combobox, then try to selectively use light DOM in cases where you can’t reach across shadow boundaries. (I.e. put the whole combobox in a single shadow root – don’t break it up into multiple shadow roots.)
  4. Use an ARIA live region instead of IDREFs. (This is the same technique used by canvas-based applications, such as Google Docs.) This option is pretty heavy-handed, but I suppose you could use it as a last resort.

Unfortunately there’s no one-size-fits-all solution. Depending on how you’ve architected your web components, one or multiple of the above options may work for you.

Conclusion

I’m hoping this situation will eventually improve. Despite all its flaws, I actually kind of like shadow DOM (although maybe it’s a kind of Stockholm syndrome), and I would like to be able to use it without worrying about accessibility.

For that reason, I’ve been somewhat involved recently with the AOM working group. It helps that my employer (Salesforce) has been working with Igalia to spec and implement this stuff as well. (It also helps that Manuel Rego Casasnovas is a beast who is capable of whipping up specs as well as patches to both WebKit and Chromium with what seems like relative ease.)

If you’re interested in this space, and would like to see it improve, I would recommend taking a look at the cross-root ARIA spec on GitHub and providing feedback. Or, make your voice heard in the Interop 2022 effort – where web components actually took the top spot in terms of web developer desire for more consistency across browsers.

The web is always improving, but it improves faster if web developers communicate their challenges, frustrations, and workarounds back to browser vendors and spec authors. That’s one of my goals with this blog post. So even if it didn’t solve every issue you have with shadow DOM and accessibility (other than maybe to scare you away from shadow DOM forever!), I hope that this post was helpful and informative.

Thanks to Manuel Rego Casasnovas and Westbrook Johnson for feedback on a draft of this blog post.

Thoughts on Mastodon

Five years ago, I was all-in on Mastodon. I deleted my Twitter account, set up a Mastodon instance, and encouraged my friends to join. A year later, I wrote my own Mastodon client in an attempt to make Mastodon faster and easier to use.

So with the recent Twitter exodus, and with seemingly every news outlet and tech blog talking about Mastodon, you’d think I’d be pretty pleased. And yet, I’m filled with a deep ambivalence.

Mastodon is great. It has a lot of advantages over Twitter. But in my own experience, I’ve found that the less time I spend on social media, the better I feel.

I like my RSS feed. The signal-to-noise ratio is high, the timeline is slow, and there are no notifications. That’s about the right speed of social media for me.

I still use Mastodon. But over time, Mastodon has become the place where I share interesting articles from my RSS feed, or my own blog posts, once a week or so. I read the comments, but rarely respond. It’s a largely write-only medium for me.

With so many people rediscovering Mastodon, though, I’ve done two things:

  1. I’ve beefed up my Mastodon instance and started a Patreon to help support the exploding usage.
  2. I’ve done some tinkering on my Mastodon client (Pinafore) and triaged the sudden onslaught of bug reports and feature requests.

I’ve done these things out of a sense of duty and obligation, but I know from experience that that’s not sustainable. My heart’s just not really in it, so maintaining these projects is probably not a great idea long-term. At some point, I will probably need to find a new maintainer for my Mastodon instance, and either “retire” Pinafore or pass it on to another maintainer.

For anyone who has just joined Mastodon from Twitter, and who is giddy about the possibilities of building a better, user-controlled social media: I applaud you! It is a worthy endeavor! But I would caution you to rein in your enthusiasm a bit, and read this thread from an ex-Twitter designer on what Twitter actually got right and Mastodon gets wrong, and this post from Alan Jacobs on how many Mastodon users have brought over their same bad habits – unmodified, unexamined – from the “hellsite.”

In my five years on Mastodon, I’ve found that there is a lot it does better than Twitter, but there is also a lot that is just endemic to social media. To the endless scroll. To the status games, the quest for adulation, the human urge to shame and shun and one-up and manipulate. I’m sure this goes back to Usenet – Jaron Lanier called it “chaotic human weather”.

There is a better way to foster kind, thoughtful, generous, joyful conversation on the internet. I’m not convinced that Mastodon has found the magic formula, but it is a step in the right direction. And as argued in this talk, I’m less interested in what the fediverse is now, than by what it could become. That depends on all of you, and what you choose to build with it.

A beginner’s guide to Chrome tracing

I’ve been doing web performance for a while, so I’ve spent a lot of time in the Performance tab of the Chrome DevTools. But sometimes when you’re debugging a tricky perf problem, you have to go deeper. That’s where Chrome tracing comes in.

Chrome tracing (aka Chromium tracing) lets you record a performance trace that captures low-level details of what the browser is doing. It’s mostly used by Chromium engineers themselves, but it can also be helpful for web developers when a DevTools trace is not enough.

This post is a short guide on how to use this tool, from a web developer’s point of view. I’m not going to cover everything – just the bare minimum to get up and running.

Setup

First off, as described in this helpful post, you’re going to want a clean browser window. The tracing tool measures everything going on in the browser, including background tabs and extensions, which just adds unnecessary noise.

You can launch a fresh Chrome window using this command (on Linux):

google-chrome \
  --user-data-dir="$(mktemp -d)" --disable-extensions

Or on macOS:

/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome \
  --user-data-dir="$(mktemp -d)" --disable-extensions

Or if you’re lazy (like me), you can install a standalone browser like Chrome Canary and run that.

Record

Next, go to about:tracing in the URL bar. (chrome:tracing or edge:tracing will also work, depending on your browser.) You’ll see a screen like this:

Screenshot of tracing tool with arrow pointing at Record

Click “Record.”

Next, you’ll be given a bunch of options. Here’s where it gets interesting.

Screenshot of tracing tools showing Edit categories with an arrow pointing at it

Usually “Web developer” is a fine default. But sometimes you want extra information, which you can get by clicking “Edit categories.” Here are some of the “cheat codes” I’ve discovered:

  • Check blink.user_timing to show user timings (i.e. performance.measures) in the trace. This is incredibly helpful for orienting yourself in a complex trace.
  • Check blink.debug to get SelectorStats, i.e. stats on slow CSS selectors during style calculation.
  • Check v8.runtime_stats for low-level details on what V8 is doing.

Note that you probably don’t want to go in here and check boxes with wild abandon. That will just make the trace slower to load, and could crash the tab. Only check things you think you’ll actually be using.
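
As a reminder, these user timings are just the standard performance.mark()/performance.measure() calls from your own code. For example, a “total” measure (like the one shown later in this post) could come from something like:

performance.mark('total-start')
doTheExpensiveThing() // hypothetical function you want to trace
performance.mark('total-end')
performance.measure('total', 'total-start', 'total-end')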

Next, click “Record.”

Now, switch over to another tab and do whatever action you want to record – loading a page, clicking a button, etc. Note that if you’re loading a page, it’s a good idea to start from about:blank to avoid measuring the unload of the previous page.

When you’re done recording, switch back and click “Stop.”

Analyze

Screenshot of tracing tool showing arrows pointing at Processes, None, and the Renderer process

In the tracing UI, the first thing you’ll want to do is remove the noise. Click “Processes,” then “None,” then select only the process you’re interested in. It should say “Renderer” plus the title of the tab where you ran your test.

Moving around the UI can be surprisingly tricky. Here is what I usually do:

  • Use the WASD keys to move left, right, or zoom in and out. (If you’ve played a lot of first-person shooters, you should feel right at home.)
  • Click-and-drag on any empty space to pan around.
  • Use the mousewheel to scroll up and down. Use ⌘/Alt + mousewheel to zoom in and out.

You’ll want to locate the CrRendererMain thread. This is the main thread of the renderer process. Under “Ungrouped Measure,” you should see any user timings (i.e. performance.measures) that you took in the trace.

In this example, I’ve located the Document::updateStyle slice (i.e. style calculation), as well as the SelectorStats right afterward. Below, I have a detailed table that I can click to sort by various columns. (E.g. you can sort by the longest elapsed time.)

Screenshot of tracing tool with arrows pointing to CrRendererMain, UpdateStyle, SelectorStats, and table of selectors

Note that I have a performance.measure called “total” in the above trace. (You can name it whatever you want.)

General strategy

I mostly use Chrome tracing when there’s an unexplained span of time in the DevTools. Here are some cases where I’ve seen it come in handy:

  • Time spent in IndexedDB (the IndexedDB flag can be helpful here).
  • Time spent in internal subsystems, such as accessibility or spellchecking.
  • Understanding which CSS selectors are slowest (see SelectorStats above).

My general strategy is to first run the tool with the default settings (plus blink.user_timing, which I almost always enable). This alone will often tell you more than the DevTools would.

If that doesn’t provide enough detail, I try to guess which subsystem of the browser has a performance problem, and tick flags related to that subsystem when recording. (For instance, skia is related to rendering, blink_style and blink.invalidation are probably related to style invalidation, etc.) Unfortunately this requires some knowledge of Chromium’s internals, along with a lot of guesswork.

When in doubt, you can always file a bug on Chromium. As long as you have a consistent repro, and you can demonstrate that it’s a Chromium-only perf problem, then the Chromium engineers should be able to route it to the right team.

Conclusion

The Chrome tracing tool is incredibly complex, and it’s mostly designed for browser engineers. It can be daunting for a web developer to pick up and use. But with a little practice, it can be surprisingly helpful, especially in odd perf edge cases.

There is also a new UI called Perfetto that some may find easier to use. I’m a bit old-school, though, so I still prefer the old UI for now.

I hope this short guide was helpful if you ever find yourself stuck with a performance problem in Chrome and need more insight into what’s going on!

See also: “Chrome Tracing for Fun and Profit” by Jeremy Rose.

Style performance and concurrent rendering

I was fascinated recently by “Why we’re breaking up with CSS-in-JS” by Sam Magura. It’s a great overview of some of the benefits and downsides of the “CSS-in-JS” pattern, as implemented by various libraries in the React ecosystem.

What really piqued my curiosity, though, was a link to this guide by Sebastian Markbåge on potential performance problems with CSS-in-JS when using concurrent rendering, a new feature in React 18.

Here is the relevant passage:

In concurrent rendering, React can yield to the browser between renders. If you insert a new rule in a component, then React yields, the browser then have to see if those rules would apply to the existing tree. So it recalculates the style rules. Then React renders the next component, and then that component discovers a new rule and it happens again.

This effectively causes a recalculation of all CSS rules against all DOM nodes every frame while React is rendering. This is VERY slow.

This concept was new and confusing to me, so I did what I often do in these cases: I wrote a benchmark.

Let’s benchmark it!

This benchmark is similar to my previous shadow DOM vs style scoping benchmark, with one twist: instead of rendering all “components” in one go, we render each one in its own requestAnimationFrame. This is to simulate a worst-case scenario for React concurrent rendering – where React yields between each component render, allowing the browser to recalculate style and layout.

In this benchmark, I’m rendering 200 “components,” with three kinds of stylesheets: unscoped (i.e. the most unperformant CSS I can think of), scoped-ala-Svelte (i.e. adding classes to every selector), and shadow DOM.
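
Here’s a rough sketch of the “pay-as-you-go” loop that the benchmark simulates – illustrative only, since the real benchmark’s components and selectors are more involved:

// Sketch only: one "component" = one injected <style> plus one DOM subtree
const NUM_COMPONENTS = 200

function renderComponent(i) {
  // a "misbehaving" CSS-in-JS library: inject a new rule per component
  const style = document.createElement('style')
  style.textContent = `.component-${i} .foo > .bar { color: red; }`
  document.head.appendChild(style)

  // then render the component's DOM
  const div = document.createElement('div')
  div.className = `component-${i}`
  div.innerHTML = '<div class="foo"><div class="bar">hello</div></div>'
  document.body.appendChild(div)
}

// Render one component per frame, yielding to the browser in between.
// (The "styles in advance" variant hoists all the style insertions out
// of this loop and does them up front instead.)
let i = 0
requestAnimationFrame(function tick() {
  renderComponent(i++)
  if (i < NUM_COMPONENTS) requestAnimationFrame(tick)
})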

The “unscoped” CSS tells the clearest story:

Screenshot of Chrome DevTools showing style/layout calculation costs steadily increasing over time

In this Chrome trace, you can see that the style calculation costs steadily increase as each component is rendered. This seems to be exactly what Markbåge is talking about:

When you add or remove any CSS rules, you more or less have to reapply all rules that already existed to all nodes that already existed. Not just the changed ones. There are optimizations in browsers but at the end of the day, they don’t really avoid this problem.

In other words: not only are we paying style costs as every component renders, but those costs actually increase over time.

If we batch all of our style insertions before the components render, though, then we pay much lower style costs on each subsequent render:

Screenshot of Chrome DevTools, showing low and roughly consistent style/layout calculation costs over time

To me, this is similar to layout thrashing. The main difference is that, with “classic” layout thrashing, you’re forcing a style/layout recalculation by calling some explicit API like getBoundingClientRect or offsetLeft. Whereas in this case, you’re not explicitly invoking a recalc, but instead implicitly forcing a recalc by yielding to the browser’s normal style/layout rendering loop.

I’ll also note that the second scenario could still be considered “layout thrashing” – the browser is still doing style/layout work on each frame. It’s just doing much less, because we’ve only invalidated the DOM elements and not the CSS rules.

Update: This benchmark does not perfectly simulate how React renders DOM nodes – see below for a slightly tweaked benchmark. The conclusion is still largely the same.

Here are the benchmark results for multiple browsers (200 components, median of 25 samples, 2014 Mac Mini):

Chart data, see table below

Scenario                       Chrome 106   Firefox 106   Safari 16
Unscoped                          20807.3       13589       14958
Unscoped – styles in advance       3392.5        3357        3406
Scoped                             3330          3321        3330
Scoped – styles in advance         3358.9        3333        3339
Shadow DOM                         3366.4        3326        3327

(All times in milliseconds.)

As you can see, injecting the styles in advance is much faster than the pay-as-you-go system: 20.8s vs 3.4s in Chrome (and similar for other browsers).

It also turns out that using scoped CSS mitigates the problem – there is little difference between upfront and per-component style injection. And shadow DOM doesn’t have a concept of “upfront styles” (the styles are naturally scoped and attached to each component), so it benefits accordingly.

Is scoping a panacea?

Note though, that scoping only mitigates the problem. If we increase the number of components, we start to see the same performance degradation:

Screenshot of Chrome DevTools showing style/layout calculation costs steadily getting worse over time, although not as bad as in the other screenshot

Here are the benchmark results for 500 components (skipping “unscoped” this time around – I didn’t want to wait!):

Chart data, see table below

Scenario                       Chrome 106   Firefox 106   Safari 16
Scoped                            12490.6        8972       11059
Scoped – styles in advance         8413.4        8824        8561
Shadow DOM                         8441.6        8949        8695

(All times in milliseconds.)

So even with style scoping, we’re better off injecting the styles in advance. And shadow DOM also performs better than “pay-as-you-go” scoped styles, presumably because it’s a browser-native scoping mechanism (as opposed to relying on the browser’s optimizations for class selectors). The exception is Firefox, which (in a recurring theme) seems to have some impressive optimizations in this area.

Is this something browsers could optimize more? Possibly. I do know that Chromium already weighs some tradeoffs with optimizing for upfront rendering vs re-rendering when stylesheets change. And Firefox seems to perform admirably with whatever CSS we throw at it.

So if this “inject and yield” pattern were prevalent enough on the web, then browsers might be incentivized to target it. But given that React concurrent rendering is somewhat new-ish, and given that the advice from React maintainers is already to batch style insertions, this seems somewhat unlikely to me.

Considering concurrent rendering

Unmentioned in either of the above posts is that this problem largely goes away if you’re not using concurrent rendering. If you do all of your DOM writes in one go, then you can’t layout thrash unless you’re explicitly calling APIs like getBoundingClientRect – which would be something for component authors to avoid, not for the framework to manage.

(Of course, in a long-lived web app, you could still have steadily increasing style costs as new CSS is injected and new components are rendered. But it seems unlikely to be quite as severe as the “rAF-based thrashing” above.)

I assume this, among other reasons, is why many non-React framework authors are skeptical of concurrent rendering. For instance, here’s Evan You (maintainer of Vue):

The pitfall here is not realizing that time slicing can only slice “slow render” induced by the framework – it can’t speed up DOM insertions or CSS layout. Also, staying responsive != fast. The user could end up waiting longer overall due to heavy scheduling overhead.

(Note that “time slicing” was the original name for concurrent rendering.)

Or for another example, here’s Rich Harris (maintainer of Svelte):

It’s not clear to me that [time slicing] is better than just having a framework that doesn’t have these bottlenecks in the first place. The best way to deliver a good user experience is to be extremely fast.

I feel a bit torn on this topic. I’ve seen the benefits of a “time slicing” or “debouncing” approach even when building Svelte components – for instance, both emoji-picker-element and Pinafore use requestIdleCallback (as described in this post) to improve responsiveness when typing into the text inputs. I found this improved the “feel” when typing, especially on a slower device (e.g. using Chrome DevTools’ 6x CPU throttling), even though both were written in Svelte. Svelte’s JavaScript may be fast, but the fastest JavaScript is no JavaScript at all!
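
The rough pattern there (a sketch, not the actual component code – and note that requestIdleCallback needs a setTimeout fallback in Safari) is to defer the expensive work until the browser is idle, so keystrokes stay responsive:

// Sketch only: `textInput` and `updateSearchResults` are hypothetical
let idleHandle
textInput.addEventListener('input', () => {
  if (idleHandle) cancelIdleCallback(idleHandle) // only the latest keystroke matters
  idleHandle = requestIdleCallback(() => {
    updateSearchResults(textInput.value) // the expensive work
  }, { timeout: 500 }) // don't wait forever if the browser never goes idle
})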

That said, I’m not sure if this is something that should be handled by the framework rather than the component author. Yielding to the browser’s rendering loop is very useful in certain perf-sensitive scenarios (like typing into a text input), but in other cases it can worsen the overall performance (as we see with rendering components and their styles).

Is it worth it for the framework to make everything concurrent-capable and try to get the best of both worlds? I’m not so sure. Although I have to admire React for being bold enough to try.

Afterword

After this post was published, Mark Erikson wrote a helpful comment pointing out that inserting DOM nodes is not really something React does during “renders” (at least, in the context of concurrent rendering). So the benchmark would be more accurate if it inserted <style> nodes (as a “misbehaving” CSS-in-JS library would), but not component nodes, before yielding to the browser.

So I modified the benchmark to have a separate mode that delays inserting component DOM nodes until all components have “rendered.” To make it a bit fairer, I also pre-inserted the same number of initial components (but without style) – otherwise, the injected CSS rules wouldn’t have many DOM nodes to match against, so it wouldn’t be terribly representative of a real-world website.

As it turns out, this doesn’t really change the conclusion – we still see gradually increasing style costs in a “layout thrashing” pattern, even when we’re only inserting <style>s between rAFs:

Chrome DevTools screenshot showing gradually increasing style costs over time

The main difference is that, when we front-load the style injections, the layout thrashing goes away entirely, because each rAF tick is neither reading from nor writing to the DOM. Instead, we have one big style cost at the start (when injecting the styles) and another at the end (when injecting the DOM nodes):

Chrome DevTools screenshot showing large purple style blocks at the beginning and end and little JavaScript slices in the middle

(In the above screenshot, the occasional purple slices in the middle are “Hit testing” and “Pre-paint,” not style or layout calculation.)

Note that this is still a teensy bit inaccurate, because now our rAF ticks aren’t doing anything, since this benchmark isn’t actually using React or virtual DOM. In a real-world example, there would be some JavaScript cost to running a React component’s render() function.

Still, we can run the modified benchmark against the various browsers, and see that the overall conclusion has not changed much (200 components, median of 25 samples, 2014 Mac Mini):

Chart data, see table below

Scenario                       Chrome 106   Firefox 106   Safari 16
Unscoped                          26180         17622       17349
Unscoped – styles in advance       3958.3        3663        3945
Scoped                             3394.6        3370        3358
Scoped – styles in advance         3476.7        3374        3368
Shadow DOM                         3378          3370        3408

(All times in milliseconds.)

So the lesson still seems to be: invalidating global CSS rules frequently is a performance anti-pattern. (Even moreso than inserting DOM nodes frequently!)

Afterword 2

I asked Emilio Cobos Álvarez about this, and he gave some great insights from the Firefox perspective:

We definitely have optimizations for that […] but the worst case is indeed “we restyle the whole document again”.

Some of the optimizations Firefox has are quite clever. For example, they optimize appending stylesheets (i.e. appending a new <style> to the <head>) more heavily than inserting (i.e. injecting a <style> between other <style>s) or deleting (i.e. removing a <style>).

Emilio explains why:

Since CSS is source-order dependent, insertions (and removals) cause us to rebuild all the relevant data structures to preserve ordering, while appends can be processed more easily.
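
In DOM terms, the distinction is roughly this (illustrative only):

// Appending a new stylesheet after all existing ones – the fast path:
const appended = document.createElement('style')
appended.textContent = '.foo { color: red; }'
document.head.appendChild(appended)

// Inserting before (or removing) an existing <style> changes source order,
// forcing more of the style data structures to be rebuilt:
const inserted = document.createElement('style')
inserted.textContent = '.bar { color: blue; }'
document.head.insertBefore(inserted, document.head.querySelector('style'))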

Some of this work was apparently done as part of optimizations for Facebook.com back in 2017. I assume Facebook was appending a lot of <style>s, but not inserting or deleting (which makes sense – this is the dominant pattern I see in JavaScript frameworks today).

Firefox also has some specific optimizations for classes, IDs, and tag names (aka “local names”). But despite their best efforts, there are cases where everything needs to be marked as invalid.

So as a web developer, keeping a mental model of “when styles change, everything must be recalculated” is still accurate, at least for the worst case.

SPAs: theory versus practice

I’ve been thinking a lot recently about Single-Page Apps (SPAs) and Multi-Page Apps (MPAs). I’ve been thinking about how MPAs have improved over the years, and where SPAs still have an edge. I’ve been thinking about how complexity creeps into software, and why a developer may choose a more complex but powerful technology at the expense of a simpler but less capable technology.

I think this core dilemma – complexity vs simplicity, capability vs maintainability – is at the heart of a lot of the debates about web app architecture. Unfortunately, these debates are so often tied up in other factors (a kind of web dev culture war, Twitter-stoked conflicts, maybe even a generational gap) that it can be hard to see clearly what the debate is even about.

At the risk of grossly oversimplifying things, I propose that the core of the debate can be summed up by these truisms:

  1. The best SPA is better than the best MPA.
  2. The average SPA is worse than the average MPA.

The first statement should be clear to most seasoned web developers. Show me an MPA, and I can show you how to make it better with JavaScript. Added too much JavaScript? I can show you some clever ways to minimize, defer, and multi-thread that JavaScript. Ran into some bugs, because now you’ve deviated from the browser’s built-in behavior? There are always ways to fix it! You’ve got JavaScript.

Whereas with an MPA, you are delegating some responsibility to the browser. Want to animate navigations between pages? You can’t (yet). Want to avoid the flash of white? You can’t, until Chrome fixes it (and it’s not perfect yet). Want to avoid re-rendering the whole page, when there’s only a small subset that actually needs to change? You can’t; it’s a “full page refresh.”

My second truism may be more controversial than the first. But I think time and experience have shown that, whatever the promises of SPAs, the reality has been less convincing. It’s not hard to find examples of poorly-built SPAs that score badly on a variety of metrics (performance, accessibility, reliability), and which could have been built better and more cheaply as a bog-standard MPA.

Example: subsequent navigations

To illustrate, let’s consider one of the main value propositions of an SPA: making subsequent navigations faster.

Rich Harris recently offered an example of using the SvelteKit website (SPA) compared to the Astro website (MPA), showing that page navigations on the Svelte site were faster.

Now, to be clear, this is a bit of an unfair comparison: the Svelte site is preloading content when you hover over links, so there’s no network call by the time you click. (Nice optimization!) Whereas the Astro site is not using a Service Worker or other offlining – if you throttle to 3G, it’s even slower relative to the Svelte site.

But I totally believe Rich is right! Even with a Service Worker, Astro would have a hard time beating SvelteKit. The amount of DOM being updated here is small and static, and doing the minimal updates in JavaScript should be faster than asking the browser to re-render the full HTML. It’s hard to beat element.innerHTML = '...'.

However, in many ways this site represents the ideal conditions for an SPA navigation: it’s small, it’s lightweight, it’s built by the kind of experts who build their own JavaScript framework, and those experts are also keen to get performance right – since this website is, in part, a showcase for the framework they’re offering. What about real-world websites that aren’t built by JavaScript framework authors?

Anthony Ricaud recently gave a talk (in French – apologies to non-Francophones) where he analyzed the performance of real-world SPAs. In the talk, he asks: What if these sites used standard MPA navigations?

To answer this, he built a proxy that strips the site of its first-party JavaScript (leaving the kinds of ads and trackers that, sadly, many teams are not allowed to forgo), as well as another version of the proxy that doesn’t strip any JavaScript. Then, he scripted WebPageTest to click an internal link, measuring the load times for both versions (on throttled 4G).

So which was faster? Well, out of the three sites he tested, on both mobile (Moto G4) and desktop, the MPA was either just as fast or faster, every time. In some cases, the WebPageTest filmstrips even showed that the MPA version was faster by several seconds. (Note again: these are subsequent navigations.)

On top of that, the MPA sites gave immediate feedback to the user when clicking – showing a loading indicator in the browser chrome. Whereas some of the SPAs didn’t even manage to show a “skeleton” screen before the MPA had already finished loading.

Screenshot from conference talk showing a speaker on the left and a WebPageTest filmstrip on the right. The filmstrip compares two sites: the first takes 5.5 seconds and the second takes 2.5 seconds

Screenshot from Anthony Ricaud’s talk. The SPA version is on top (5.5s), and the MPA version is on bottom (2.5s).

Now, I don’t think this experiment is perfect. As Anthony admits, removing inline <script>s removes some third-party JavaScript as well (the kind that injects itself into the DOM). Also, removing first-party JavaScript removes some non-SPA-related JavaScript that you’d need to make the site interactive, and removing any render-blocking inline <script>s would inherently improve the visual completeness time.

Even with a perfect experiment, there are a lot of variables that could change the outcome for other sites:

  • How fast is the SSR?
  • Is the HTML streamed?
  • How much of the DOM needs to be updated?
  • Is a network request required at all?
  • What JavaScript framework is being used?
  • How fast is the client CPU?
  • Etc.

Still, it’s pretty gobsmacking that JavaScript was slowing these sites down, even in the one case (subsequent navigations) where JavaScript should be making things faster.

Exhausted developers and clever developers

Now, let’s return to my truisms from the start of the post:

  1. The best SPA is better than the best MPA.
  2. The average SPA is worse than the average MPA.

The cause of so much debate, I think, is that two groups of developers may look at this situation, agree on the facts on the ground, but come to two different conclusions:

“The average SPA sucks? Well okay, I should stop building SPAs then. Problem solved.” – Exhausted developer

 

“The average SPA sucks? That’s just because people haven’t tried hard enough! I can think of 10 ways to fix it.” – Clever developer

Let’s call these two archetypes the exhausted developer and the clever developer.

The exhausted developer has had enough with managing the complexity of “modern” web sites and web applications. Too many build tools, too many code paths, too much to think about and maintain. They have JavaScript fatigue. Throw it all away and simplify!

The clever developer is similarly frustrated by the state of modern web development. But they also deeply understand how the web works. So when a tool breaks or a framework does something in a sub-optimal way, it irks them, because they can think of a better way. Why can’t a framework or a tool fix this problem? So they set out to find a new tool, or to build it themselves.

The thing is, I think both of these perspectives are right. Clever developers can always improve upon the status quo. Exhausted developers can always save time and effort by simplifying. And one group can even help the other: for instance, maybe Parcel is approachable for those exhausted by Webpack, but a clever developer had to go and build Parcel first.

Conclusion

The disparity between the best and the average SPA has been around since the birth of SPAs. In the mid-2000s, people wanted to build SPAs because they saw how amazing Gmail was. What they didn’t consider is that Google had a crack team of experts monitoring every possible problem with SPAs, right down to esoteric topics like memory leaks. (Do you have a team like that?)

Ever since then, JavaScript framework and tooling authors have been trying to democratize SPA tooling, bringing us the kinds of optimizations previously only available to the Googles and the Facebooks of the world. Their intentions have been admirable (I would put my own fuite on that pile), but I think it’s fair to say the results have been mixed.

An expert developer can stand up on a conference stage and show off the amazing scores for their site (perfect performance! perfect accessibility! perfect SEO!), and then an excited conference-goer returns to their team, convinces them to use the same tooling, and two years later they’ve built a monstrosity. When this happens enough times, the same conference-goer may start to distrust the next dazzling demo they see.

And yet… the web dev community marches forward. Today I can grab any number of “starter” app toolkits and build something that comes out-of-the-box with code-splitting, Service Workers, tree-shaking, a thousand different little micro-optimizations that I don’t even have to know the names of, because someone else has already thought of it and gift-wrapped it for me. That is a miracle, and we should be grateful for it.

Given enough innovation in this space, it is possible that, someday, the average SPA could be pretty great. If it came batteries-included with proper scroll restoration, focus management, and screen reader announcements, tooling to identify performance problems (including memory leaks), progressive DOM rendering (e.g. Jake Archibald’s hack), and a bunch of other optimizations, it’s possible that developers would fall into the “pit of success” and consistently make SPAs that outclass the equivalent MPA. I remain skeptical that we’ll get there, and even the best SPA would still have problems (complexity, performance on slow clients, etc.), but I can’t fault people for trying.

At the same time, browsers never stop taking the lessons from userland and upstreaming them into the browser itself, giving us more lines of code we can potentially delete. This is why it’s important to periodically re-evaluate the assumptions baked into our tooling.

Today, I think the core dilemma between SPAs and MPAs remains unresolved, and will maybe never be resolved. Both SPAs and MPAs have their strengths and weaknesses, and the right tool for the job will vary with the size and skills of the team and the product they’re trying to build. It will also vary over time, as browsers evolve. The important thing, I think, is to remain open-minded, skeptical, and analytical, and to accept that everything in software development has tradeoffs, and none of those tradeoffs are set in stone.

Style scoping versus shadow DOM: which is fastest?

Update: this post was updated with some new benchmark numbers in October 2022.

Last year, I asked the question: Does shadow DOM improve style performance? I didn’t give a clear answer, so perhaps it’s no surprise that some folks weren’t sure what conclusion to draw.

In this post, I’d like to present a new benchmark that hopefully provides a more solid answer.

TL;DR: My new benchmark largely confirmed my previous research, and shadow DOM comes out as the most consistently performant option. Class-based style scoping slightly beats shadow DOM in some scenarios, but in others it’s much less performant. Firefox, thanks to its multi-threaded style engine, is much faster than Chrome or Safari.

Shadow DOM and style performance

To recap: shadow DOM has some theoretical benefits to style calculation, because it allows the browser to work with a smaller DOM size and smaller CSS rule set. Rather than needing to compare every CSS rule against every DOM node on the page, the browser can work with smaller “sub-DOMs” when calculating style.

However, browsers have a lot of clever optimizations in this area, and userland “style scoping” solutions have emerged (e.g. Vue, Svelte, and CSS Modules) that effectively hook into these optimizations. The way they typically do this is by adding a class or an attribute to the CSS selector: e.g. * { color: red } becomes *.xxx { color: red }, where xxx is a randomly-generated token unique to each component.
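As a rough sketch (this is not the actual Vue or Svelte compiler code), the transform looks something like this, with the same token also stamped onto every element the component renders:

// Append a per-component token to each selector, so the browser can quickly
// rule out rules from other components when matching a given element.
function scopeSelector(selector, token) {
  return selector === '*' ? `*.${token}` : `${selector}.${token}`;
}

scopeSelector('*', 'xxx');      // '*.xxx'
scopeSelector('button', 'xxx'); // 'button.xxx'

Real implementations also have to decide what to do with descendant selectors, which is one place where Vue and Svelte differ (more on that below).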

After crunching the numbers, my post showed that class-based style scoping was actually the overall winner. But shadow DOM wasn’t far behind, and it was the more consistently fast option.

These nuances led to a somewhat mixed reaction. For instance, here’s one common response I saw (paraphrasing):

The fastest option overall is class-based scoped styles, à la Svelte or CSS Modules. So shadow DOM isn’t really that great.

But looking at the same data, you could reach another, totally reasonable, conclusion:

With shadow DOM, the performance stays constant instead of scaling with the size of the DOM or the complexity of the CSS. Shadow DOM allows you to use whatever CSS selectors you want and not worry about performance.

Part of it may have been people reading into the data what they wanted to believe. If you already dislike shadow DOM (or web components in general), then you can read my post and conclude, “Wow, shadow DOM is even more useless than I thought.” Or if you’re a web components fan, then you can read my post and think, “Neat, shadow DOM can improve performance too!” Data is in the eye of the beholder.

To drive this point home, here’s the same data from my post, but presented in a slightly different way:

Chart image, see table below for the same data


This is 1,000 components, 10 rules per component.

Selector performance (ms) Chrome Firefox Safari
Class selectors 58.5 22 56
Attribute selectors 597.1 143 710
Class selectors – shadow DOM 70.6 30 61
Attribute selectors – shadow DOM 71.1 30 81

As you can see, the case you really want to avoid is the second one – bare attribute selectors. Inside of the shadow DOM, though, they’re fine. Class selectors do beat shadow DOM overall, but only by a rounding error.

My post also showed that more complex selectors are consistently fast inside of the shadow DOM, even if they’re much slower at the global level. This is exactly what you would expect, given how shadow DOM works – the real surprise is just that shadow DOM doesn’t handily win every category.

Re-benchmarking

It didn’t sit well with me that my post didn’t draw a firm conclusion one way or the other. So I decided to benchmark it again.

This time, I tried to write a benchmark to simulate a more representative web app. Rather than focusing on individual selectors (ID, class, attribute, etc.), I tried to compare a userland “scoped styles” implementation against shadow DOM.

My new benchmark generates a DOM tree based on the following inputs:

  • Number of “components” (web components are not used, since this benchmark is about shadow DOM exclusively)
  • Elements per component (with a random DOM structure, with some nesting)
  • CSS rules per component (randomly generated, with a mix of tag, class, attribute, :not(), and :nth-child() selectors, and some descendant and compound selectors)
  • Classes per component
  • Attributes per component
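In other words, each benchmark run is described by a small configuration along these lines (the property names are illustrative, not the actual source code; the values shown are the ones used in the Methodology section below):

const benchmarkConfig = {
  numComponents: 1000,        // plain DOM subtrees, not web components
  elementsPerComponent: 10,   // random structure, with some nesting
  rulesPerComponent: 10,      // random mix of tag, class, attribute, :not(), :nth-child() selectors
  classesPerElement: 2,
  attributesPerElement: 2,
  scoping: 'shadow'           // or 'class', 'attribute', 'none'
};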

To find a good representative for “scoped styles,” I chose Vue 3’s implementation. My previous post showed that Vue’s implementation is not as fast as that of Svelte or CSS Modules, since it uses attributes instead of classes, but I found Vue’s code to be easier to integrate. To make things a bit fairer, I added the option to use classes rather than attributes.

One subtlety of Vue’s style scoping is that it does not scope ancestor selectors. For instance:

/* Input */
div div {}

/* Output - Vue */
div div[data-v-xxx] {}

/* Output - Svelte */
div.svelte-xxx div.svelte-xxx {}

(Here is a demo in Vue and a demo in Svelte.)

Technically, Svelte’s implementation is more optimal, not only because it uses classes rather than attributes, but because it can rely on the Bloom filter optimization for ancestor lookups (e.g. :not(div) div.svelte-xxx:not(div) div.svelte-xxx, with .svelte-xxx in the ancestor). However, I kept the Vue implementation because 1) this analysis is relevant to Vue users at least, and 2) I didn’t want to test every possible permutation of “scoped styles.” Adding the “class” optimization is enough for this blog post – perhaps the “ancestor” optimization can come in future work. (Update: this is now covered below.)

Note: In benchmark after benchmark, I’ve seen that class selectors are typically faster than attribute selectors – sometimes by a lot, sometimes by a little. From the web developer’s perspective, it may not be obvious why. Part of it is just browser vendor priorities: for instance, WebKit invented the Bloom filter optimization in 2011, but originally it only applied to tags, classes, and IDs. They expanded it to attributes in 2018, and Chrome and Firefox followed suit in 2021 when I filed these bugs on them. Perhaps something about attributes also makes them intrinsically harder to optimize than classes, but I’m not a browser developer, so I won’t speculate.

Methodology

I ran this benchmark on a 2021 MacBook Pro (M1), running macOS Monterey 12.4. The M1 is perhaps not ideal for this, since it’s a very fast computer, but I used it because it’s the device I had, and it can run all three of Chrome, Firefox, and Safari. This way, I can get comparable numbers on the same hardware.

In the test, I used the following parameters:

Parameter Value
Number of components 1000
Elements per component 10
CSS rules per component 10
Classes per element 2
Attributes per element 2

I chose these values to try to generate a reasonable “real-world” app, while also making the app large enough and interesting enough that we’d actually get some useful data out of the benchmark. My target is less of a “static blog” and more of a “heavyweight SPA.”

There are certainly more inputs I could have added to the benchmark: for instance, DOM depth. As configured, the benchmark generates a DOM with a maximum depth of 29 (measured using this snippet). Incidentally, this is a decent approximation of a real-world app – YouTube measures 28, Reddit 29, and Wikipedia 17. But you could certainly imagine more heavyweight sites with deeper DOM structures, which would tend to spend more time in descendant selectors (outside of shadow DOM, of course – descendant selectors cannot cross shadow boundaries).
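For reference, here’s one way to measure maximum DOM depth – the snippet linked above may differ, and note that this version doesn’t descend into shadow roots:

// Walk every element in the document and count its ancestors.
function maxDomDepth() {
  let max = 0;
  for (const element of document.querySelectorAll('*')) {
    let depth = 0;
    for (let node = element; node; node = node.parentElement) {
      depth++;
    }
    max = Math.max(max, depth);
  }
  return max;
}

maxDomDepth(); // e.g. 28 on YouTube, 29 on Reddit, 17 on Wikipedia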

For each measurement, I took the median of 5 runs. I didn’t bother to refresh the page between each run, because it didn’t seem to make a big difference. (The relevant DOM was being blown away every time.) I also didn’t randomize the stylesheets, because the browsers didn’t seem to be doing any caching that would require randomization. (Browsers have a cache for stylesheet parsing, as I discussed in this post, but not for style calculation, insofar as it matters for this benchmark anyway.)

Update: I realized this comment was a bit blasé, so I re-ran the benchmark with a fresh browser session between each sample, just to make sure the browser cache wasn’t affecting the numbers. You can find those numbers at the end of the post. (Spoiler: no big change.)

Although the benchmark has some randomness, I used random-seedable with a consistent seed to ensure reproducible results. (Not that the randomness was enough to really change the numbers much, but I’m a stickler for details.)

The benchmark uses a requestPostAnimationFrame polyfill to measure style/layout/paint performance (see this post for details). To focus on style performance only, a DOM structure with only absolute positioning is used, which minimizes the time spent in layout and paint.
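Roughly, the measurement loop looks like this. It’s a sketch rather than the benchmark’s exact code, but it shows the underlying trick: a task queued from inside requestAnimationFrame runs after the frame’s style, layout, and paint work has finished:

// Fires the callback after the next frame's style/layout/paint work.
function requestPostAnimationFrame(callback) {
  requestAnimationFrame(() => {
    setTimeout(callback, 0); // tasks queued inside rAF run after rendering
  });
}

function measureStyleCost(injectComponentsAndStyles) {
  performance.mark('start');
  injectComponentsAndStyles(); // build the DOM and insert the <style>s under test
  requestPostAnimationFrame(() => {
    // Shows up as the "total" User Timing in the DevTools screenshot below
    performance.measure('total', 'start');
  });
}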

And just to prove that the benchmark is actually measuring what I think it’s measuring, here’s a screenshot of the Chrome DevTools Performance tab:

Screenshot of Chrome DevTools showing a large amount of time taken up by the User Timing called "total" with most of that containing a time slice called "Recalculate style"

Note that the measured time (“total”) is mostly taken up by “Recalculate Style.”

Results

When discussing the results, it’s much simpler to go browser-by-browser, because each one has different quirks.

One of the things I like about analyzing style performance is that I see massive differences between browsers. It’s one of those areas of browser performance that seems really unsettled, with lots of work left to do.

That is… unless you’re Firefox. I’m going to start off with Firefox, because it’s the biggest outlier out of the three major browser engines.

Firefox

Firefox’s Stylo engine is fast. Like, really fast. Like, so fast that, if every browser were like Firefox, there would be little point in discussing style performance, because it would be a bit like arguing over the fastest kind of for-loop in JavaScript. (I.e., interesting minutia, but irrelevant except for the most extreme cases.)

In almost every style calculation benchmark I’ve seen over the past five years, Firefox smokes every other browser engine to the point where it’s really in a class of its own. Whereas other browsers may take over 1,000ms in a given scenario, Firefox will take ~100ms for the same scenario on the same hardware.

So keep in mind that, with Firefox, we’re going to be talking about really small numbers. And the differences between them are going to be even smaller. But here they are:

Chart data, see details in table below

Scenario Firefox 101
Scoping – classes 30
Scoping – attributes 38
Shadow DOM 26
Unscoped 114

Note that, in this benchmark, the first three bars are measuring roughly the same thing – you end up with the same DOM with the same styles. The fourth case is a bit different – all the styles are purely global, with no scoping via classes or attributes. It’s mostly there as a comparison point.

My takeaway from the Firefox data is that scoping with either classes, attributes, or shadow DOM is fine – they’re all pretty fast. And as I mentioned, Firefox is quite fast overall. As we move on to other browsers, you’ll see how the performance numbers get much more varied.

Chrome

The first thing you should notice about Chrome’s data is how much higher the y-axis is compared to Firefox. With Firefox, we were talking about ~100ms at the worst, whereas now with Chrome, we’re talking about an order of magnitude higher: ~1,000ms. (Don’t feel bad for Chrome – the Safari numbers will look pretty similar.)

Chart data, see details in table below

Scenario Chrome 102
Scoping – classes 357
Scoping – attributes 614
Shadow DOM 49
Unscoped 1022

Initially, the Chrome data tells a pretty simple story: shadow DOM is clearly the fastest, followed by style scoping with classes, followed by style scoping with attributes, followed by unscoped CSS. So the message is simple: use shadow DOM, or if you can’t, use classes instead of attributes for scoping.

I noticed something interesting with Chrome, though: the performance numbers are vastly different for these two cases:

  • 1,000 components: insert 1,000 different <style>s into the <head>
  • 1,000 components: concatenate those styles into one big <style>
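Here’s a sketch of the two strategies (the helper names are hypothetical; the benchmark’s actual code may differ):

// Strategy 1: one <style> per component
function insertSeparateStyles(componentCssList) {
  for (const css of componentCssList) {
    const style = document.createElement('style');
    style.textContent = css;
    document.head.appendChild(style);
  }
}

// Strategy 2: concatenate everything into a single <style>
function insertConcatenatedStyle(componentCssList) {
  const style = document.createElement('style');
  style.textContent = componentCssList.join('\n');
  document.head.appendChild(style);
}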

As it turns out, this simple optimization greatly improves the Chrome numbers:

Chart data, see details in table below

Scenario Chrome 102 – separate styles Chrome 102 – concatenated
Classes 357 48
Attributes 614 43

When I first saw these numbers, I was confused. I could understand this optimization in terms of reducing the cost of DOM insertions. But we’re talking about style calculation – not DOM API performance. In theory, it shouldn’t matter whether there are 1,000 stylesheets or one big stylesheet. And indeed, Firefox and Safari show no difference between the two:

Chart data, see details in table below

Scenario Firefox 101 – separate styles Firefox 101 – concatenated
Classes 30 29
Attributes 38 38

Chart data, see details in table below

Scenario Safari 15.5 – separate styles Safari 15.5 – concatenated
Classes 75 73
Attributes 812 820

This behavior was curious enough that I filed a bug on Chromium. According to the Chromium engineer who responded (thank you!), this is because of a design decision to trade off some initial performance in favor of incremental performance when stylesheets are modified or added. (My benchmark is a bit unfair to Chrome, since it only measures the initial calculation. A good idea for a future benchmark!)

This is actually a pretty interesting data point for JavaScript framework and bundler authors. It seems that, for Chromium anyway, the ideal technique is to concatenate stylesheets similarly to how JavaScript bundlers do code-splitting – i.e. trying to concatenate as much as possible, while still splitting in some cases to optimize for caching across routes. (Or you could go full inline and just put one big <style> on every page.) Keep in mind, though, that this is a peculiarity of Chromium’s current implementation, and it could go away at any moment if Chromium decides to change it.

In terms of the benchmark, though, it’s not clear to me what to do with this data. You might imagine that it’s a simple optimization for a JavaScript framework (or meta-framework) to just concatenate all the styles together, but it’s not always so straightforward. When a component is mounted, it may call getComputedStyle() on its own DOM nodes, so batching up all the style insertions until after a microtask is not really feasible. Some meta-frameworks (such as Nuxt and SvelteKit) leverage a bundler to concatenate the styles and insert them before the component is mounted, but it feels a bit unfair to depend on that for the benchmark.

To me, this is one of the core advantages of shadow DOM – you don’t have to worry if your bundler is configured correctly or if your JavaScript framework uses the right kind of style scoping. Shadow DOM is just performant, all of the time, full stop. That said, here is the Chrome comparison data with the concatenation optimization applied:

Chart data, see details in table below

Scenario Chrome 102 (with concatenation optimization)
Scoping – classes 48
Scoping – attributes 43
Shadow DOM 49
Unscoped 1022

The first three are close enough that I think it’s fair to say that all three scoping methods (class, attribute, and shadow DOM) are fast enough.

Note: You may wonder if Constructable Stylesheets would have an impact here. I tried a modified version of the benchmark that uses these, and didn’t observe any difference – Chrome showed the same behavior for concatenation vs splitting. This makes sense, as none of the styles are duplicated, which is the main use case Constructable Stylesheets are designed for. I have found elsewhere, though, that Constructable Stylesheets are more performant than <style> tags in terms of DOM API performance, if not style calculation performance (e.g. see here, here, and here).

Safari

In our final tour of browsers, we arrive at Safari:

Chart data, see details in table below

Scenario Safari 15.5
Scoping – classes 75
Scoping – attributes 812
Shadow DOM 94
Unscoped 840

To me, the Safari data is the easiest to reason about. Class scoping is fast, shadow DOM is fast, and unscoped CSS is slow. The one surprise is just how slow attribute selectors are compared to class selectors. Maybe WebKit has some more optimizations to do in this space – compared to Chrome and Firefox, attributes are just a much bigger performance cliff relative to classes.

This is another good example of why class scoping is superior to attribute scoping. It’s faster in all the engines, but the difference is especially stark in Safari. (Or you could use shadow DOM and not worry about it at all.)

Update: shortly after this post was published, WebKit made an optimization to attribute selectors. This seems to eliminate the perf cliff: in Safari Technology Preview 152 (Safari 16.0, WebKit 17615.1.2.3), the benchmark time for attributes drops to 77ms, which is only marginally slower than classes at 74ms (taking the median of 15 samples).

Conclusion

Performance shouldn’t be the main reason you choose a technology like scoped styles or shadow DOM. You should choose it because it fits well with your development paradigm, it works with your framework of choice, etc. Style performance usually isn’t the biggest bottleneck in a web application, although if you have a lot of CSS or a large DOM size, then you may be surprised by the amount of “Recalculate Style” costs in your next performance trace.

One can also hope that someday browsers will advance enough that style calculation becomes less of a concern. As I mentioned before, Stylo exists, it’s very good, and other browsers are free to borrow its ideas for their own engines. If every browser were as fast as Firefox, I wouldn’t have a lot of material for this blog post.

Chart data, see details in table below

This is the same data presented in this post, but on a single chart. Just notice how much Firefox stands out from the other browsers.

Scenario Chrome 102 Firefox 101 Safari 15.5
Scoping – classes 357 30 75
Scoping – attributes 614 38 812
Shadow DOM 49 26 94
Unscoped 1022 114 840
Scoping – classes – concatenated 48 29 73
Scoping – attributes – concatenated 43 38 820

For those who dislike shadow DOM, there is also a burgeoning proposal in the CSS Working Group for style scoping. If this proposal were adopted, it could provide a less intrusive browser-native scoping mechanism than shadow DOM, similar to the abandoned <style scoped> proposal. I’m not a browser developer, but based on my reading of the spec, I don’t see why it couldn’t offer the same performance benefits we see with shadow DOM.

In any case, I hope this blog post was interesting, and helped shine light on an odd and somewhat under-explored space in web performance. Here is the benchmark source code and a live demo in case you’d like to poke around.

Thanks to Alex Russell and Thomas Steiner for feedback on a draft of this blog post.

Afterword – more data

Updated June 23, 2022

After writing this post, I realized I should take my own advice and automate the benchmark so that I could have more confidence in the numbers (and make it easier for others to reproduce).

So, using Tachometer, I re-ran the benchmark, taking the median of 25 samples, where each sample uses a fresh browser session. Here are the results:

Chart data, see details in table below

Scenario Chrome 102 Firefox 101 Safari 15.5
Scoping – classes 277.1 45 80
Scoping – attributes 418.8 54 802
Shadow DOM 56.8 67 82
Unscoped 820.4 190 857
Scoping – classes – concatenated 44.3 42 80
Scoping – attributes – concatenated 44.5 51 802
Unscoped – concatenated 251.3 167 865

As you can see, the overall conclusion of my blog post doesn’t change, although the numbers have shifted slightly in absolute terms.

I also added “Unscoped – concatenated” as a category, because I realized that the “Unscoped” scenario would benefit from the concatenation optimization as well (in Chrome, at least). It’s interesting to see how much of the perf win is coming from concatenation, and how much is coming from scoping.

If you’d like to see the raw numbers from this benchmark, you can download them here.

Second afterword – even more data

Updated June 25, 2022

You may wonder how much Firefox’s Stylo engine is benefiting from the 10 cores in that 2021 MacBook Pro. So I unearthed my old 2014 Mac Mini, which has only 2 cores but (surprisingly) can still run macOS Monterey. Here are the results:

Chart data, see details in table below

Scenario Chrome 102 Firefox 101 Safari 15.5
Scoping – classes 717.4 107 187
Scoping – attributes 1069.5 162 2853
Shadow DOM 227.7 117 233
Unscoped 2674.5 452 3132
Scoping – classes – concatenated 189.3 104 188
Scoping – attributes – concatenated 191.9 159 2826
Unscoped – concatenated 865.8 422 3148

(Again, this is the median of 25 samples. Raw data.)

Amazingly, Firefox seems to be doing even better here relative to the other browsers. For “Unscoped,” it’s 14.4% of the Safari number (vs 22.2% on the MacBook), and 16.9% of the Chrome number (vs 23.2% on the MacBook). Whatever Stylo is doing, it’s certainly impressive.

Third update – scoping strategies

Updated October 8, 2022

I was curious about which kind of scoping strategy (e.g. Svelte-style or Vue-style) performed best in the benchmark. So I updated the benchmark to generate three “scoped selector” styles:

  1. Full selector: à la Svelte, every part of the selector has a class or attribute added (e.g. div div becomes div.xyz div.xyz)
  2. Right-hand side (RHS): à la Vue, only the right-hand side selector is scoped with a class or attribute (e.g. div div becomes div div.xyz)
  3. Tag prefix: à la Enhance, the tag name of the component is prefixed (e.g. div div becomes my-component div div)

Here are the results, taking the median of 25 iterations on a 2014 Mac Mini (raw data):

Chart data, see table below

Same chart with “Unscoped” removed

Chart data, see table below

Scenario Chrome 106 Firefox 105 Safari 16
Shadow DOM 237.1 120 249
Scoping – classes – RHS 643.1 110 190
Scoping – classes – full 644.1 111 193
Scoping – attributes – RHS 954.3 152 200
Scoping – attributes – full 964 146 204
Scoping – tag prefix 1667.9 163 316
Unscoped 9767.5 3436 6829

Note that this version of the benchmark is slightly different from the previous one – I wanted to cover more selector styles, so I changed how the source CSS is generated to include more pseudo-classes in the ancestor position (e.g. :nth-child(2) div). This is why the “unscoped” numbers are higher than before.

My first takeaway is that Safari 16 has largely fixed the problem with attribute selectors – they are now roughly the same as class selectors. (This optimization seems to be the reason.)

In Firefox, classes are still slightly faster than attributes. I actually reached out to Emilio Cobos Álvarez about this, and he explained that, although Firefox did make an optimization to attribute selectors last year (prompted by my previous blog post), class selectors still have “a more micro-optimized code path.” To be fair, though, the difference is not enormous.

In Chrome, class selectors comfortably outperform attribute selectors, and the tag prefix is further behind. Note though, that these are the “unconcatenated” numbers – when applying the concatenation optimization, all the numbers decrease for Chrome:

Chart data, see table below

Same chart with “Unscoped” removed

Chart data, see table below

Scenario Chrome 106 Firefox 105 Safari 16
Shadow DOM 237.1 120 249
Scoping – classes – RHS – concatenated 182 107 192
Scoping – classes – full – concatenated 183.6 107 190
Scoping – attributes – RHS – concatenated 185.8 148 198
Scoping – attributes – full – concatenated 187.1 142 204
Scoping – tag prefix – concatenated 288.7 159 315
Unscoped – concatenated 6476.3 3526 6882

With concatenation, the difference between classes and attributes is largely erased in Chrome. As before, concatenation has little to no impact on Firefox or Safari.

In terms of which scoping strategy is fastest, overall the tag prefix seems to be the slowest, and classes are faster than attributes. Between “full” selector scoping and RHS scoping, there does not seem to be a huge difference. And overall, any scoping strategy is better than unscoped styles. (Although do keep in mind this is a microbenchmark, and some of the selectors it generates are a bit tortured and elaborate, e.g. :not(.foo) :nth-child(2):not(.bar). In a real website, the difference would probably be less pronounced.)

I’ll also note that the more work I do in this space, the less my work seems to matter – which is a good thing! Between blogging and filing bugs on browsers, I seem to have racked up a decent body count of browser optimizations. (Not that I can really take credit; all I did was nerd-snipe the relevant browser engineers.) Assuming Chromium fixes the concatenation perf cliff, there won’t be much to say except “use some kind of CSS scoping strategy; they’re all pretty good.”

Dialogs and shadow DOM: can we make it accessible?

Last year, I wrote about managing focus in the shadow DOM, and in particular about modal dialogs. Since the <dialog> element has now shipped in all browsers, and the inert attribute is starting to land too, I figured it would be a good time to take another look at getting dialogs to play nicely with shadow DOM.

This post is going to get pretty technical, especially when it comes to the nitty-gritty details of accessibility and web standards. If you’re into that, then buckle up! The ride may be a bit bumpy.

Quick recap

Shadow DOM is weird. On paper, it doesn’t actually change what you can do in the DOM – with open mode, at least, you can access any element on the page that you want. In practice, though, shadow DOM upends a lot of web developer expectations about how the DOM works, and makes things much harder.

Image of Lisa Simpson in front of a sign saying "Keep out. Or enter, I'm a sign not a cop."

I credit Brian Kardell for this description of open shadow DOM, which is maybe the most perfect distillation of how it actually works.

Note: Shadow DOM has two modes: open and closed. Closed mode is a lot more restrictive, but it’s less common – the majority of web component frameworks use open by default (e.g. Angular, Fast, Lit, LWC, Remount, Stencil, Svelte, Vue). Somewhat surprisingly, though, open mode is only 3 times as popular as closed mode, according to Chrome Platform Status (9.9% vs 3.5%).

For accessibility reasons, modal dialogs need to implement a focus trap. However, the DOM doesn’t have an API for “give me all the elements on the page that the user can Tab through.” So web developers came up with creative solutions, most of which amount to:

dialog.querySelectorAll('button, input, a[href], ...')

Unfortunately this is the exact thing that doesn’t work in the shadow DOM. querySelectorAll only grabs elements in the current document or shadow root; it doesn’t deeply traverse.

Like a lot of things with shadow DOM, there is a workaround, but it requires some gymnastics. These gymnastics are hard, and have a complexity and (probably) performance cost. So a lot of off-the-shelf modal dialogs don’t handle shadow DOM properly (e.g. a11y-dialog does not).
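To give a flavor of those gymnastics, here’s a rough sketch of the deep traversal required. This is not a11y-dialog’s code, and a production version would also need to handle slots, tabindex order, visibility, and proper document order:

const TABBABLE_SELECTOR = 'button, input, select, textarea, a[href], [tabindex]';

// Collect candidate tabbable elements, descending into open shadow roots.
// Closed and user-agent shadow roots remain invisible to this approach.
function getTabbableElements(root) {
  const results = [...root.querySelectorAll(TABBABLE_SELECTOR)];
  for (const element of root.querySelectorAll('*')) {
    if (element.shadowRoot) {
      results.push(...getTabbableElements(element.shadowRoot));
    }
  }
  return results;
}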

Note: My goal here isn’t to criticize a11y-dialog. I think it’s one of the best dialog implementations out there. So if even a11y-dialog doesn’t support shadow DOM, you can imagine a lot of other dialog implementations probably don’t, either.

A constructive dialog

“But what about <dialog>?”, you might ask. “The dang thing is called <dialog>; can’t we just use that?”

If you had asked me a few years ago, I would have pointed you to Scott O’Hara’s extensive blog post on the subject, and said that <dialog> had too many accessibility gotchas to be a practical solution.

If you asked me today, I would again point you to the same blog post. But this time, there is a very helpful 2022 update, where Scott basically says that <dialog> has come a long way, so maybe it’s time to give it a second chance. (For instance, the issue with returning focus to the previously-focused element is now fixed, and the need for a polyfill is much reduced.)

Note: One potential issue with <dialog>, mentioned in Rob Levin’s recent post on the topic, is that clicking outside of the dialog should close it. This has been proposed for the <dialog> element, but the WAI ARIA Authoring Practices Guide doesn’t actually stipulate this, so it seems like optional behavior to me.

To be clear: <dialog> still doesn’t give you 100% of what you’d need to implement a dialog (e.g. you’d need to lock the background scroll), and there are still some lingering discussions about how to handle initial focus. For that reason, Scott still recommends just using a battle-tested library like a11y-dialog.

As always, though, shadow DOM makes things more complicated. And in this case, <dialog> actually has some compelling superpowers:

  1. It automatically limits focus to the dialog, with correct Tab order, even in shadow DOM.
  2. It works with closed shadow roots as well, which is impossible in userland solutions.
  3. It also works with user-agent shadow roots. (E.g. you can Tab through the buttons in a <video controls> or <audio controls>.) This is also impossible in userland, since these elements function effectively like closed shadow roots.
  4. It correctly returns focus to the previously-focused element, even if that element is in a closed shadow root. (This is possible in userland, but you’d need an API contract with the closed-shadow component.)
  5. The Esc key correctly closes the modal, even if the focus is in a user-agent shadow root (e.g. the pause button is focused when you press Esc). This is also not possible in userland.
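Using it is also refreshingly simple – here’s a minimal sketch (with hypothetical IDs), which glosses over the caveats below:

const dialog = document.querySelector('#my-dialog'); // a <dialog> element

document.querySelector('#open-my-dialog').addEventListener('click', () => {
  // showModal() makes the rest of the page inert, limits Tab to the dialog
  // (including elements inside shadow roots), and lets Esc close it.
  dialog.showModal();
});

// When the dialog closes (via Esc, a form with method="dialog", or
// dialog.close()), focus automatically returns to the previously focused
// element – even one inside a closed shadow root.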

Here is a demo:

Note: Eagle-eyed readers may wonder: what if the first tabbable element in the dialog is in a shadow root? Does it correctly get focus? The short answer is: yes in Chrome, no in Firefox or Safari (demo). Let’s hope those browsers fix it soon.

So should everybody just switch over to <dialog>? Not so fast: it actually doesn’t perfectly handle focus, per the WAI ARIA Authoring Practices Guide (APG), because it allows focus to escape to the browser chrome. Here’s what I mean:

  • You reach the last tabbable element in the dialog and press Tab.
    • Correct: focus moves to the first tabbable element in the dialog.
    • Incorrect (<dialog>): focus goes to the URL bar or somewhere else in the browser chrome.
  • You reach the first tabbable element in the dialog and press Shift+Tab.
    • Correct: focus moves to the last tabbable element in the dialog.
    • Incorrect (<dialog>): focus goes to the URL bar or somewhere else in the browser chrome.

This may seem like a really subtle difference, but the consensus of accessibility experts seems to be that the WAI ARIA APG is correct, and <dialog> is wrong.

Note: I say “consensus,” but… there isn’t perfect consensus. You can read this comment from James Teh or Scott O’Hara’s aforementioned post (“This is good behavior, not a bug”) for dissenting opinions. In any case, the “leaky” focus trap conflicts with the WAI ARIA APG and the way userland dialogs have traditionally worked.

So we’ve reached (yet another!) tough decision with <dialog>. Do we accept <dialog>, because at least it gets shadow DOM right, even though it gets some other stuff wrong? Do we try to build our own thing? Do we quit web development entirely and go live the bucolic life of a potato farmer?

Inert matter

While I was puzzling over this recently, it occurred to me that inert may be a step forward to solving this problem. For those unfamiliar, inert is an attribute that can be used to mark sections of the DOM as “inert,” i.e. untabbable and invisible to screen readers:

<main inert></main>
<div role="dialog"></div>
<footer inert></footer>

In this way, you could mark everything except the dialog as inert, and focus would be trapped inside the dialog.
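Here’s a minimal sketch of toggling it, based on the markup above (a real implementation would also need to handle scroll locking, restoring focus on close, and so on):

const main = document.querySelector('main');
const footer = document.querySelector('footer');
const modal = document.querySelector('[role="dialog"]');

function openModal() {
  main.inert = true;
  footer.inert = true;
  modal.hidden = false;
  // Move focus into the dialog (finding the "right" element is its own problem)
  modal.querySelector('button, input, a[href]')?.focus();
}

function closeModal() {
  main.inert = false;
  footer.inert = false;
  modal.hidden = true;
}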

Here is a demo:

As it turns out, this works perfectly for tabbing through elements in the shadow DOM, just like <dialog>! Unfortunately, it has exactly the same problem with focus escaping to the browser chrome. This is no accident: the behavior of <dialog> is defined in terms of inert.

Can we still solve this, though? Unfortunately, I’m not sure it’s possible. I tried a few different techniques, such as listening for Tab events and checking if the activeElement has moved outside of the modal, but the problem is that you still, at some point, need to figure out what the “first” and “last” tabbable elements in the dialog are. To do this, you need to traverse the DOM, which means (at the very least) traversing open shadow roots, which doesn’t work for closed or user-agent shadow roots. And furthermore, it involves a lot of extra work for the web developer, who has probably lost focus at this point and is daydreaming about that nice, quiet potato farm.
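For the curious, here’s the shape of one of those attempts, reusing the getTabbableElements sketch from earlier. As noted, it doesn’t really solve the problem, since the traversal can’t see into closed or user-agent shadow roots:

modal.addEventListener('keydown', (event) => {
  if (event.key !== 'Tab') return;
  // Let the browser move focus, then check where it landed
  requestAnimationFrame(() => {
    if (!modal.contains(document.activeElement)) {
      // Focus escaped the dialog. Wrapping it around requires knowing the
      // "first" and "last" tabbable elements, which brings us right back to
      // traversing shadow roots.
      const tabbable = getTabbableElements(modal);
      (event.shiftKey ? tabbable[tabbable.length - 1] : tabbable[0])?.focus();
    }
  });
});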

Note: inert also, sadly, does not help with the Esc key in user-agent shadow roots, or returning focus to closed shadow roots when the dialog is closed, or setting initial focus on an element in a closed shadow root. These are <dialog>-only superpowers. Not that you needed any extra convincing.

Conclusion

Until the spec and browser issues have been ironed out (e.g. browsers change their behavior so that focus doesn’t escape to the browser chrome, or they give us some entirely different “focus trap” primitive), I can see two reasonable options:

  1. Use something like a11y-dialog, and don’t use shadow DOM or user-agent shadow components like <video controls> or <audio controls>. (Or do some nasty hacks to make it partially work.)
  2. Use shadow DOM, but don’t bother solving the “focus escapes to the browser chrome” problem. Use <dialog> (or a library built on top of it) and leave it at that.

For my readers who were hoping that I’d drop some triumphant “just npm install nolans-cool-dialog and it will work,” I’m sorry to disappoint you. Browsers are still rough around the edges in this area, and there aren’t a lot of great options. Maybe there is some mad-science way to actually solve this, but even that would likely involve a lot of complexity, so it wouldn’t be ideal.

Alternatively, maybe some of you are thinking that I’m focusing too much on closed and user-agent shadow roots. As long as you’re only using open shadow DOM (which, recall, is like the sign that says “I’m a sign, not a cop”), you can do whatever you want. So there’s no problem, right?

Personally, though, I like using <video controls> and <audio controls> (why ship a bunch of JavaScript to do something the browser already does?). And furthermore, I find it odd that if you put a <video controls> inside a <dialog>, you end up with something that’s impossible to make accessible per the WAI ARIA APG. (Is it too much to ask for a little internal consistency in the web platform?)

In any case, I hope this blog post was helpful for others tinkering around with the same problems. I’ll keep an eye on the browsers and standards space, and update this post if anything promising emerges.