Archive for September, 2024

Web components are okay

Every so often, the web development community gets into a tizzy about something, usually web components. I find these fights tiresome, but I also see them as a good opportunity to reach across “the great divide” and try to find common ground rather than another opportunity to dunk on each other.

Ryan Carniato started the latest round with “Web Components Are Not the Future”. Cory LaViska followed up with “Web Components Are Not the Future — They’re the Present”. I’m not here to escalate, though – this is a peace mission.

I’ve been an avid follower of Ryan Carniato’s work for years. This post and the steady climb of LWC on the js-framework-benchmark demonstrate that I’ve been paying attention to what he has to say, especially about performance and framework design. The guy has single-handedly done more to move the web framework ecosystem forward in the past 5 years than anyone else I can think of.

That said, I also heavily work with web components, both on the framework side and as a component author. I’ve participated in the Web Components Community Group and Accessibility Object Model group, and I’ve written extensively on shadow DOM, custom elements, and web component accessibility in this blog.

So obviously I’m going to be interested when I see a post from Ryan Carniato on web components. And it’s a thought-provoking post! But I also think he misses the mark on a few things. So let’s dive in:

Performance

[T]he fundamental problem with Web Components is that they are built on Custom Elements.

[…] [E]very interface needs to go through the DOM. And of course this has a performance overhead.

This is completely true. If your goal is to build the absolute fastest framework you can, then you want to minimize DOM nodes wherever possible. This means that web components are off the table.

I fully believe that Ryan knows how to build the fastest possible framework. Again, the results for Solid on the js-framework-benchmark are a testament to this.

That said – and I might alienate some of my friends in the web performance community by saying this – performance isn’t everything. There are other tradeoffs in software development, such as maintainability, security, usability, and accessibility. Sometimes these things come into conflict.

To make a silly example: I could make DOM rendering slightly faster by never rendering any aria-* attributes. But of course sometimes you have to render aria-* attributes to make your interface accessible, and nobody would argue that a couple milliseconds are worth excluding screen reader users.

To make an even sillier example: you can improve performance by using for loops instead of .forEach(). Or using var instead of const/let. Typically, though, these kinds of micro-optimizations are just not worth it.
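To make it concrete, here's roughly what that kind of micro-optimization looks like (an illustrative sketch – the array is made up, and the difference is rarely measurable in real code):

// Idiomatic, marginally slower: one callback invocation per item.
const items = ['a', 'b', 'c']; // illustrative data
items.forEach((item) => console.log(item));

// Micro-optimized, marginally faster: a plain loop with var.
for (var i = 0; i < items.length; i++) {
  console.log(items[i]);
}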

When I see this kind of stuff, I’m reminded of speedrunners trying to shave milliseconds off a 5-minute Mario speedrun using precise inputs and obscure glitches. If that’s your goal, then by all means: backwards long jump across the entire stage instead of just having Mario run forward. I’ll continue to be impressed by what you’re doing, but it’s just not for me.

Minimizing the use of DOM nodes is a classic optimization – this is the main idea behind virtualization. That said, sometimes you can get away with simpler approaches, even if it’s not the absolute fastest option. I’d put “components as elements” in the same bucket – yes it’s sub-optimal, but optimal is not always the goal.

Similarly, I’ve long argued that it’s fine for custom elements to use different frameworks. Sometimes you just need to gradually migrate from Framework A to Framework B. Or you have to compose some micro-frontends together. Nobody would argue that this is the fastest possible interface, but fine – sometimes tradeoffs have to be made.

Having worked for a long time in the web performance space, I find that the lowest-hanging fruit for performance is usually something dumb like layout thrashing, network waterfalls, unnecessary re-renders, etc. Framework authors like myself love to play performance golf with things like the js-framework-benchmark, and it’s a great flex, but it just doesn’t usually matter in the real world.

That said, if it does matter to you – if you’re building for resource-constrained environments where every millisecond counts: great! Ditch web components! I will geek out and cheer for every speedrunning record you break.

The cost of standards

More code to ship and more code to execute to check these edge cases. It’s a hidden tax that impacts everyone.

Here’s where I completely get off the train from Ryan’s argument. As a framework author, I just don’t find that it’s that much effort to support web components. Detecting props versus attributes is a simple prop in element check. Outputting web components is indeed painful, but hey – nobody said you have to do it. Vue 2 got by with a standalone web component wrapper library, and Remount exists without any input from the React team.
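To illustrate the kind of prop-versus-attribute check involved – this is just a sketch, not any particular framework’s implementation, and the element and property names are made up:

// Deciding between a property and an attribute when rendering to a
// (possibly custom) element: if the element exposes the key as a property,
// set it directly (which preserves objects, arrays, etc.); otherwise fall
// back to a plain attribute.
function setPropOrAttribute(element, key, value) {
  if (key in element) {
    element[key] = value;
  } else {
    element.setAttribute(key, value);
  }
}

// Usage – works the same for built-ins and custom elements:
const el = document.createElement('fancy-button'); // hypothetical custom element
setPropOrAttribute(el, 'label', 'Save');             // property, if the element defines one
setPropOrAttribute(el, 'data-analytics', 'cta');     // plain attribute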

As a framework author, if you want to freeze your thinking in 2011 and code as if nothing new had been added to the web platform since then, you absolutely can! And you can still write a great framework! This is the beauty of the web. jQuery v1 is still chugging away on plenty of websites, and in fact it gets faster and faster with every new browser release, since browser perf teams are often targeting whatever patterns web developers used ~5 years ago in an endless cat-and-mouse game.

But assuming you don’t want to freeze your brain in amber, then yes: you do need to account for new stuff added to the web platform. But this is also true of things like Symbols, Proxies, Promises, etc. I just see it as part of the job, and I’m not particularly bothered, since I know that whatever I write will still work in 10 years, thanks to the web’s backwards compatibility guarantees.

Furthermore, I get the impression that a wide swath of the web development community does not care about web components, does not want to support them, and you probably couldn’t convince them to. And that’s okay! The web is a big tent, and you can build entire UIs based on web components, or with a sprinkling of HTML web components, or with none at all. If you want to declare your framework a “no web components” zone, then you can do that and still get plenty of avid fans.

That said, Ryan is right that, by blessing something as “the standard,” it inherently becomes a mental default that needs to be grappled with. Component authors must decide whether their <slot>s should work like native <slot>s. That’s true, but again, you could say this about a lot of new browser APIs. You have to decide whether IntersectionObserver or <img loading="lazy"> is worth it, or whether you’d rather write your own abstraction. That’s fine! At least we have a common point of reference, a shared vocabulary to compare and contrast things.

And just because something is a web standard doesn’t mean you have to use it. For the longest time, the classic joke about JavaScript: The Good Parts was how small it is compared to JavaScript: The Definitive Guide. The web is littered with deprecated (but still supported) APIs like document.domain, with, and <frame>s. Take it or leave it!

Conclusion

[I]n a sense there is nothing wrong with Web Components as they are only able to be what they are. It’s the promise that they are something that they aren’t which is so dangerous.

Here I totally agree with Ryan. As I’ve said before, web components are bad at a lot of things – Server-Side Rendering, accessibility, even interop in some cases. They’re good at plenty of things, but replacing all JavaScript frameworks is not one of them. Maybe we can check back in 10 years, but for now, there are still cases where React, Solid, Svelte, and friends shine and web components flounder.

Ryan is making an eminently reasonable point here, as he does throughout the rest of the post, and on its own it’s a good contribution to the discourse. The title is a bit inflammatory, which leads people to wield it as a bludgeon against their perceived enemies on social media (likely without reading the piece), but this is something I blame on social media, not on Ryan.

Again, I find these debates a bit tiresome. I think the fundamental issue, as I’ve previously said, is that people are talking past each other because they’re building different things with different constraints. It’s as if a salsa dancer criticized ballet for not being enough like salsa. There is more than one way to dance!

From my own personal experience: at Salesforce, we build a client-rendered app, with its own marketplace of components, with strict backwards-compatibility guarantees, where the intended support is measured in years if not decades. Is this you? If not, then maybe you shouldn’t build your entire UI out of web components, with shadow DOM and the whole kit-n-kaboodle. (Or maybe you should! I can’t say!)

What I find exciting about the web is the sheer number of people doing so many wild and bizarre things with it. It has everything from games to art projects to enterprise SaaS apps, built with WebGL and Wasm and Service Workers and all sorts of zany things. Every new capability added to the web platform isn’t a limitation on your creativity – it’s an opportunity to express your creativity in ways that nobody imagined before.

Web components may not be the future for you – that’s great! I’m excited to see what you build, and I might steal some ideas for my own corner of the web.

Update: Be sure to read Lea Verou’s and Baldur Bjarnason’s excellent posts on the topic.

Improving rendering performance with CSS content-visibility

Recently I got an interesting performance bug on emoji-picker-element:

I’m on a fedi instance with 19k custom emojis […] and when I open the emoji picker […], the page freezes for like a full second at least and overall performance stutters for a while after that.

If you’re not familiar with Mastodon or the Fediverse, different servers can have their own custom emoji, similar to Slack, Discord, etc. Having 19k (really closer to 20k in this case) is highly unusual, but not unheard of.

So I booted up their repro, and holy moly, it was slow:

Screenshot of Chrome DevTools with an emoji picker showing high ongoing layout/paint costs and 40,000 DOM nodes

There were multiple things wrong here:

  • 20k custom emoji meant 40k elements, since each one used a <button> and an <img>.
  • No virtualization was used, so all these elements were just shoved into the DOM.

Now, to my credit, I was using <img loading="lazy">, so those 20k images were not all being downloaded at once. But no matter what, it’s going to be achingly slow to render 40k elements – Lighthouse recommends no more than 1,400!

My first thought, of course, was, “Who the heck has 20k custom emoji?” My second thought was, “*Sigh* I guess I’m going to need to do virtualization.”

I had studiously avoided virtualization in emoji-picker-element, namely because 1) it’s complex, 2) I didn’t think I needed it, and 3) it has implications for accessibility.

I’ve been down this road before: Pinafore is basically one big virtual list. I used the ARIA feed role, did all the calculations myself, and added an option to disable “infinite scroll,” since some people don’t like it. This is not my first rodeo! I was just grimacing at all the code I’d have to write, and wondering about the size impact on my “tiny” ~12kB emoji picker.

After a few days, though, the thought popped into my head: what about CSS content-visibility? I could see from the trace that a lot of time was being spent in layout and paint, and this might also help with the “stuttering.” It could be a much simpler solution than full-on virtualization.

If you’re not familiar, content-visibility is a new-ish CSS feature that allows you to “hide” certain parts of the DOM from the perspective of layout and paint. It largely doesn’t affect the accessibility tree (since the DOM nodes are still there), it doesn’t affect find-in-page (⌘+F/Ctrl+F), and it doesn’t require virtualization. All it needs is a size estimate of off-screen elements, so that the browser can reserve space there instead.

Luckily for me, I had a good atomic unit for sizing: the emoji categories. Custom emoji on the Fediverse tend to be divided into bite-sized categories: “blobs,” “cats,” etc.

Screenshot of emoji picker showing categories Blobs and Cats with different numbers of emoji in each but with eight columns in a grid for all

Custom emoji on mastodon.social.

For each category, I already knew the emoji size and the number of rows and columns, so calculating the expected size could be done with CSS custom properties:

.category {
  content-visibility: auto;
  contain-intrinsic-size:
    /* width */
    calc(var(--num-columns) * var(--total-emoji-size))
    /* height */
    calc(var(--num-rows) * var(--total-emoji-size));
}

These placeholders take up exactly as much space as the finished product, so nothing is going to jump around while scrolling.
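The custom properties themselves get set per category when it’s rendered. A rough sketch of what that can look like – the function and variable names here are made up, not the actual emoji-picker-element code:

// Set the custom properties that the CSS above reads. `categoryEl` is the
// .category container and `emojis` is that category's list of custom emoji;
// both names are illustrative.
function sizeCategory(categoryEl, emojis, numColumns) {
  const numRows = Math.ceil(emojis.length / numColumns);
  categoryEl.style.setProperty('--num-columns', numColumns);
  categoryEl.style.setProperty('--num-rows', numRows);
  // --total-emoji-size (the emoji size plus padding) can be defined once in
  // the stylesheet, since it's the same for every category.
}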

The next thing I did was write a Tachometer benchmark to track my progress. (I love Tachometer.) This helped validate that I was actually improving performance, and by how much.

My first stab was really easy to write, and the perf gains were there… They were just a little disappointing.

For the initial load, I got a roughly 15% improvement in Chrome and 5% in Firefox. (Safari only has content-visibility in Technology Preview, so I can’t test it in Tachometer.) This is nothing to sneeze at, but I knew a virtual list could do a lot better!

So I dug a bit deeper. The layout costs were nearly gone, but there were still other costs that I couldn’t explain. For instance, what’s with this big undifferentiated blob in the Chrome trace?

Screenshot of Chrome DevTools with large block of JavaScript time called "mystery time"

Whenever I feel like Chrome is “hiding” some perf information from me, I do one of two things: bust out chrome://tracing, or (more recently) enable the experimental “show all events” option in DevTools.

This gives you a bit more low-level information than a standard Chrome trace, but without needing to fiddle with a completely different UI. I find it’s a pretty good compromise between the Performance panel and chrome://tracing.

And in this case, I immediately saw something that made the gears turn in my head:

Screenshot of Chrome DevTools with previous mystery time annotated as ResourceFetcher::requestResource

What the heck is ResourceFetcher::requestResource? Well, even without searching the Chromium source code, I had a hunch – could it be all those <img>s? It couldn’t be, right…? I’m using <img loading="lazy">!

Well, I followed my gut and simply commented out the src from each <img>, and what do you know – all those mystery costs went away!

I tested in Firefox as well, and this was also a massive improvement. So this led me to believe that loading="lazy" was not the free lunch I assumed it to be.

Update: I filed a bug on Chromium for this issue. After more testing, it seems I was mistaken about Firefox – this looks like a Chromium-only issue.

At this point, I figured that if I was going to get rid of loading="lazy", I may as well go whole-hog and turn those 40k DOM elements into 20k. After all, if I don’t need an <img>, then I can use CSS to just set the background-image on an ::after pseudo-element on the <button>, cutting the time to create those elements in half.

.onscreen .custom-emoji::after {
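  /* base styles (not shown here) give this ::after pseudo-element its content and dimensions */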
  background-image: var(--custom-emoji-background);
}

From there, it was just a simple IntersectionObserver to add the onscreen class when the category scrolled into view, and I had a custom-made loading="lazy" that was much more performant. This time around, Tachometer reported a ~40% improvement in Chrome and ~35% improvement in Firefox. Now that’s more like it!
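Here’s roughly what that looks like – a sketch rather than the exact emoji-picker-element code, with illustrative selector names matching the CSS above:

// Add the `onscreen` class to each category as it scrolls into view, so the
// CSS rule above kicks in and that category's background images start loading.
const scrollContainer = document.querySelector('.emoji-scroll-area'); // illustrative selector
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      entry.target.classList.add('onscreen');
      observer.unobserve(entry.target); // once loaded, no need to keep watching
    }
  }
}, { root: scrollContainer, rootMargin: '50%' }); // start loading a bit before it's visible

for (const category of document.querySelectorAll('.category')) {
  observer.observe(category);
}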

Note: I could have used the contentvisibilityautostatechange event instead of IntersectionObserver, but I found cross-browser differences, and plus it would have penalized Safari by forcing it to download all the images eagerly. Once browser support improves, though, I’d definitely use it!
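For reference, the event-based alternative would look something like this – a sketch of the approach I didn’t ship, not actual code from the picker:

// contentvisibilityautostatechange fires on elements with content-visibility:
// auto whenever the browser starts or stops skipping their contents; `skipped`
// is false when the element becomes relevant (e.g. it's on screen).
for (const category of document.querySelectorAll('.category')) {
  category.addEventListener('contentvisibilityautostatechange', (event) => {
    if (!event.skipped) {
      event.target.classList.add('onscreen');
    }
  });
}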

I felt good about this solution and shipped it. All told, the benchmark clocked a ~45% improvement in both Chrome and Firefox, and the original repro went from ~3 seconds to ~1.3 seconds. The person who reported the bug even thanked me and said that the emoji picker was much more usable now.

Something still doesn’t sit right with me about this, though. Looking at the traces, I can see that rendering 20k DOM nodes is just never going to be as fast as a virtualized list. And if I wanted to support even bigger Fediverse instances with even more emoji, this solution would not scale.

I am impressed, though, with how much you get “for free” with content-visibility. The fact that I didn’t need to change my ARIA strategy at all, or worry about find-in-page, was a godsend. But the perfectionist in me is still irritated by the thought that, for maximum perf, a virtual list is the way to go.

Maybe eventually the web platform will get a real virtual list as a built-in primitive? There were some efforts at this a few years ago, but they seem to have stalled.

I look forward to that day, but for now, I’ll admit that content-visibility is a good rough-and-ready alternative to a virtual list. It’s simple to implement, gives a decent perf boost, and has essentially no accessibility footguns. Just don’t ask me to support 100k custom emoji!

The continuing tragedy of emoji on the web

Pop quiz: what emoji do you see below? [1]

🇲🇶

Depending on your browser and operating system, you might see:

Three emoji representations of Martinique - the new black-red-green flag, the old blue-and-white flag, and the letters MQ.

From left to right: Safari on iOS 17, Firefox 130 on Windows 11, and Chrome 128 on Windows 11.

This, frankly, is a mess. And it’s emblematic of how half-heartedly browsers and operating systems have worked to keep their emoji up to date.

What’s responsible for this sorry state? I gave an overview two years ago, and shockingly little has changed – in fact, it’s gotten a bit worse.

In short:

  • Firefox bundles their own emoji font (great!), but unfortunately, thanks to turmoil at Twitter/X, their bet on Twemoji has not shaken out too well, and they are two years behind the latest Unicode standard.
  • Windows 10 users (i.e. 64% of Windows as a whole) are stuck five years behind in emoji versions, and even Windows 11 is still not showing flag emoji, which Microsoft excludes for some mysterious reason. (Geopolitical skittishness? Disregard for sports fans?)
  • Safari has the same fragmentation problem, with many users stuck on old versions of iOS, and thus old emoji fonts. For example, some 3.75% of iOS users are still on iOS 15, with only 2022-era emoji. [2]

As a result, every website on the planet that cares about having a consistent emoji experience has to bundle their own font or spritesheet, wasting untold megabytes for something that’s been standardized like clockwork by the Unicode Consortium for 15 years.

My recommendation remains the same: browsers should bundle their own emoji font and ship it outside of OS updates. Firefox does this right; they just need to switch to an up-to-date font like this Twemoji fork. There is an issue on Chromium to add the same functionality. As for Safari, well… they’re not quite evergreen, and fragmentation is just a consequence of that. But shipping a font is not rocket science, so maybe WebKit or iOS could be convinced to ship it out-of-band.

In the meantime, web developers can use a COLR font or polyfill to get a reasonably consistent experience across browsers. It’s just sad to me that, with all the stunning advancements browsers have made in recent years, and with all the overwhelming popularity of emoji, the web still struggles at rendering them.

Footnotes

  1. I’m using Codepen for this because WordPress automatically replaces native emoji with <img>s, since of course browsers can’t be trusted to render a font properly. Although ironically, they render the old flag (on certain browsers anyway): 🇲🇶
  2. For posterity: using Wikimedia’s stats for August 12th through September 16th 2024: 1.2% mobile Safari 15 users / 32% mobile Safari users = 3.75%.