Archive for December, 2023

2023 book review

A stack of books including many mentioned in this post like 1Q84 and Pure Invention

Compared to previous years, my reading velocity has taken a bit of a nosedive. Blame videogames, maybe: I’ve put more hours into Civilization 6 than I care to admit, and I’m currently battling Moblins and Bokoblins in Zelda: Tears of the Kingdom.

I’ve also been trying to re-learn the guitar. I basically stopped playing for nearly a decade, but this year I was surprised to learn that it’s a lot like riding a bike: my fingers seem to know things that my brain thought I had forgotten. I’ve caught up on most of the songs I used to know, and I’m looking forward to learning more in 2024.

(The wonderful gametabs.net used to be my go-to source for great finger-picking-style videogame songs, but like a lot of relics of the old internet, its future seems to be in doubt. I may have to find something else.)

In any case! Here are the books:

Quick links

Fiction

Non-fiction

Fiction

A Wizard of Earthsea by Ursula K. Le Guin

One of those classics of fantasy literature that I had never gotten around to reading. I really enjoyed this one, especially as it gave me a new appreciation for Patrick Rothfuss’s The Name of the Wind, which seems to draw heavily from the themes of Earthsea – in particular, the idea that knowing the “true name” of something gives you power over it. I gave the second Earthsea book a shot, but haven’t gotten deep enough to get drawn in yet.

1Q84 by Haruki Murakami

I’ve always enjoyed Murakami’s dreamy, David Lynch-like magic realism. This one runs a little bit too long for my taste – it starts off strong and starts to drag near the end – but I still thoroughly enjoyed it.

Dare to Know by James Kennedy

This one was a bit of a sleeper – I was surprised to find it isn’t more popular. It’s a high-concept sci-fi / dystopian novel with a lot of fun mysteries and twists. I would be completely unsurprised if it gets turned into a Christopher Nolan movie in a few years.

Cloud Atlas by David Mitchell

This book has a big reputation, which I think is thoroughly earned. It’s best to read it without knowing anything about what it’s about, so that you can really experience the whole voyage the author is trying to take you on here.

All I’ll say is that if you like sci-fi and aren’t intimidated by weird or archaic language, then this book is for you.

Non-fiction

The Intelligence Illusion by Baldur Bjarnason

Like Out of the Software Crisis last year, this book had a big impact on me. It deepened my skepticism about the current wave of GenAI hype, although I do admit (like the author) that the technology still has some reasonable use cases.

Unfortunately I think a lot of people are jumping into the GenAI frenzy without reading sober analyses like these, so we’ll probably have to learn the hard way what the technology is good at and what it’s terrible at.

Pure Invention: How Japan Made the Modern World by Matt Alt

As a certified Japanophile nerd (I did admit I play videogame music, right?), this book was a fun read for me. It’s especially interesting to see Japan’s cultural exports (videogames, manga, etc.) from the perspective of their own home country. I admit I hadn’t thought much about how things like Gundam or Pokémon were perceived by fans back home, so this book gave me a better context for the artifacts that shaped my childhood.

Creative Selection: Inside Apple’s Design Process During the Golden Age of Steve Jobs by Ken Kocienda

Yes, there is some hero-worship of Steve Jobs here, but there is also just a really engrossing story of great engineers doing great work in the right place at the right time. I especially loved the bits about how the original iPhone soft keyboard was designed, and how WebKit was initially chosen as the browser engine for Safari.

Fifty Plants that Changed the Course of History by Bill Laws

I’ve always been one of those pedants who loves to point out that most staples of European cuisine (pizza in Italy, fish and chips in Britain) are really foreign imports, since things like tomatoes and potatoes are New World plants. So this book was perfect for me. It’s also a fun read since it’s full of great illustrations, and gives just the right amount of detail – only the barest overview of how the plants were discovered, how they were popularized, and how they’re used today.

Shadow DOM and the problem of encapsulation

Web components are kind of having a moment right now. And as part of that, shadow DOM is having a bit of a moment too. Or it would, except that much of the conversation seems to be about why you shouldn’t use shadow DOM.

For example, “HTML web components” are based on the idea that you should use most of the goodness of web components (custom elements, lifecycle hooks, etc.), while dropping shadow DOM like a bad habit. (Another name for this is “light DOM components.”)

This is a perfectly fine pattern for certain cases. But I also think some folks are confused about the tradeoffs with shadow DOM, because they don’t understand what shadow DOM is supposed to accomplish in the first place. In this post, I’d like to clear up some of the misconceptions by explaining what shadow DOM is supposed to do, while also weighing its success in actually achieving it.

What the heck is shadow DOM for

The main goal of shadow DOM is encapsulation. Encapsulation is a tricky concept to explain, because the benefits are not immediately obvious.

Let’s say you have a third-party component that you’ve decided to include on your website or webapp. Maybe you found it on npm, and it solved some use case very nicely. Let’s say it’s something simple, like a dropdown component.

Blue button that says click and has a downward-pointing chevron icon

You know what, though? You really don’t like that caret character – you’d rather have a 👇 emoji. And you’d really prefer rounded corners. And the theme color should be red instead of blue. So you hack together some CSS:

.dropdown {
  background: red;
  border-radius: 8px;
}
.dropdown .caret::before {
  content: '👇';
}

Red button that says click and has a downward-pointing index finger emoji icon

Great! You get the styling you want. Ship it.

Except that 6 months later, the component has an update. And it’s to fix a security vulnerability! Your boss is pressuring you to update the component as fast as possible, since otherwise the website won’t pass a security audit anymore. So you go to update, and…

Everything’s broken.

It turns out that the component changed their internal class name from dropdown to picklist. And they don’t use CSS content for the caret anymore. And they added a wrapper <div>, so the border-radius needs to be applied to something else now. Suddenly you’re in for a world of hurt, just to get the component back to the way it used to look.

Global control is great until it isn’t

CSS gives you an amazing superpower, which is that you can target any element on the page as long as you can think of the right selector. It’s incredibly easy to do this in DevTools today – a lot of people are trained to right-click, “Inspect Element,” and rummage around for any class or attribute to start targeting the element. And this works great in the short term, but it affects the long-term maintainability of the code, especially for components you don’t own.

This isn’t just a problem with CSS – JavaScript has this same flaw due to the DOM. Using document.querySelector (or equivalent APIs), you can traverse anywhere you want in the DOM, find an element, and apply some custom behavior to it – e.g. adding an event listener or changing its internal structure. I could tell the same story above using JavaScript rather than CSS.
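
For example, the JavaScript version of the same hack might look like this (same hypothetical .dropdown internals as the CSS above):

// Reach into the third-party dropdown's internals and change them directly.
// This works today, and breaks just as badly when the internals change.
const caret = document.querySelector('.dropdown .caret')
caret.textContent = '👇'
caret.closest('.dropdown').addEventListener('click', () => {
  console.log('dropdown clicked')
})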

This openness can cause headaches for component authors as well as component consumers. In a system where the onus is on the component author to ship new versions (e.g. a monorepo, a platform, or even just a large codebase), component authors can effectively get frozen in time, unable to ship any internal refactors for fear of breaking their downstream consumers.

Shadow DOM attempts to solve these problems by providing encapsulation. If the third-party dropdown component were using shadow DOM, then you wouldn’t be able to target arbitrary content inside of it (except with elaborate workarounds that I don’t want to get into).
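
To make that concrete: if the hypothetical <snazzy-dropdown> below rendered its button inside a shadow root, the earlier hacks would simply stop matching anything:

// Page-level selectors (CSS or querySelector) stop at the shadow boundary.
const dropdown = document.querySelector('snazzy-dropdown')

console.log(document.querySelector('.dropdown .caret')) // null - can't reach inside

// The internals still exist, but only behind the component's shadow root
// ("elaborate workaround" territory):
console.log(dropdown.shadowRoot?.querySelector('.caret'))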

Of course, by closing off access to global styling and DOM traversal, shadow DOM also greatly limits a component’s customizability. Consumers can’t just decide they want a background to be red, or a border to be rounded – the component author has to provide an explicit styling API, using tools like CSS custom properties or parts. E.g.:

snazzy-dropdown {
  --dropdown-bg: red;
}

snazzy-dropdown::part(caret)::before {
  content: '👇';
}
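
For those rules to do anything, the component author has to expose the hooks from inside the shadow root. Here’s a hedged sketch of what that might look like (the --dropdown-bg property and the caret part are just illustrative names):

class SnazzyDropdown extends HTMLElement {
  constructor() {
    super()
    // Expose exactly two hooks: a custom property (custom properties inherit
    // through the shadow boundary) and a named part. Everything else stays private.
    this.attachShadow({ mode: 'open' }).innerHTML = `
      <style>
        button { background: var(--dropdown-bg, blue); }
      </style>
      <button>Click <span part="caret"></span></button>
    `
  }
}
customElements.define('snazzy-dropdown', SnazzyDropdown)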

By exposing an explicit styling API, the risk of breakage across component upgrades is heavily reduced. The component author is effectively declaring an API surface that they intend to support, which limits what they need to keep stable over time. (This API can still break, as with a major version bump, but that’s another story.)

Tradeoffs

When people complain about shadow DOM, they seem to mostly be complaining about style encapsulation. They want to reach in and add a rounded corner on some component, and roll the dice that the component doesn’t change in the future. Depending on what kind of website you’re building, this can be a perfectly acceptable tradeoff. For example:

  • A portfolio site
  • A news article with interactive charts
  • A marketing site for a Super Bowl campaign
  • A landing page that will be rewritten in 2 years anyway

In all of these cases, long-term maintenance is not really a big concern. The page either has a limited shelf life, or it’s just not important to keep its dependencies up to date. So if the dropdown component breaks in a year or two, nobody cares.

Of course, there is also the opposite world where long-term maintenance matters a lot:

  • An interactive productivity app
  • A design system
  • A platform with its own app store for UI components
  • An online multiplayer game

I could go on, but the point is: the second group cares a lot more about long-term maintainability than the first group. If you’ve spent your entire career working on the first group, then you may indeed find shadow DOM to be baffling. You can’t possibly understand why you should be prevented from globally styling whatever you want.

Conversely, if you’ve spent your entire career in the second group, then you may be equally baffled by people who want global access to everything. (“Are they trying to shoot themselves in the foot?”) This is why I think people are often talking past each other about this stuff.

But does it work

So now that we’ve established the problem shadow DOM is trying to solve, there’s the inevitable question: does it actually solve it?

This is an important question, because I think it’s the source of the other major tension with shadow DOM. Even people who understand the problem are not in agreement that shadow DOM actually solves it.

If you want to get a good sense of people’s frustrations with shadow DOM, there are two massive GitHub threads you can check out:

There are a lot of potential solutions being tossed around in those threads (including by me), but I’m not really convinced that any one of them is the silver bullet that is going to solve people’s frustrations with shadow DOM. And the reason is that the core problem here is a coordination problem, not a technical problem.

For example, take “open-stylable shadow roots.” The idea is that a shadow root can inherit the styles from its parent context (exactly like light DOM). But then of course, we get into the coordination problem:

  • Will every web component on npm need to enable open-stylable shadow roots?
  • Or will page authors need a global mechanism to force every component into this mode?
  • What if a component author doesn’t want to be opted-in? What if they prefer the lower maintenance costs of a small API surface?

There’s no right answer here. And that’s because there’s an inherent conflict between the needs of the component author and the page author. The component author wants minimal maintenance costs and to avoid breaking their downstream consumers with every update, and the page author wants to style every component on the page to pixel-perfect precision, while also never being broken.

Stated that way, it sounds like an unsolvable problem. In practice, I think the problem gets solved by favoring one group over the other, which can make some sense depending on the context (largely based on whether your website is in group one or group two above).

A potential solution?

If there is one solution I find promising, it’s articulated by my colleague Caridy Patiño:

Build building blocks that encapsulate logic and UI elements that are “fully” customizable by using existing mechanisms (CSS properties, parts, slots, etc.). Everything must be customizable from outside the shadow.

If a building block is using another building block in its shadow, it must do it as part of the default content of a well-defined slot.

Essentially, what Caridy is saying is that instead of providing a dropdown component to be used like this:

<snazzy-dropdown></snazzy-dropdown>

… you instead provide one like this:

<snazzy-dropdown>
  <snazzy-trigger>
    <button>Click ▼</button>
  </snazzy-trigger>
  <snazzy-listbox>
    <snazzy-option>One</snazzy-option>
    <snazzy-option>Two</snazzy-option>
    <snazzy-option>Three</snazzy-option>
  </snazzy-listbox>
</snazzy-dropdown>

In other words, the component should expose its “guts” externally (using <slot>s in this example) so that everything is stylable. This way, anything the consumer may want to customize is fully exposed to light DOM.

This is not a totally new idea. In fact, outside of the world of web components, plenty of component systems have run into similar problems and arrived at similar solutions. For example, so-called “headless” component systems (such as Radix UI, Headless UI, and Tanstack) have embraced this kind of design.

For comparison, here is an (abridged) example of the dropdown menu from the Radix docs:

<DropdownMenu.Root>
  <DropdownMenu.Trigger>
    <Button variant="soft">
      Options
      <CaretDownIcon />
    </Button>
  </DropdownMenu.Trigger>
  <DropdownMenu.Content>
    <DropdownMenu.Item shortcut="⌘ E">Edit</DropdownMenu.Item>
    <DropdownMenu.Item shortcut="⌘ D">Duplicate</DropdownMenu.Item>
    {/* ... */}
  </DropdownMenu.Content>
</DropdownMenu.Root>

This is pretty similar to the web component sketch above – the “guts” of the dropdown are on display for all to see, and anything in the UI is fully customizable.

To me, though, these solutions are clearly taking the burden of complexity and shifting it from the component author to the component consumer. Rather than starting with the simplest case and providing a bare-bones default, the component author is instead starting with the complex case, forcing the consumer to (likely) copy-paste a lot of boilerplate into their codebase before they can start tweaking.

Now, maybe this is the right solution! And maybe the long-term maintenance costs are worth it! But I think the tradeoff should still be acknowledged.

As I understand it, though, these kinds of “headless” solutions are still a bit novel, so we haven’t gotten a lot of real-world data to prove the long-term benefits. I have no doubt, though, that a lot of component authors see this approach as the necessary remedy to the problem of runaway configurability – i.e. component consumers ask for every little thing to be configurable, all those configuration options get shoved into one top-level API, and the overall experience starts to look like recursive Swiss Army Knives. (Tanner Linsley gives a great talk about this, reflecting on 5 years of building React Table.)

Personally, I’m intrigued by this technique, but I’m not fully convinced that exposing the “guts” of a component really reduces the overall maintenance cost. It’s kind of like, instead of selling a car with a predefined set of customizations (color, window tint, automatic vs manual, etc.), you’re selling a loose set of parts that the customer can mix-and-match into whatever kind of vehicle they want. Rather than a car off the assembly line, it reminds me of a jerry-rigged contraption from Minecraft or Zelda.

Screenshot from Zelda Tears of the Kingdom showing Link riding a four-wheeled board with a ball and a fan glued to it

In Tears of the Kingdom, you can glue together just about anything, and it will kind of work.

I haven’t worked on such a component system, but I’d worry that you’d get bugs along the lines of, “Well, when I put the slider on the left it works, but when I put it on the right, the scroll position gets messed up.” There is so much potential customizability, that I’m not sure how you could even write tests to cover all the possible configurations. Although maybe that’s the point – there’s effectively no UI, so if the UI is messed up, then it’s the component consumer’s job to fix it.

Conclusion

I don’t have all the answers. At this point, I just want to make sure we’re asking the right questions.

To me, any proposed solution to the current problems with shadow DOM should be prefaced with:

  • What kind of website or webapp is the intended context?
  • Who stands to benefit from this change – the component author or page author?
  • Who needs to shift their behavior to make the whole thing work?

I’m also not convinced that any of this stuff is ripe enough for the standards discussion to begin. There are so many options that can be explored in userland right now (e.g. the “expose the guts” proposal, or a polyfill for open-stylable shadow roots), that it’s premature to start asking standards bodies to standardize anything.
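
For instance, a very rough userland approximation of open-stylable shadow roots – my own sketch, not the actual proposal, and it glosses over timing, cross-origin stylesheets, and keeping things in sync – might look something like this:

// Copy the page's current styles into a shadow root, approximating
// "open-stylable" behavior for a single component.
function adoptPageStyles(shadowRoot) {
  const copies = []
  for (const sheet of document.styleSheets) {
    try {
      const cssText = [...sheet.cssRules].map((rule) => rule.cssText).join('\n')
      const copy = new CSSStyleSheet()
      copy.replaceSync(cssText)
      copies.push(copy)
    } catch {
      // cross-origin stylesheets throw on cssRules access - skip them
    }
  }
  shadowRoot.adoptedStyleSheets = [...shadowRoot.adoptedStyleSheets, ...copies]
}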

I also think that the inherent conflict between the needs of component authors and component consumers has not really been acknowledged enough in the standards discussions. And the W3C’s priority of constituencies doesn’t help us much here:

User needs come before the needs of web page authors, which come before the needs of user agent implementors, which come before the needs of specification writers, which come before theoretical purity.

In the above formulation, there’s no distinction between component authors and component consumers – they are both just “web page authors.” I suppose conceptually, if we imagine the whole web platform as a “stack,” then we would place the needs of component consumers over component authors. But even that gets muddy sometimes, since component authors and component consumers can work on the same team or even be the same person.

Overall, what I would love to see is a thorough synopsis of the various groups involved in the web component ecosystem, how the existing solutions have worked in practice, what’s been tried and what hasn’t, and what needs to change to move forward. (This blog post is not it; this is just my feeble groping for a vocabulary to even start talking about the problem.)

In my mind, we are still chasing the holy grail of true component reusability. I often think back to this eloquent talk by Jan Miksovsky, where he explains how much has been standardized in the world of building construction (e.g. the size of windows and door frames), whereas we web developers are still stuck rebuilding the same thing over and over again. I don’t know if we’ll ever reach true component reusability (or if building construction is really as rosy as he describes – I can barely wield a hammer), but I do know that I still find the vision inspiring.

Rebuilding emoji-picker-element on a custom framework

In my last post, we went on a guided tour of building a JavaScript framework from scratch. This wasn’t just an intellectual exercise, though – I actually had a reason for wanting to build my own framework.

For a few years now, I’ve been maintaining emoji-picker-element, which is designed as a fast, simple, and lightweight web component that you can drop onto any page that needs an emoji picker.

Screenshot of an emoji picker showing a search box and a grid of emoji

Most of the maintenance work has been about simply keeping pace with the regular updates to the Unicode standard to add new emoji versions. (Shout-out to Emojibase for providing a stable foundation to build on!) But I’ve also worked to keep up with the latest Svelte versions, since emoji-picker-element is based on Svelte.

The project was originally written in Svelte v3, and the v4 upgrade was nothing special. The v5 upgrade was only slightly more involved, which is astounding given that the framework was essentially rewritten from scratch. (How Rich and the team managed to pull this off boggles my mind.)

I should mention at this point that I think Svelte is a great framework, and a pleasure to work with. It’s probably my favorite JavaScript framework (other than the one I work on!). That said, a few things bugged me about the Svelte v5 upgrade:

  • It grew emoji-picker-element’s bundle size by 7.1kB minified (it was originally a bit more, but Dominic Gannaway graciously made improvements to the tree-shaking).
  • It dropped support for older browsers due to syntax changes, in particular Safari 12 (which is 0.25-0.5% of browsers depending on who you ask).

Now, neither of these things really ought to be a dealbreaker. 7.1kB is not a huge amount for the average webapp, and an emoji picker should probably be lazy-loaded most of the time anyway. Also, Safari 12 might not be worth worrying about (and if it is, it won’t be in a couple years).

I also don’t think there’s anything wrong with building a standalone web component on top of a JavaScript framework – I’ve said so in the past. There are lots of fiddly bits that are hard to get right when you’re building a web component, and 99 times out of 100, you’re much better off using something like Svelte, or Lit, or Preact, or petite-vue, than trying to wing it yourself in Vanilla JS and building a half-baked framework in the process.

That said… I enjoy building a half-baked framework. And I have a bit of a competitive streak that makes me want to trim the bundle size as much as possible. So I decided to take this as an opportunity to rebuild emoji-picker-element on top of my own custom framework.

The end result is more or less what you saw in the previous post: a bit of reactivity, a dash of tagged template literals, and poof! A new framework is born.

This new framework is honestly only slightly more complex than what I sketched out in that post – I ended up only needing 85 lines of code for the reactivity engine and 233 for the templating system (as measured by cloc).

Of course, to get this minuscule size, I had to take some shortcuts. If this were an actual framework I was releasing to the world, I would need to handle a long tail of edge cases, perf hazards, and gnarly tradeoffs. But since this framework only needs to support one component, I can afford to cut some corners.

So does this tiny framework actually cut the mustard? Here are the results:

  • The bundle size is 6.1kB smaller than the current implementation (and ~13.2kB smaller than the Svelte 5 version).
  • Safari 12 is still supported (without needing code transforms).
  • There is no regression in runtime performance (as measured by Tachometer).
  • Initial memory usage is reduced by 140kB.

Here are the stats:

Metric                 Svelte v4   Svelte v5        Custom
Bundle size (min)      42.6kB      49.7kB           36.5kB
  ↳ Delta              –           +7.1kB (+17%)    -6.1kB (-14%)
Bundle size (min+gz)   14.9kB      18.8kB           12.6kB
  ↳ Delta              –           +3.9kB (+26%)    -2.3kB (-15%)
Initial memory usage   1.23MB      1.5MB            1.09MB
  ↳ Delta              –           +270kB (+22%)    -140kB (-11%)

Note: I’m not trying to say that Svelte 5 is bad, or that I’m smarter than the Svelte developers. As mentioned above, the only way I can get these fantastic numbers is by seriously cutting a lot of corners. And I actually really like the new features in Svelte v5 (snippets in particular are amazing, and the benchmark performance is truly impressive). I also can’t fault Svelte for focusing on their most important consumers, who are probably building entire apps out of Svelte components, and don’t care much about a higher baseline bundle size.

So was it worth it? I dunno. Maybe I will get a flood of bug reports after I ship this, and I will come crawling back to Svelte. Or maybe I will find that it’s too hard to add new features without the flexibility of a full framework. But I doubt it. I enjoyed building my own framework, and so I think I’ll keep it around just for the fun of it.

Side projects for me are always about three things: 1) learning, 2) sharing something with the world, and 3) having fun while doing so. emoji-picker-element ticks all three boxes for me, so I’m going to stick with the current design for the time being.

Let’s learn how modern JavaScript frameworks work by building one

Hand-drawn looking JavaScript logo saying DIY JS

In my day job, I work on a JavaScript framework (LWC). And although I’ve been working on it for almost three years, I still feel like a dilettante. When I read about what’s going on in the larger framework world, I often feel overwhelmed by all the things I don’t know.

One of the best ways to learn how something works, though, is to build it yourself. And plus, we gotta keep those “days since last JavaScript framework” memes going. So let’s write our own modern JavaScript framework!

What is a “modern JavaScript framework”?

React is a great framework, and I’m not here to dunk on it. But for the purposes of this post, “modern JavaScript framework” means “a framework from the post-React era” – i.e. Lit, Solid, Svelte, Vue, etc.

React has dominated the frontend landscape for so long that every newer framework has grown up in its shadow. These frameworks were all heavily inspired by React, but they’ve evolved away from it in surprisingly similar ways. And although React itself has continued innovating, I find that the post-React frameworks are more similar to each other than to React nowadays.

To keep things simple, I’m also going to avoid talking about server-first frameworks like Astro, Marko, and Qwik. These frameworks are excellent in their own way, but they come from a slightly different intellectual tradition compared to the client-focused frameworks. So for this post, let’s only talk about client-side rendering.

What sets modern frameworks apart?

From my perspective, the post-React frameworks have all converged on the same foundational ideas:

  1. Using reactivity (e.g. signals) for DOM updates.
  2. Using cloned templates for DOM rendering.
  3. Using modern web APIs like <template> and Proxy, which make all of the above easier.

Now to be clear, these frameworks differ a lot at the micro level, and in how they handle things like web components, compilation, and user-facing APIs. Not all frameworks even use Proxies. But broadly speaking, most framework authors seem to agree on the above ideas, or they’re moving in that direction.

So for our own framework, let’s try to do the bare minimum to implement these ideas, starting with reactivity.

Reactivity

It’s often said that “React is not reactive”. What this means is that React has a pull-based rather than a push-based model. To grossly oversimplify things: in the worst case, React assumes that your entire virtual DOM tree needs to be rebuilt from scratch, and the only way to prevent these updates is to implement React.memo (or in the old days, shouldComponentUpdate).

Using a virtual DOM mitigates some of the cost of the “blow everything away and start from scratch” strategy, but it doesn’t fully solve it. And asking developers to write the correct memo code is a losing battle. (See React Forget for an ongoing attempt to solve this.)

Instead, modern frameworks use a push-based reactive model. In this model, individual parts of the component tree subscribe to state updates and only update the DOM when the relevant state changes. This prioritizes a “performant by default” design in exchange for some upfront bookkeeping cost (especially in terms of memory) to keep track of which parts of the state are tied to which parts of the UI.
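
To give a concrete flavor of the push-based model, here’s roughly what it looks like with a standalone signals library – I’m using @preact/signals-core as the example here, but the exact API varies from framework to framework:

import { signal, computed, effect } from '@preact/signals-core'

const count = signal(0)
const double = computed(() => count.value * 2)

// The effect subscribes to `double` (and, transitively, `count`) just by
// reading it. It re-runs only when those values change - nothing else does.
effect(() => {
  console.log(`double is ${double.value}`)
})

count.value = 5 // the effect re-runs and logs "double is 10"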

Note that this technique is not necessarily incompatible with the virtual DOM approach: tools like Preact Signals and Million show that you can have a hybrid system. This is useful if your goal is to keep your existing virtual DOM framework (e.g. React) but to selectively apply the push-based model for more performance-sensitive scenarios.

For this post, I’m not going to rehash the details of signals themselves, or subtler topics like fine-grained reactivity, but I am going to assume that we’ll use a reactive system.

Note: there are lots of nuances when talking about what qualifies as “reactive.” My goal here is to contrast React with the post-React frameworks, especially Solid, Svelte v5 in “runes” mode, and Vue Vapor.

Cloning DOM trees

For a long time, the collective wisdom in JavaScript frameworks was that the fastest way to render the DOM is to create and mount each DOM node individually. In other words, you use APIs like createElement, setAttribute, and textContent to build the DOM piece-by-piece:

const div = document.createElement('div')
div.setAttribute('class', 'blue')
div.textContent = 'Blue!'

One alternative is to just shove a big ol’ HTML string into innerHTML and let the browser parse it for you:

const container = document.createElement('div')
container.innerHTML = `
  <div class="blue">Blue!</div>
`

This naïve approach has a big downside: if there is any dynamic content in your HTML (for instance, red instead of blue), then you would need to parse HTML strings over and over again. Plus, you are blowing away the DOM with every update, which would reset state such as the value of <input>s.

Note: using innerHTML also has security implications. But for the purposes of this post, let’s assume that the HTML content is trusted. 1

At some point, though, folks figured out that parsing the HTML once and then calling cloneNode(true) on the whole thing is pretty danged fast:

const template = document.createElement('template')
template.innerHTML = `
  <div class="blue">Blue!</div>
`
template.content.cloneNode(true) // this is fast!

Here I’m using a <template> tag, which has the advantage of creating “inert” DOM. In other words, things like <img> or <video autoplay> don’t automatically start downloading anything.

How fast is this compared to manual DOM APIs? To demonstrate, here’s a small benchmark. Tachometer reports that the cloning technique is about 50% faster in Chrome, 15% faster in Firefox, and 10% faster in Safari. (This will vary based on DOM size and number of iterations, but you get the gist.)

What’s interesting is that <template> is a new-ish browser API, not available in IE11, and originally designed for web components. Somewhat ironically, this technique is now used in a variety of JavaScript frameworks, regardless of whether they use web components or not.

Note: for reference, here is the use of cloneNode on <template>s in Solid, Vue Vapor, and Svelte v5.

There is one major challenge with this technique, which is how to efficiently update dynamic content without blowing away DOM state. We’ll cover this later when we build our toy framework.

Modern JavaScript APIs

We’ve already encountered one new API that helps a lot, which is <template>. Another one that’s steadily gaining traction is Proxy, which can make building a reactivity system much simpler.

When we build our toy example, we’ll also use tagged template literals to create an API like this:

const dom = html`
  <div>Hello ${ name }!</div>
`

Not all frameworks use this tool, but notable ones include Lit, HyperHTML, and ArrowJS. Tagged template literals can make it much simpler to build ergonomic HTML templating APIs without needing a compiler.

Step 1: building reactivity

Reactivity is the foundation upon which we'll build the rest of the framework. Reactivity will define how state is managed, and how the DOM updates when state changes.

Let's start with some "dream code" to illustrate what we want:

const state = {}

state.a = 1
state.b = 2

createEffect(() => {
  state.sum = state.a + state.b
})

Basically, we want a “magic object” called state, with two props: a and b. And whenever those props change, we want to set sum to be the sum of the two.

Assuming we don’t know the props in advance (or have a compiler to determine them), a plain object will not suffice for this. So let’s use a Proxy, which can react whenever a new value is set:

const state = new Proxy({}, {
  get(obj, prop) {
    onGet(prop)
    return obj[prop]
  },
  set(obj, prop, value) {
    obj[prop] = value
    onSet(prop, value)
    return true
  }
})

Right now, our Proxy doesn’t do anything interesting, except give us some onGet and onSet hooks. So let’s make it flush updates after a microtask:

let queued = false

function onSet(prop, value) {
  if (!queued) {
    queued = true
    queueMicrotask(() => {
      queued = false
      flush()
    })
  }
}

Note: if you’re not familiar with queueMicrotask, it’s a newer DOM API that’s basically the same as Promise.resolve().then(...), but with less typing.

Why flush updates? Mostly because we don’t want to run too many computations. If we update whenever both a and b change, then we’ll uselessly compute the sum twice. By coalescing the flush into a single microtask, we can be much more efficient.

Next, let’s make flush update the sum:

function flush() {
  state.sum = state.a + state.b
}

This is great, but it’s not yet our “dream code.” We’ll need to implement createEffect so that the sum is computed only when a and b change (and not when something else changes!).

To do this, let’s use an object to keep track of which effects need to be run for which props:

const propsToEffects = {}

Next comes the crucial part! We need to make sure that our effects can subscribe to the right props. To do so, we’ll run the effect, note any get calls it makes, and create a mapping between the prop and the effect.

To break it down, remember our “dream code” is:

createEffect(() => {
  state.sum = state.a + state.b
})

When this function runs, it calls two getters: state.a and state.b. These getters should trigger the reactive system to notice that the function relies on the two props.

To make this happen, we’ll start with a simple global to keep track of what the “current” effect is:

let currentEffect

Then, the createEffect function will set this global before calling the function:

function createEffect(effect) {
  currentEffect = effect
  effect()
  currentEffect = undefined
}

The important thing here is that the effect is immediately invoked, with the global currentEffect being set in advance. This is how we can track whatever getters it might be calling.

Now, we can implement the onGet in our Proxy, which will set up the mapping between the global currentEffect and the property:

function onGet(prop) {
  if (!currentEffect) return // ignore reads that happen outside of an effect
  const effects = propsToEffects[prop] ??
      (propsToEffects[prop] = [])
  effects.push(currentEffect)
}

After this runs once, propsToEffects should look like this:

{
  "a": [theEffect],
  "b": [theEffect]
}

…where theEffect is the “sum” function we want to run.

Next, our onSet should add any effects that need to be run to a dirtyEffects array:

const dirtyEffects = []

function onSet(prop, value) {
  if (propsToEffects[prop]) {
    dirtyEffects.push(...propsToEffects[prop])
    // ...
  }
}

At this point, we have all the pieces in place for flush to call all the dirtyEffects:

function flush() {
  while (dirtyEffects.length) {
    dirtyEffects.shift()()
  }
}

Putting it all together, we now have a fully functional reactivity system! You can play around with it yourself and try setting state.a and state.b in the DevTools console – the state.sum will update whenever either one changes.
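
As a quick sanity check (assuming the snippets above are pasted into one file), here’s the “dream code” plus a second effect that logs the result:

state.a = 1
state.b = 2

createEffect(() => {
  state.sum = state.a + state.b
})

createEffect(() => {
  // runs once immediately (logging "sum is 3"), then re-runs whenever `sum` changes
  console.log(`sum is ${state.sum}`)
})

state.a = 10 // after the microtask flush, logs "sum is 12"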

Now, there are plenty of advanced cases that we’re not covering here:

  1. Using try/catch in case an effect throws an error
  2. Avoiding running the same effect twice
  3. Preventing infinite cycles
  4. Subscribing effects to new props on subsequent runs (e.g. if certain getters are only called in an if block)

However, this is more than enough for our toy example. Let’s move on to DOM rendering.

Step 2: DOM rendering

We now have a functional reactivity system, but it’s essentially “headless.” It can track changes and compute effects, but that’s about it.

At some point, though, our JavaScript framework needs to actually render some DOM to the screen. (That’s kind of the whole point.)

For this section, let’s forget about reactivity for a moment and imagine we’re just trying to build a function that can 1) build a DOM tree, and 2) update it efficiently.

Once again, let’s start off with some dream code:

function render(state) {
  return html`
    <div class="${state.color}">${state.text}</div>
  `
}

As I mentioned, I’m using tagged template literals, à la Lit, because I found them to be a nice way to write HTML templates without needing a compiler. (We’ll see in a moment why we might actually want a compiler instead.)

We’re re-using our state object from before, this time with a color and text property. Maybe the state is something like:

state.color = 'blue'
state.text = 'Blue!'

When we pass this state into render, it should return the DOM tree with the state applied:

<div class="blue">Blue!</div>

Before we go any further, though, we need a quick primer on tagged template literals. Our html tag is just a function that receives the tokens (an array of the static HTML strings) followed by the expressions (the evaluated dynamic expressions) as rest arguments:

function html(tokens, ...expressions) {
}

In this case, the tokens are (whitespace removed):

[
  "<div class=\"",
  "\">",
  "</div>"
]

And the expressions are:

[
  "blue",
  "Blue!"
]

The tokens array will always be exactly 1 longer than the expressions array, so we can trivially zip them up together:

const allTokens = tokens
    .map((token, i) => (expressions[i - 1] ?? '') + token)

This will give us an array of strings:

[
  "<div class=\"",
  "blue\">",
  "Blue!</div>"
]

We can join these strings together to make our HTML:

const htmlString = allTokens.join('')

And then we can use innerHTML to parse it into a <template>:

function parseTemplate(htmlString) {
  const template = document.createElement('template')
  template.innerHTML = htmlString
  return template
}

This template contains our inert DOM (technically a DocumentFragment), which we can clone at will:

const cloned = template.content.cloneNode(true)

Of course, parsing the full HTML whenever the html function is called would not be great for performance. Luckily, tagged template literals have a built-in feature that will help out a lot here.

For every unique usage of a tagged template literal, the tokens array is always the same whenever the function is called – in fact, it’s the exact same object!

For example, consider this case:

function sayHello(name) {
  return html`<div>Hello ${name}</div>`
}

Whenever sayHello is called, the tokens array will always be identical:

[
  "<div>Hello ",
  "</div>"
]

The only time tokens will be different is for completely different locations of the tagged template:

html`<div></div>`
html`<span></span>` // Different from above

We can use this to our advantage by using a WeakMap to keep a mapping of the tokens array to the resulting template:

const tokensToTemplate = new WeakMap()

function html(tokens, ...expressions) {
  let template = tokensToTemplate.get(tokens)
  if (!template) {
    // ...
    template = parseTemplate(htmlString)
    tokensToTemplate.set(tokens, template)
  }
  return template
}

This is kind of a mind-blowing concept, but the uniqueness of the tokens array essentially means that we can ensure that each call to html`...` only parses the HTML once.
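
You can see this for yourself with a tag function that does nothing but record its tokens argument:

const seen = []
function tag(tokens) {
  seen.push(tokens)
  return tokens
}

function greet(name) {
  return tag`<div>Hello ${name}</div>`
}

greet('Alice')
greet('Bob')
console.log(seen[0] === seen[1]) // true - the exact same array object both times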

Next, we just need a way to update the cloned DOM node with the expressions array (which is likely to be different every time, unlike tokens).

To keep things simple, let’s just replace the expressions array with a placeholder for each index:

const stubs = expressions.map((_, i) => `__stub-${i}__`)

If we zip this up like before, it will create this HTML:

<div class="__stub-0__">
  __stub-1__
</div>

We can write a simple string replacement function to replace the stubs:

function replaceStubs (string) {
  return string.replaceAll(/__stub-(\d+)__/g, (_, i) => (
    expressions[i]
  ))
}

And now whenever the html function is called, we can clone the template and update the placeholders:

const element = cloned.firstElementChild
for (const { name, value } of element.attributes) {
  element.setAttribute(name, replaceStubs(value))
}
element.textContent = replaceStubs(element.textContent)

Note: we are using firstElementChild to grab the first top-level element in the template. For our toy framework, we’re assuming there’s only one.

Now, this is still not terribly efficient – notably, we are updating textContent and attributes that don’t necessarily need to be updated. But for our toy framework, this is good enough.
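
For reference, here’s one way the pieces above can be assembled into a single html function – this is my own assembly of the snippets in this section (relying on parseTemplate from earlier), not a drop-in implementation:

const tokensToTemplate = new WeakMap()

function html(tokens, ...expressions) {
  let template = tokensToTemplate.get(tokens)
  if (!template) {
    // parse once per unique tagged template, with stubs standing in for expressions
    const stubs = expressions.map((_, i) => `__stub-${i}__`)
    const htmlString = tokens
      .map((token, i) => (stubs[i - 1] ?? '') + token)
      .join('')
    template = parseTemplate(htmlString)
    tokensToTemplate.set(tokens, template)
  }

  const replaceStubs = (string) =>
    string.replaceAll(/__stub-(\d+)__/g, (_, i) => expressions[i])

  // clone the cached template, then fill in the dynamic "holes"
  const element = template.content.cloneNode(true).firstElementChild
  for (const { name, value } of element.attributes) {
    element.setAttribute(name, replaceStubs(value))
  }
  element.textContent = replaceStubs(element.textContent)
  return element
}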

We can test it out by rendering with different state:

document.body.appendChild(render({ color: 'blue', text: 'Blue!' }))
document.body.appendChild(render({ color: 'red', text: 'Red!' }))

This works!

Step 3: combining reactivity and DOM rendering

Since we already have a createEffect from the reactivity system above, we can now combine the two to update the DOM based on the state:

const container = document.getElementById('container')

createEffect(() => {
  const dom = render(state)
  if (container.firstElementChild) {
    container.firstElementChild.replaceWith(dom)
  } else {
    container.appendChild(dom)
  }
})

This actually works! We can combine this with the “sum” example from the reactivity section by merely creating another effect to set the text:

createEffect(() => {
  state.text = `Sum is: ${state.sum}`
})

This renders “Sum is: 3”:

You can play around with this toy example. If you set state.a = 5, then the text will automatically update to say “Sum is: 7.”

Next steps

There are lots of improvements we could make to this system, especially the DOM rendering bit.

Most notably, we are missing a way to update content for elements inside a deep DOM tree, e.g.:

<div class="${color}">
  <span>${text}</span>
</div>

For this, we would need a way to uniquely identify every element inside of the template. There are lots of ways to do this:

  1. Lit, when parsing HTML, uses a system of regexes and character matching to determine whether a placeholder is within an attribute or text content, plus the index of the target element (in depth-first TreeWalker order).
  2. Frameworks like Svelte and Solid have the luxury of parsing the entire HTML template during compilation, which provides the same information. They also generate code that calls firstChild and nextSibling to traverse the DOM to find the element to update.

Note: traversing with firstChild and nextSibling is similar to the TreeWalker approach, but more efficient than element.children. This is because browsers use linked lists under the hood to represent the DOM.
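
To give a flavor of the client-side approach, here’s a rough sketch of numbering every element of a cloned template in depth-first order, so that something like the elementIndex in the mapping below has a list to index into:

// Collect all elements under a cloned template's content in depth-first order.
// Bindings can then refer to elements by index instead of by selector.
function collectElements(fragment) {
  const walker = document.createTreeWalker(fragment, NodeFilter.SHOW_ELEMENT)
  const elements = []
  let node
  while ((node = walker.nextNode())) {
    elements.push(node)
  }
  return elements
}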

Whether we decide to do Lit-style client-side parsing or Svelte/Solid-style compile-time parsing, what we want is some kind of mapping like this:

[
  {
    elementIndex: 0, // <div> above
    attributeName: 'class',
    stubIndex: 0 // index in expressions array
  },
  {
    elementIndex: 1, // <span> above
    textContent: true,
    stubIndex: 1 // index in expressions array
  }
]

These bindings would tell us exactly which elements need to be updated, which attribute (or textContent) needs to be set, and where to find the expression to replace the stub.

The next step would be to avoid cloning the template every time, and to just directly update the DOM based on the expressions. In other words, we not only want to parse once – we want to only clone and set up the bindings once. This would reduce each subsequent update to the bare minimum of setAttribute and textContent calls.
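
Sketched out with the hypothetical binding shape above, each update would then boil down to a handful of targeted calls:

// Apply the current expressions to an already-cloned DOM tree. No re-parsing,
// no re-cloning - just setAttribute/textContent on the elements the bindings
// point at (e.g. the depth-first element list collected earlier).
function applyBindings(elements, bindings, expressions) {
  for (const binding of bindings) {
    const element = elements[binding.elementIndex]
    const value = expressions[binding.stubIndex]
    if (binding.attributeName) {
      element.setAttribute(binding.attributeName, value)
    } else {
      element.textContent = value
    }
  }
}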

Note: you may wonder what the point of template-cloning is, if we end up needing to call setAttribute and textContent anyway. The answer is that most HTML templates are largely static content with a few dynamic “holes.” By using template-cloning, we clone the vast majority of the DOM, while only doing extra work for the “holes.” This is the key insight that makes this system work so well.

Another interesting pattern to implement would be iterations (or repeaters), which come with their own set of challenges, like reconciling lists between updates and handling “keys” for efficient replacement.

I’m tired, though, and this blog post has gone on long enough. So I leave the rest as an exercise to the reader!

Conclusion

So there you have it. In the span of one (lengthy) blog post, we’ve implemented our very own JavaScript framework. Feel free to use this as the foundation for your brand-new JavaScript framework, to release to the world and enrage the Hacker News crowd.

Personally I found this project very educational, which is partly why I did it in the first place. I was also looking to replace the current framework for my emoji picker component with a smaller, more custom-built solution. In the process, I managed to write a tiny framework that passes all the existing tests and is ~6kB smaller than the current implementation, which I’m pretty proud of.

In the future, I think it would be neat if browser APIs were full-featured enough to make it even easier to build a custom framework. For example, the DOM Part API proposal would take out a lot of the drudgery of the DOM parsing-and-replacement system we built above, while also opening the door to potential browser performance optimizations. I could also imagine (with some wild gesticulation) that an extension to Proxy could make it easier to build a full reactivity system without worrying about details like flushing, batching, or cycle detection.

If all those things were in place, then you could imagine effectively having a “Lit in the browser,” or at least a way to quickly build your own “Lit in the browser.” In the meantime, I hope that this small exercise helped to illustrate some of the things framework authors think about, and some of the machinery under the hood of your favorite JavaScript framework.

Thanks to Pierre-Marie Dartus for feedback on a draft of this post.

Footnotes

1. Now that we’ve built the framework, you can see why the content passed to innerHTML can be considered trusted. All HTML tokens either come from tagged template literals (in which case they’re fully static and authored by the developer) or are placeholders (which are also written by the developer). User content is only set using setAttribute or textContent, which means that no HTML sanitization is required to avoid XSS attacks. Although you should probably just use CSP anyway!