The balance has shifted away from SPAs

There’s a feeling in the air. A zeitgeist. SPAs are no longer the cool kids they were 10 years ago.

Hip new frameworks like Astro, Qwik, and Elder.js are touting their MPA capabilities with “0kB JavaScript by default.” Blog posts are making the rounds listing all the challenges with SPAs: history, focus management, scroll restoration, Cmd/Ctrl-click, memory leaks, etc. Gleeful potshots are being taken against SPAs.

I think what’s less discussed, though, is how the context has changed in recent years to give MPAs more of an upper hand against SPAs. In particular:

  1. Chrome implemented paint holding – no more “flash of white” when navigating between MPA pages. (Safari already did this.)
  2. Chrome implemented back-forward caching – now all major browsers have this optimization, which makes navigating back and forth in an MPA almost instant.
  3. Service Workers – once experimental, now effectively 100% available for those of us targeting modern browsers – allow for offline navigation without needing to implement a client-side router (and all the complexity therein; see the sketch just below this list).
  4. Shared Element Transitions, if accepted and implemented across browsers, would also give us a way to animate between MPA navigations – something previously only possible (although difficult) with SPAs.
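
To make the Service Worker point concrete, here is a minimal sketch of offline navigation for an MPA, with no client-side router involved. The cache name and the pre-cached URL list are placeholders; navigations go to the network first and fall back to the cache when offline.

const CACHE = 'pages-v1';

self.addEventListener('install', (event) => {
  // Pre-cache a few key pages so there's something to fall back to.
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(['/', '/about']))
  );
});

self.addEventListener('fetch', (event) => {
  if (event.request.mode !== 'navigate') {
    return; // only handle full page navigations here
  }
  event.respondWith(
    fetch(event.request)
      .then((response) => {
        // Keep a copy of the latest version of the page for offline use.
        const copy = response.clone();
        caches.open(CACHE).then((cache) => cache.put(event.request, copy));
        return response;
      })
      .catch(() =>
        caches.match(event.request).then((cached) => cached || caches.match('/'))
      )
  );
});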

This is not to say that SPAs don’t have their place. Rich Harris has a great talk on “transitional apps,” which outlines some reasons you may still want to go with an SPA. For instance, you might want an omnipresent element that survives page navigations, such as an audio/video player or a chat widget. Or you may have an infinite-loading list that, on pressing the back button, returns to the previous position in the list.

Even teams that are not explicitly using these features may still choose to go with an SPA, just because of the “unknown” factor. “What if we want to implement navigation animations some day?” “What if we want to add an omnipresent video player?” “What if there’s some customization we want that’s not supported by existing browser APIs?” Choosing an MPA is a big architectural decision that may effectively cut off the future possibility of taking control of the page in cases where the browser APIs are not quite up to snuff. At the end of the day, an SPA gives you full control, and many teams are hesitant to give that up.

That said, we’ve seen a similar scenario play out before. For a long time, jQuery provided APIs that the browser didn’t, and teams that wanted to sleep soundly at night chose jQuery. Eventually browsers caught up, giving us APIs like querySelector and fetch, and jQuery started to seem like unnecessary baggage.

I suspect a similar story may play out with SPAs. To illustrate, let’s consider Rich’s examples of things you’d “need” an SPA for:

  • Omnipresent chat widget: use Shared Element Transitions to keep the widget painted during MPA navigations.
  • Infinite list that restores scroll position on back button: use content-visibility and maybe store the state in the Service Worker if necessary.
  • Omnipresent audio/video player that keeps playing during navigations: not possible today in an MPA, but who knows? Maybe the Picture-in-Picture API will support this someday.

To be clear, though, I don’t think SPAs are going to go away entirely. I’m not sure how you could reasonably implement something like Photoshop or Figma as an MPA. But if new browser APIs and features keep landing that slowly chip away at SPAs’ advantages, then more and more teams in the future will probably choose to build MPAs.

Personally I think it’s exciting that we have so many options available to us (and they’re all so much better than they were 10 years ago!). I hope folks keep an open mind, and keep pushing both SPAs and MPAs (and “transitional apps,” or whatever we’re going to call the next thing) to be better in the future.

The struggle of using native emoji on the web

Emoji are a standard overseen by the Unicode Consortium. The web is a standard governed by bodies such as the W3C, WHATWG, and TC39. Both emoji and the web are ubiquitous.

So you might be forgiven for thinking that, in 2022, it’s possible to plop an emoji on a web page and have it “just work”:

If you see a lotus flower above, then congratulations! You’re on a browser or operating system that supports Emoji 14.0, released in September 2021. If not, you might see something that looks like the scoreboard on an old ’80s arcade game:

Black square with monospace text inside of hexadecimal encoding

Another apt description would be “robot barf.”

Let’s try another one. What does this emoji look like to you?

If you see a face with spiral eyes, then wonderful! Your browser can render Emoji 13.1, released in September 2020. If not, you might see a puzzling combination of face with crossed-out eyes and a shooting (“dizzy”) star:

😵💫

It’s a fun bit of cartoon iconography to know that this combination means “dizzy face,” but for most folks, it doesn’t really evoke the same meaning. It’s not much better than the robot barf.

Emoji and browser support

If you’re like me, you’re a minimalist when it comes to web development. If I don’t have to rebuild something from scratch, then I’ll avoid doing so. I try to “use the platform” as much as possible and lean on existing web standards and browser capabilities.

When it comes to emoji, there are a lot of potential upsides to using the platform. You don’t need to bring your own heavy emoji font, or use a spritesheet, or do any manual DOM processing to replace text with <img>s. But sadly, if you try to avoid these heavy-handed techniques and just, you know, use emoji on the web, you’ll quickly run into the kinds of problems I describe above.

The first major problem is that, although emoji are released by the Unicode Consortium at a yearly cadence, OSes don’t always update in a timely manner to add the latest-and-greatest characters. And the browser, in most cases, is beholden to the OS to render whatever emoji fonts are provided by the underlying system (e.g. Apple Color Emoji on iOS, Microsoft Segoe Color Emoji on Windows, etc.).

In the case of major releases (such as Emoji 14.0), a missing character means the “robot barf” shown above. In the case of minor releases (such as Emoji 13.1), it can mean that the emoji renders as a bizarre “double” emoji – some of my favorites include “man with floating wig of red hair” (👨🦰) for “man with red hair” (👨‍🦰) and “bear with snowflake” (🐻❄️) for “polar bear” (🐻‍❄️).

If I’m trying to convince you that native emoji are worth investing in for your website, I’ve probably lost half my audience at this point. Most chat and social media app developers would prefer to have a consistent experience across all browsers and devices – not a broken experience for some users. And even if the latest emoji were perfectly supported across devices, these developers may still prefer a uniform look-and-feel, which is why vendors like Twitter, Facebook, and WhatsApp actually design their own emoji fonts.

Detecting broken emoji

Let’s say, though, that you’re comfortable with emoji looking different on different platforms. After all – maybe Apple users would prefer to see Apple emoji, and Windows users would prefer to see Windows emoji. And in any case, you’d rather not reinvent what the OS already provides. What do you have to do in this case?

Well, first you need a way to detect broken emoji. This is actually much harder than it sounds, and basically boils down to rendering the emoji to a <canvas>, testing that it has an actual color, and also testing that it doesn’t render as two separate characters. (is-emoji-supported is a decent JavaScript library that does this.)
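
For the curious, here is a rough sketch of that detection idea. The 32px font size, the reference glyph, and the “any non-grayscale pixel” heuristic are all assumptions on my part; is-emoji-supported handles many more edge cases than this:

// Returns true if the emoji appears to render as a single, colored glyph.
function supportsEmoji(emoji) {
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');
  ctx.font = '32px sans-serif';

  // Very rough width heuristic: if the emoji is much wider than a single
  // reference glyph, it probably fell back to two separate characters.
  const referenceWidth = ctx.measureText('\u25A1').width; // white square
  if (ctx.measureText(emoji).width >= 2 * referenceWidth) {
    return false;
  }

  // Draw it and check that at least one pixel is actually colored,
  // i.e. it isn't a black-and-white fallback glyph or an empty box.
  ctx.fillText(emoji, 0, 32);
  const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
  for (let i = 0; i < data.length; i += 4) {
    const [r, g, b, alpha] = [data[i], data[i + 1], data[i + 2], data[i + 3]];
    if (alpha > 0 && (r !== g || g !== b)) {
      return true;
    }
  }
  return false;
}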

This solution has a few downsides. First off, you now need to run JavaScript before rendering any text – with all the problems therein for SSR, performance, etc. Second, it doesn’t actually solve the problem – it just tells you that there is a problem. And it might not even work – I’ve seen this technique fail in cross-origin iframes in Firefox, presumably because the <canvas> triggered the browser’s fingerprinting detection.

But again, let’s just say that you’re comfortable with all this. You detect broken emoji and perhaps replace them with text saying “emoji not supported.” Or maybe you want a more graceful degradation, so you include half a megabyte of JSON data describing every emoji ever created, so that you can actually show some text to describe the emoji. (Of course, that file is only going to get bigger, and you’ll need to update it every year.)

I know what you’re thinking: “I just wanted to show an emoji on my web page. Why do I have to know everything about emoji?” But just wait: it gets worse.

Black-and-white older emoji

Okay, so now you’re successfully detecting whether an emoji is supported, so you can hide or replace those newfangled emoji that are causing problems. But would it occur to you that the oldest emoji might be problematic too?

This is the classic smiling face emoji. But depending on your browser, instead of the more familiar full-color version, you might see a simple black-and-white smiley. In case you don’t see it, here is a comparison, and here’s how it looks in Chrome on Windows:

Screenshot showing a black and white smiley face with no font-family and a color smiley face with system emoji font family

You’ll also see this same problem for some other older emoji, such as red heart (❤️) and heart suit (♥️), which both render as black hearts rather than red ones.

So how can we render these venerable emoji in glorious Technicolor? Well, after a lot of trial-and-error, I’ve landed on this CSS:

div {
  font-family: "Twemoji Mozilla",
               "Apple Color Emoji",
               "Segoe UI Emoji",
               "Segoe UI Symbol",
               "Noto Color Emoji",
               "EmojiOne Color",
               "Android Emoji",
               sans-serif;
}

Basically, what we have to do is point the font-family at a known list of built-in emoji fonts on various operating systems. This is similar to the “system font” trick.

If you’re wondering what “Twemoji Mozilla” is, well, it turns out that Firefox is a bit odd in that it actually bundles its own version of Twitter’s Twemoji font on Windows and Linux. This will be important later, but let’s set it aside for now.

What is an emoji, anyway?

At this point, you may be getting pretty tired of this blog post. “Nolan,” you might say, “why don’t you just tell me what to do? Just give me a snippet I can slap onto my website to fix all these dang emoji problems!” Well I wish it were as simple as just chucking a CSS font-family onto your body and calling it a day. But if you try that naïve approach, you’ll start to see some bizarre characters:

The text "Call me at #555-0123! You might have to hit the * or # on your smartphone™" with some of the characters thicker and larger and cartoonier than the others.

As it turns out, characters like the asterisk (*), octothorpe (#), trademark (™), and even the numbers 0-9 are technically emoji. And depending on your browser and OS, the system emoji font will either not render them at all, or it might render them as the somewhat-cartoony versions you see above.

Maybe to some folks it’s acceptable for these characters to be rendered as emoji, but I would wager that the average person doesn’t consider these numbers and symbols to be “emoji.” And it would look odd to treat them like that.

So all right, some “emoji” are not really emoji. This means we need to ensure that some characters (like the smiley face) render using the system emoji font, whereas other kinda-sorta emoji characters (like * and #) don’t. Potentially you could use a JavaScript tool like emoji-regex or a CSS tool like emoji-unicode-range to manage this, but in my experience, neither one handles all the various edge cases (nor have I found an off-the-shelf solution that does). And either way, it’s starting to feel pretty far from “use the platform.”
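
If you do go down that road, one rough sketch (ignoring HTML escaping and the edge cases mentioned above; the .emoji class name is made up) is to wrap only the sequences that emoji-regex considers emoji, and give just those spans the emoji font stack from earlier:

import emojiRegex from 'emoji-regex';

// Wrap each sequence that emoji-regex flags as emoji in a span that opts
// into the emoji font stack; everything else keeps the regular text font.
function wrapEmoji(text) {
  const regex = emojiRegex(); // a global RegExp matching emoji sequences
  return text.replace(regex, (match) => `<span class="emoji">${match}</span>`);
}

// In CSS: .emoji { font-family: "Twemoji Mozilla", "Apple Color Emoji", …; }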

Windows woes

I could stop right here, and hopefully I’ve made the point that using native emoji on the web is a painful experience. But I can’t help mentioning one more problem: flag emoji on Windows.

As it turns out, Microsoft’s emoji font does not have country flags on either Windows 10 or Windows 11. So instead of the US flag emoji, you’ll just see the characters “US” (and the equivalent country codes for other flags). Microsoft might have a good geopolitical reason to do this (although they’d have to explain why no other emoji vendor follows suit), but in any case, it makes it hard to talk about sports matches or national independence days.

Grid of some flag emoji such as the pirate flag and rainbow flag, followed by many two-letter character codes instead of emoji

Flag emoji in Chrome on Windows. You can have the pirate flag, you can have the race car flag, but you can’t root for Argentina vs Brazil in a soccer match.

Interestingly, this problem is actually solvable in Firefox, since it ships its own “Twemoji Mozilla” font (which, furthermore, tends to stay more up-to-date than the built-in Microsoft font). But the most popular browser engine on Windows, Chromium, does not ship its own emoji font and doesn’t plan to. There’s actually a neat tool called country-flag-emoji-polyfill that can detect the broken flag support and patch in a minimal Twemoji font to fix it, but again, it’s a shame that web developers have to jump through so many hoops to get this working.

(At this point, I should mention that the Unicode Consortium themselves have come out against flag emoji and won’t be minting any more. I can understand the sentiment behind this – a standards consortium doesn’t want to be in the business of adjudicating geopolitical boundaries. But in my opinion, the cat’s already out of the bag. And it seems bizarre that Wales and Scotland get their own flags, but no other countries, states, provinces, municipalities, duchies, earldoms, or holy empires ever will. It seems guaranteed to lead to an explosion of non-standard, vendor-specific flags, which is already happening according to Emojipedia.)

Conclusion

I could go on. I really could. I could talk about the sad state of browser support for color fonts, or how to avoid mismatched emoji fonts in Firefox, or subtle issues with measuring emoji width on Windows, or how you need to install a separate package for emoji to work at all in Chrome on Linux.

But in the end, my message is a simple one: I, as a web developer, would like to use emoji on my web sites. And for a variety of reasons, I cannot.

Screenshot of a grid of emoji smileys where some emoji are empty boxes with text inside

I built an emoji picker called emoji-picker-element. This is what it would look like if I didn’t bend over backwards to fix emoji problems.

At a time when web browsers have gained a staggering array of new capabilities – including Bluetooth, USB, and access to the filesystem – it’s still a struggle to render a smiley face. It feels a bit odd to argue in 2022 that “the web should have emoji support,” and yet here I stand, cap in hand, making my case.

You might wonder why browsers have been so slow to fix this problem. I suspect part of it is that there are ready workarounds, such as twemoji, which parses the DOM to look for emoji sequences and replaces them with <img>s. The fact that this technique isn’t great for performance (downloading extra images, processing the DOM and mutating it, needing to run JavaScript at all) might seem unimportant when you consider the benefits (a unified look-and-feel across devices, up-to-date emoji support).

Part of me also wonders if this is one of those cases where the needs of larger entities have eclipsed the needs of smaller “mom-and-pop” web shops. A well-funded tech company building a social media app with a massive user base has the resources to handle these emoji problems – heck, they might even design their own emoji font! Whereas your average small-time blogger, agency, or studio would probably prefer for emoji to “just work” without a lot of heavy lifting. But for whatever reason, their voices are not being heard.

What do I wish browsers would do? I don’t have much of a grand solution in mind, but I would settle for browsers following the Firefox model and bundling their own emoji font. If the OS can’t keep its emoji up-to-date, or if it doesn’t want to support certain characters (like country flags), then the browser should fill that gap. It’s not a huge technical hurdle to bundle a font, and it would help spare web developers a lot of the headaches I listed above.

Another nice feature would be some sensible way to render what are colloquially known as “emoji” as emoji. So for instance, the “smiley face” should be rendered as emoji, but the numbers 0-9 and symbols like * and # should not. If backwards compatibility is a concern, then maybe we need a new CSS property along the lines of text-rendering: optimizeLegibility – something like emoji-rendering: optimizeForCommonEmoji would be nice.

In any case, even if this blog post has only served to dissuade you from ever trying to use native emoji on the web, I hope that I’ve at least done a decent job of summarizing the current problems and making the case for browsers to help solve it. Maybe someday, when browsers everywhere can render a smiley face, I can write something other than :-) to show my approval.

Update: At some point, WordPress started automatically converting emoji in this blog post to <img>s. I’ve replaced some of the examples with CodePens to make it clearer what’s going on. Of course, the fact that WordPress feels compelled to use <img>s instead of native emoji kind of proves my point.

Five years of quitting Twitter

It’s been almost five years since I deleted my Twitter account. I didn’t just delete the app or deactivate – I deleted my whole account and my entire tweet history, lighting a match and burning the bridge behind me.

I don’t want to pretend to be some kind of seer, but since then, divesting yourself from social media has become a somewhat fashionable lifestyle choice. For a certain type of person, it’s the kind of pro-mental health, self-care kind of thing you might do along with going vegan or taking up Vipassana meditation. (To make it clear that I’m not above such intellectual trendiness, I’ve tried all those things too.)

In this post, I want to talk honestly about the good and the bad that comes with deleting your Twitter account, from the perspective of a tech guy who’s plugged in to several different software communities (open source, web development, Node.js, etc.).

The good

Let’s start off with the good stuff. Twitter is no longer what I check first thing in the morning and the last thing before I go to sleep. In fact, I instituted a personal rule to charge my phone outside of the bedroom altogether so that I’m not tempted to read it in bed. (I don’t always hold fast to this.)

I have my RSS feed, I have Hacker News, and I have various news outlets (Ars Technica, Wired, etc.), so there’s plenty for me to read on the internet. But unlike Twitter, I actually run out of stuff to read and eventually get bored with my phone. I consider this a plus, even if it ends up driving me towards other screens – video games in particular. But even if my lofty goal is to spend more time reading books or riding my bike, I still consider time spent with my Switch or doing crossword puzzles to be time better spent than flicking through social media.

I also disabled all notifications on my phone except for IM and email, which helps reduce the neediness of my little pocket Tamagotchi. IM notifications are invaluable for keeping up with family and friends, but my email notifications are still sometimes a source of stress, so I try to unsubscribe as much as possible from any newsletters, automated updates, and other bullshit. If my email is going to buzz in my pocket and show me a notification, I want it to be something important.

I still have a Mastodon account, and I still host a Mastodon server at toot.cafe, but I’m not very active anymore. I mostly treat it as a write-only medium. My reasons for this are various, but basically I’ve become less of a booster of Mastodon (and the fediverse in general) over time. It’s a neat idea, and it still works pretty well for the cohort of hardcore techies and tech-adjacent folks who seem to be there, but I just don’t find it super interesting any more. Sometimes I think of Mastodon as my Twitter nicotine patch – it sorta feels like Twitter, it scratches the same itch, but it’s just not nearly as compelling.

The bad

If you’re the kind of techie who uses social media to connect with your peers and build your personal brand – the kind of person who speaks at conferences, writes blog posts, talks on podcasts, etc. – then quitting Twitter is a terrible idea. My blog posts get less traffic than they used to, I don’t get invited to as many conferences anymore, and even when I do give the odd podcast interview, there’s always an awkward moment when they ask for my Twitter handle, or which social media account they should direct traffic and followers to. (I dunno, GitHub? I think I have a Reddit account?)

My main public outlet these days is my blog, and from looking at the WordPress stats, my overall traffic has taken a hit since I quit Twitter. I’ve kind of ceased to exist for a certain segment of my (former) audience, and for the rest, I only exist when someone takes pity on me and links to my blog from Twitter, Reddit, Hacker News, or a big site like CSS Tricks. (I don’t abstain from Reddit or Hacker News, but I’m also not super active there.) It feels kind of weird to have quit Twitter, and yet to relish the traffic spike from a well-timed Twitter mention.

For those people who are re-sharing my content on social media, I suspect most of them found it from their RSS feed. So RSS definitely still seems alive and well, even if it’s just a small upstream tributary for the roaring downstream river of Twitter, Reddit, etc.

Another odd downside of deleting your Twitter account is that, after a cool-down period, someone can grab your Twitter handle. I didn’t realize this was a thing, so someone has squatted on my old Twitter name, presumably because they hope to re-sell it later, or maybe because they want the SEO juice? I have no idea. I would be mad about it, but the fact that this account exists (and my old mentions on Twitter still link to it!) makes Twitter a slightly shittier place, so in my own petty way, I’m kind of glad it exists.

The mixed bag

Some things that I miss from Twitter are both good and bad. Twitter is a sprawling global conversation, and a lot of the important debates in web development (client-rendering vs server-rendering, web components vs frameworks, etc.) were born and thrive there. I miss out on a lot of those debates, and many of them could serve as good fodder for a thoughtful blog post or open-source project, so I regret not having the creative spark that comes from those conversations.

The problem is that a lot of these debates are, in my opinion, either trivial or manufactured. Twitter (like all social media) is an outrage machine, designed to goose engagement using whatever means the algorithm finds through blind optimization. I fully believe that phenomena like “the great divide” in web development wouldn’t exist without Twitter, and to the extent that it does exist in the “real world,” it’s only because it was hatched on Twitter before infecting the rest of us. Social media engagement thrives when it finds a wedge to drive between two parts of a community, where it can cause incendiary content to cross-pollinate from one camp to another, creating an endless cycle of irritation, condemnation, dunking, and flaming.

Occasionally in my RSS feed I’ll read a post that starts off by saying, “There’s been a huge debate about…” or “There’s been a recent controversy over…” and then eventually I realize the whole post is about some Twitter beef. I don’t miss being on the front lines of these kinds of battles, but I do think some of these debates are worth having, so I have mixed feelings about it.

Conclusion

I don’t plan on coming back to Twitter. Mostly because I just don’t need it anymore – I’m not super active at conferences or meetups, I don’t have a workshop or service I need to sell, and so there’s little professional reason for me to be there. I like posting on my blog, but I can only hope that my content gets attention in direct correlation to the value that people derive from it. If I write a good blog post, people will read it. I try to focus on that and that alone.

Honestly, even that lifestyle – writing blog posts, watching them occasionally blow up on Hacker News and Reddit, reading the occasionally scathing comments – is hard enough on my mental health. Whenever I write a blog post these days, I have a period of anxiety and dread where I worry about the potential backlash. I mitigate that a bit by carefully editing my posts to remove anything that could be misconstrued, and by occasionally having some trusted friends review a draft (thank you all!), but frankly it’s a bit sad that I even do this, because my writing has gotten decidedly more boring over the years.

Sometimes I go back and read my blog posts from 2014 and marvel at how freewheeling, irreverent, and downright joyful my writing was. I don’t really write like that anymore, because social media (and the internet in general) have conditioned me to constantly fret over negative attention. So I act as my own PR firm, carefully focus-testing and bowdlerizing my prose until it’s as dry as a slice of burnt toast. Sometimes I can escape from this trap a little bit (like I’m trying to do right now), but overall I worry that my writing has gotten worse, not better, over time. (Another worry!)

So given my inherent worry-prone nature about posting content on the internet, Twitter is probably just not right for me. The high I would get from seeing a tweet go viral and getting adulation from my peers just doesn’t outweigh the anxiety, the sleeplessness, or the careful tiptoeing and sanitization of my thoughts that come with heavy social media use. I’m already bad enough with that as it is, just with my blog; coming back to Twitter would dial that up to 11.

So I deleted my Twitter account, and I plan to keep it that way. Should you do the same? Well, I dunno. If you need it for your livelihood, then decidedly not. You should probably just see Twitter as a necessary evil and try to insulate yourself from the bad parts while profiting from the good parts. If you’re a casual user, then maybe you’ve already figured out a healthy way to live with Twitter (curating your feed, turning off the algorithmic timeline, whatever), and if so – good for you! For me, I have too much stubbornness and too little faith in my own ability to manage my social media addiction to want to give Twitter a second try.

Memory leaks: the forgotten side of web performance

I’ve researched and learned enough about client-side memory leaks to know that most web developers aren’t worrying about them too much. If a web app leaks 5 MB on every interaction, but it still works and nobody notices, then does it matter? (Kinda sounds like a “tree in the forest” koan, but bear with me.)

Even those who have poked around in the browser DevTools to dabble in the arcane art of memory leak detection have probably found the experience… daunting. The effort-to-payoff ratio is disappointingly high, especially compared to the hundreds of other things that are important in web development, like security and accessibility.

So is it really worth the effort? Do memory leaks actually matter?

I would argue that they do matter, if only because the lack of care (as shown by public-facing SPAs leaking up to 186 MB per interaction) is a sign of the immaturity of our field, and an opportunity for growth. Similarly, five years ago, there was much less concern among SPA authors for accessibility, security, runtime performance, or even ensuring that the back button maintained scroll position (or that the back button worked at all!). Today, I see a lot more discussion of these topics among SPA developers, and that’s a great sign that our field is starting to take our craft more seriously.

So why should you, and why shouldn’t you, care about memory leaks? Obviously I’m biased because I have an axe to grind (and a tool I wrote, fuite), but let me try to give an even-handed take.

Memory leaks and software engineering

In terms of actual impact on the business of web development, memory leaks are a funny thing. If you speed up your website by 2 seconds, everyone agrees that that’s a good thing with a visible user impact. If you reduce your website’s memory leak by 2 MB, can we still agree it was worth it? Maybe not.

Here are some of the unique characteristics of memory leaks that I’ve observed, in terms of how they actually fit into the web development process. Memory leaks are:

  1. Low-impact until critical
  2. Hard to diagnose
  3. Trivial to fix once diagnosed

Low-impact…

Most web apps can leak memory and no one will ever notice. Not the user, not the website author – nobody. There are a few reasons for this.

First off, browsers are well aware that the web is a leaky mess and are already ruthless about killing background tabs that consume too much memory. (My former colleague on the Microsoft Edge performance team, Todd Reifsteck, told me way back in 2016 that “the web leaks like a sieve.”) A lot of users are tab hoarders (essentially using tabs as bookmarks), and there’s a tacit understanding between browser and user that you can’t really have 100 tabs open at once (in the sense that the tab is actively running and instantly available). So you click on a tab that’s a few weeks old, boom, there’s a flash of white while the page loads, and nobody seems to mind much.

Second off, even for long-lived SPAs that the user may habitually check in on (think: GMail, Evernote, Discord), there are plenty of opportunities for a page refresh. The browser needs to update. The user doesn’t trust that the data is fresh and hits F5. Something goes wrong because programmers are terrible at managing state, and users are well aware that the old turn-it-off-and-back-on-again solves most problems. All of this means that even a multi-MB leak can go undetected, since a refresh will almost always occur before an Out Of Memory crash.

Screenshot of Chrome browser window with sad tab and "aw snap something went wrong" message

Chrome’s Out Of Memory error page. If you see this, something has gone very wrong.

Third, it’s a tragedy-of-the-commons situation, and people tend to blame the browser. Chrome is a memory hog. Firefox gobbles up RAM. Safari is eating all my memory. For reasons I can’t quite explain, people with 100+ open tabs are quick to blame the messenger. Maybe this goes back to the first point: tab hoarders expect the browser to automatically transition tabs from “thing I’m actively using” to “background thing that is basically a bookmark,” seamlessly and without a hitch. Browsers have different heuristics about this, some heuristics are better than others, and so in that sense, maybe it is the browser’s “fault” for failing to adapt to the user’s tab-hoarding behavior. In any case, the website author tends to escape the blame, especially if their site is just 1 out of 100 naughty tabs that are all leaking memory. (Although this may change as more browsers call out tabs individually in Task Manager, e.g. Edge and Safari.)

…Until critical

What’s interesting, though, is that every so often a memory leak will get so bad that people actually start to notice. Maybe someone opens up Task Manager and wonders why a note-taking app is consuming more RAM than DOTA. Maybe the website slows to a crawl after a few hours of usage. Maybe the users are on a device with low available memory (and of course the developers, with their 32GB workstations, never noticed).

Here’s what often happens in this case: a ticket lands on some web developer’s desk that says “Memory usage is too high, fix it.” The developer thinks to themselves, “I’ve never given much thought to memory usage, well let’s take a stab at this.” At some point they probably open up DevTools, click “Memory,” click “Take snapshot,” and… it’s a mess. Because it turns out that the SPA leaks, has always leaked, and in fact has multiple leaks that have accumulated over time. The developer assumes this is some kind of sudden-onset disease, when in fact it’s a pre-existing condition that has gradually escalated to stage-4.

The funny thing is that the source of the leak – the event listener, the subscriber, whatever – might not even be the proximate cause of the recent crisis. It might have been there all along, and was originally a tiny 1 MB leak nobody noticed, until suddenly someone attached a much bigger object to the existing leak, and now it’s a 100 MB leak that no one can ignore.

Unfortunately to get there, you’re going to have to hack your way through the jungle of the half-dozen other leaks that you ignored up to this point. (We fixed the leak! Oh wait, no we didn’t. We fixed the other leak! Oh wait, there’s still one more…) But that’s how it goes when you ignore a chronic but steadily worsening illness until the moment it becomes a crisis.

Hard to diagnose

This brings us to the second point: memory leaks are hard to diagnose. I’ve already written a lot about this, and I won’t rehash old content. Suffice it to say, the tooling is not really up to the task (despite some nice recent innovations), even if you’re a veteran with years of web development experience. Some gotchas that tripped me up include the fact that you have to ignore WeakMaps and circular references, and that the DevTools console itself can leak memory.

Oh and also, browsers themselves can have memory leaks! For instance, see these ResizeObserver/IntersectionObserver leaks in Chromium, Firefox, and Safari (fixed in all but Firefox), or this Chromium leak in lazy-loading images (not fixed), or this discussion of a leak in Safari. Of course, the tooling will not help you distinguish between browser leaks and web page leaks, so you just kinda have to know this stuff. In short: good luck!

Even with the tool that I’ve written, fuite, I won’t claim that we’ve reached a golden age of memory leak debugging. My tool is better than what’s out there, but that’s not saying much. It can catch the dumb stuff, such as leaking event listeners and DOM nodes, and for the more complex stuff like leaking collections (Arrays, Maps, etc.), it can at least point you in the right direction. But it’s still up to the web developer to decide which leaks are worth chasing (some are trivial, others are massive), and to track them down.

I still believe that the browser DevTools (or perhaps professional testing tools, such as Cypress or Sentry), should be the ones to handle this kind of thing. The browser especially is in a much better position to figure out why memory is leaking, and to point the web developer towards solutions. fuite is the best I could do with userland tooling (such as Puppeteer), but overall I’d still say we’re in the Stone Age, not the Space Age. (Maybe fuite pushed us to the Bronze Age, if I’m being generous to myself.)

Trivial to fix once diagnosed

Here’s the really surprising thing about memory leaks, though, and perhaps the reason I find them so addictive and keep coming back to them: once you figure out where the leak is coming from, they’re usually trivial to fix. For instance:

  • You called addEventListener but forgot to call removeEventListener.
  • You called setInterval, but forgot to call clearInterval when the component unloaded.
  • You added a DOM node, but forgot to remove it when the page transitions away.
  • Etc.

You might have a multi-MB leak, and the fix is one line of code. That’s a massive bang-for-the-buck! That is, if you discount the days of work it might have taken to find that line of code.
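
For instance, here is a minimal sketch of the classic listener leak and its one-line fix (the component and method names are made up):

// A classic leak: the component adds a window listener on mount but never
// removes it, so every mounted instance (and everything it closes over)
// stays reachable forever.
class Widget {
  mount() {
    this.onResize = () => this.relayout();
    window.addEventListener('resize', this.onResize);
  }

  unmount() {
    // The one-line fix: tear down what you set up on mount.
    window.removeEventListener('resize', this.onResize);
  }

  relayout() { /* ... */ }
}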

This is where I would like to go with fuite. It would be amazing if you could just point a tool at your website and have it tell you exactly which line of code caused a leak. (It’d be even better if it could open a pull request to fix the leak, but hey, let’s not get ahead of ourselves.)

I’ve taken some baby steps in this direction by adding stacktraces for leaking collections. So for instance, if you have an Array that is growing by 1 on every user interaction, fuite can tell you which line of code actually called Array.push(). This is a huge improvement over v1.0 of fuite (which just told you the Array was leaking, but not why), and although there are edge cases where it doesn’t work, I’m pretty proud of this feature. My goal is to expand this to other leaks (event listeners, DOM nodes, etc.), although since this is just a tool I’m building in my spare time, we’ll see if I get to it.

Screenshot of console output showing leaking collections and stacktraces for each

fuite showing stacktraces for leaking collections.

After releasing this tool, I also learned that Facebook has built a similar tool and is planning to open-source it soon. That’s great! I’m excited to see how it works, and I’m hoping that having more tools in this space will help us move past the Stone Age of memory leak debugging.

Conclusion

So to bring it back around: should you care about memory leaks? Well, if your boss is yelling at you because customers are complaining about Out Of Memory crashes, then yeah, you absolutely should. Are you leaking 5 MB, and nobody has complained yet? Well, maybe an ounce of prevention is worth a pound of cure in this case. If you start fixing your memory leaks now, it might avoid that crisis in the future when 5 MB suddenly grows to 50 MB.

Alternatively, are you leaking a measly ~1 kB because your routing library is appending some metadata to an Array? Well, maybe you can let that one slide. (fuite will still report this leak, but I would argue that it’s not worth fixing.)

On the other hand, all of these leaks are important in some sense, because even thinking about them shows a dedication to craftsmanship that is (in my opinion) too often lacking in web development. People write a web app, they throw something buggy over the wall, and then they rewrite their frontend four years later after users are complaining too much. I see this all the time when I observe how my wife uses her computer – she’s constantly telling me that some app gets slower or buggier the longer she uses it, until she gives up and refreshes. Whenever I help her with her computer troubles, I feel like I have to make excuses for my entire industry, for why we feel it’s acceptable to waste our users’ time with shoddy, half-baked software.

Maybe I’m just a dreamer and an idealist, but I really enjoy putting that final polish on something and feeling proud of what I’ve created. I notice, too, when the software I use has that extra touch of love and care – and it gives me more confidence in the product and the team behind it. When I press the back button and it doesn’t work, I lose a bit of trust. When I press Esc on a modal and it doesn’t close, I lose a bit of trust. And if an app keeps slowing down until I’m forced to refresh, or if I notice the memory steadily creeping up, I lose a bit of trust. I would like to think that fixing memory leaks is part of that extra polish that won’t necessarily win you a lot of accolades, but your users will subtly notice, and it will build their confidence in your software.

Thanks to Jake Archibald and Todd Reifsteck for feedback on a draft of this post.

2021 book review

I’ve been doing end-of-the year book reviews for almost 5 years now. At this point I have to ask myself: why am I still doing this?

To encourage myself to read more? To show off? To convince myself that this blog is about more than just tech stuff? There may be some truth to all those, but I think my main goal is just to recommend some good books to others. I don’t use GoodReads (although I link to it, as it seems nice), so this is my forum where I highlight books I’ve enjoyed, in the hope that others might find something interesting to read in the new year.

So without further ado, on with the book reviews!

Fiction

Like last year, I’ve been reading a lot of fantasy novels. My methodology is crude: I just googled “best fantasy novels” and started from there. In the past, I was never much of a wizards-and-pegasuses kind of reader (I always preferred sci-fi and dystopias), so I’m trying to make up for lost time.

The Name of the Wind and The Wise Man’s Fear by Patrick Rothfuss

I struggled to like the first book. My main beef was that 1) it’s a bit too predictable and groan-inducing with how the main character is a preternaturally gifted Mary Sue who just inevitably excels at everything, and 2) after a great “street urchin” backstory, the action really grinds to a halt when the character arrives at university and mostly mopes after his would-be girlfriend.

The second book, however, redeems the first one in my eyes. It makes up for some of the dull campiness of the first book with a never-ending series of inventive subplots. Just as soon as you’re bored with one setting or cast of characters, it dramatically switches to another. It almost feels like a collection of vignettes.

I’m eagerly awaiting the third book, which (like The Winds of Winter by George R. R. Martin), seems perpetually delayed.

Dragonflight by Anne McCaffrey

This is another book that I struggled to like. The premise is so good (dragons! extra-planetary colonization! a perennial existential threat!), and I have friends who rave about the Dragonriders of Pern series. But to be honest, I just found it to be a bit of a slog. I felt like the author was taking too much time to set up names, places, history, concepts – almost like she started writing an encyclopedic Silmarillion rather than an accessible Hobbit. By the time I had gotten the lingo down and could keep the characters’ names straight, the story was over.

I’ve picked up the next couple books in the series, and I’m going to give them a shot, but I don’t have high hopes.

Kindred by Octavia Butler

What I love about this book is that the author takes a completely ridiculous premise and treats it with utmost seriousness, and by the end you’re so invested in the story that it doesn’t matter that the paranormal elements are never explained. Gives you a good sense of what it would feel like to live in a society where daily barbarism is completely normalized.

Is it sci-fi? Is it fantasy? Hard to categorize, but I would lean towards sci-fi, if we define sci-fi as “putting human beings in otherworldly situations to see how they tick.” In any case, a great read.

2034 by Elliot Ackerman and James Stavridis

A chilling and all-too-plausible near-future sci-fi. I appreciate the attention to detail that comes from having a subject matter expert (in military matters) as a co-author. Hopefully it will turn out to be a cautionary tale rather than a prescient prediction.

The Ministry for the Future by Kim Stanley Robinson

A book that starts out with a bang and gradually limps towards an ending. I had to put it down ~80% of the way through because it got into preachy, starry-eyed utopia territory. Maybe I’m just a cynic, but the more pessimistic predictions in the book seem way more believable to me.

Premier Sang by Amélie Nothomb

Amélie Nothomb is one of my favorite Francophone authors, and not just because my French is terrible, and her writing is simple enough that I can understand it without constantly switching to a dictionary. I picked up this book at random while on vacation in France and gobbled it up on the plane ride back.

The story starts out with an incredible hook – a firing squad! – and from there gives a richly detailed (and ultimately personal) character study. The scenes from the protagonist’s childhood, where he’s alternately coddled and neglected (but craves the latter!), are especially poignant.

Sorry for recommending a non-English book, but hopefully it will be translated soon!

Non-fiction

Why We Love Dogs, Eat Pigs, and Wear Cows by Melanie Joy

I’ve spent probably the past 15 years of my life struggling with a basic question: what to eat? I’ve gone through carnism, pescetarianism, vegetarianism, veganism, and right back around several times. These days I’m probably best-described as flexitarian (i.e. I avoid meat, but I won’t turn down a turkey dinner at Thanksgiving).

If you’re not already interested in vegetarianism or veganism, this book will not convince you of anything. For myself, I found it pretty depressing, because the situation feels kind of hopeless to me. The sheer scale of animal suffering in factory farms makes it a good candidate for one of, if not the, most consequential ethical questions of our day, and yet the average person couldn’t care less, and is irritated to even consider it. Exploring this question will make you the most unpopular person at a dinner party, and probably cause a lot of stress and annoyance for your friends and relatives if they feel obliged to accommodate your dietary choices.

So why do I read this stuff? Well I guess, like a good car crash, I just can’t look away. If I’m going to be an ethical monster, I would at least like to be cognizant of it when I put a forkful of egg or cheese (or rarely, meat) into my mouth. And I’d like to have a ready-made answer if someone asks why I always order the tofu. And I’d like to steel my resolve as I continually search for good beans-and-rice and tempeh recipes that can compete with my fond memories of a juicy Reuben sandwich. (This stir-fry recipe is quite good.) My inner monologue on food is complicated, I don’t have it all figured out, but I’m trying to wrestle with the tough questions.

Dialogues on Ethical Vegetarianism by Michael Huemer

Another pro-vegan book that will depress you if you’re already converted, and probably convince you of nothing if you’re not. For myself, I found it interesting because it fairly neatly demolishes all of the plausible excuses for eating meat or animal products. (Yes, that one, and that one, and that other one you just thought of.) This is a good book for the open-minded person who really wants to engage with the best arguments for veganism, not just the straw-man.

I’ll also say that, for a philosophy book, this is eminently readable. I really enjoy the short, brisk pace and the “Socratic dialogue” style rather than a long-winded essay format.

How Not to Die by Michael Greger

As you might have noticed, I kind of went on a tear this year reading vegan literature. I really wanted to confront my meat-eating (and egg-eating, and dairy-eating) head on, so I tried to read all the “greatest hits” of vegan literature.

This book has a lot of sensible advice (eat more whole grains, eat more nuts and berries), although I get the impression that the author is pretty dogmatic in promoting a pure-vegan lifestyle. Based on reviews I’ve read of the book, he tends to ignore any research that advocates for moderate consumption of eggs, cheese, and fish, even though those are (as far as I can tell) pretty good ingredients in a healthy diet.

On the other hand, I do appreciate his no-nonsense, uncompromising position on certain health questions. (Salt? Nope, just avoid it. Oil? Nope, just fry everything with water or vinegar! Exercise? 30 minutes every day!) I prefer the “give it to me straight, doc” approach, rather than a resigned shrug and “Well, if you’re going to drink beer and eat potato chips, at least do it in moderation.” Although I think his advice is much too extreme for the average person to actually adhere to.

Hate, Inc. by Matt Taibbi

One of the best political non-fiction books I’ve read. For a few years, I’ve had the gnawing feeling that something in the media (including social media) felt “off,” but I couldn’t quite put my finger on it. This book does a good job of explaining why our media feels so hyper-partisan, and therefore less trustworthy.

Against the Grain by James C. Scott

Elaborates on one of the minor points you may recall from Sapiens by Yuval Noah Harari about how the agricultural revolution was probably kind of a bum deal for humanity. Also has some interesting commentary on the origins of viruses from livestock, and how they probably played havoc on early civilizations. (This book was written pre-Covid, by the way!)

A Brief History of Everyone Who Ever Lived by Adam Rutherford

A fun and intriguing read. Makes you realize how silly and petty (and temporary!) most of our human squabbles over race and ethnicity are. Also gives a great explanation of why “I descended from Charlemagne” is not such a remarkable statement.

The End of the End of History by Alex Hochuli, George Hoare, and Philip Cunliffe

A good, heterodox leftist perspective on the whole “what the heck is up with liberal democracy?” genre. A good pairing with The New Class War by Michael Lind.

Introducing fuite: a tool for finding memory leaks in web apps

Debugging memory leaks in web apps is hard. The tooling exists, but it’s complicated, cumbersome, and often doesn’t answer the simple question: Why is my app leaking memory?

Because of this, I’d wager that most web developers are not actively monitoring for memory leaks. And of course, if you’re not testing something, it’s easy for bugs to slip through.

When I first started looking into memory leaks, I assumed it was a rare thing. How could JavaScript – a language with an automatic garbage collector – be a big source of memory leaks? But the more I learned, the more I suspected that memory leaks were actually quite common in Single Page Apps (SPAs) – it’s just that nobody is testing for it!

Since most web developers aren’t fiddling with the Chrome memory tools for the fun of it, they probably won’t notice a leak until the browser tab crashes with an Out Of Memory error, or the page slows down, or someone happens to open up the Task Manager and notice that a website is using many megabytes (or even gigabytes!) of memory. But at that point, it’s gotten bad enough that there may be multiple leaks on the same page.

I’ve written about memory leaks in the past, but my advice basically boils down to: “Use the Chrome DevTools, follow these dozen tedious steps, and then maybe you can figure out why your page is leaking.” This is not a great developer experience, and I’m sure many readers just shook their heads in despair and moved on. It would be much better if a tool could find memory leaks automatically.

That’s why I wrote fuite (French for “leak”). fuite is a CLI tool that you can point at any URL, and it will analyze the page for memory leaks:

npx fuite https://example.com

That’s it! By default, it assumes that the site is a client-rendered SPA, and it will crawl the page for internal links (such as /about or /contact). Then, for each link, it runs the following steps:

  1. Click the link
  2. Press the browser back button
  3. Repeat to see if memory grows

If fuite finds any leaks, it will show which objects are suspected of causing the leak:

Test         : Go to /foo and back
Memory change: +10 MB
Leak detected: Yes

Leaking objects:

| Object            | # added | Retained size increase |
| ----------------- | ------- | ---------------------- |
| HTMLIFrameElement | 1       | +10 MB                 |

Leaking event listeners:

| Event        | # added | Nodes  |
| ------------ | ------- | ------ |
| beforeunload | 2       | Window |

Leaking DOM nodes:

DOM size grew by 6 node(s)  

To do this, fuite uses the basic strategy outlined in my blog post. It will launch Chrome, run some scenario n times (7 by default), and see if any objects are leaking a multiple of n times (7, 14, 21, etc.).

fuite will also analyze any Arrays, Objects, Maps, Sets, event listeners, and the overall DOM to see if any of those are leaking. For instance, if an Array grows by exactly 7 after 7 iterations, then it’s probably leaking.

Testing real-world websites

Somewhat surprisingly, the “basic” scenario of clicking internal links and pressing the back button is enough to find memory leaks in many SPAs. I tested fuite against the home pages for 10 popular frontend frameworks, and found leaks in all of them:

| Site    | Leak detected | Internal links | Average growth | Max growth |
| ------- | ------------- | -------------- | -------------- | ---------- |
| Site 1  | yes           | 8              | 27.2 kB        | 43 kB      |
| Site 2  | yes           | 10             | 50.4 kB        | 78.9 kB    |
| Site 3  | yes           | 27             | 98.8 kB        | 135 kB     |
| Site 4  | yes           | 8              | 180 kB         | 212 kB     |
| Site 5  | yes           | 13             | 266 kB         | 1.07 MB    |
| Site 6  | yes           | 8              | 638 kB         | 1.15 MB    |
| Site 7  | yes           | 7              | 1.37 MB        | 2.25 MB    |
| Site 8  | yes           | 15             | 3.49 MB        | 4.28 MB    |
| Site 9  | yes           | 43             | 5.57 MB        | 7.37 MB    |
| Site 10 | yes           | 16             | 14.9 MB        | 186 MB     |

In this case, “internal links” refers to the number of internal links tested, “average growth” refers to the average memory growth for every link (i.e. clicking it and then pressing the back button), and “max growth” refers to whichever internal link was leaking the most. Note that these numbers don’t include one-time setup costs, as fuite does one preflight iteration before the normal 7 iterations.

To confirm these results yourself, you can use the Chrome DevTools Memory tab. Here is a screenshot of the worst-performing site from my set, where I click a link, press the back button, take a heap snapshot, and repeat:

Screenshot of the Chrome DevTools memory heap snapshots list, showing memory starting at 18.7 MB and increasing by roughly 6 MB every iteration until reaching 41 MB on iteration 5

On this particular site, memory grows by about 6 MB every time you click a link and go back.

To avoid naming and shaming, I haven’t listed the actual websites. The point is just to show a representative sample of some popular SPAs – the authors of those websites are free to run fuite themselves and track down these leaks. (Please do!)

Caveats

Note, though, that not every leak in an SPA is an egregious problem that needs to be addressed. SPAs need to, for example, maintain the focus and scroll state to properly support accessibility, which means that there may be some small metadata that is stored for every page navigation. fuite will dutifully report such leaks (because they are leaks), but it’s up to the developer to decide if a tiny leak is worth chasing or not.

Some memory growth may also be due to browser-internal changes (such as JITing), which the web page can’t really control. So the memory growth numbers are an imperfect measure of what you stand to gain by fixing leaks – it could very well be that a few kBs of growth are unavoidable. (Although fuite tries to ignore browser-internal growth, and will only say “leaks detected” if there is actionable advice for the web developer.)

In rare cases, some memory growth may also be due to outright browser bugs. While analyzing the sites above, I actually found one (Site #4) that seems to be suffering from this Chrome bug due to <img loading="lazy"> not being unloaded. Unfortunately it’d be hard for fuite to detect browser bugs, so if you’re mystified by a leak, it’s good to cross-check against other browsers!

Also note that it’s almost impossible for a Multi-Page App (MPA) to leak, because the browser clears memory on every page navigation. (Assuming no browser bugs, of course.) During my testing, I found two frontend frameworks whose home pages were MPAs, and unsurprisingly, fuite couldn’t find any leaks in them. These were excluded from the results above.

Memory leaks are more of a concern for SPAs, where memory isn’t cleared automatically on each navigation. fuite is primarily designed for SPAs, although you can run it on MPAs too.

fuite currently only measures the JavaScript heap memory in the main frame of the page, so cross-origin iframes, Web Workers, and Service Workers are not measured. Something like performance.measureUserAgentSpecificMemory() would be more accurate, but it’s only available in cross-origin isolated contexts, so it’s not practical for a general-purpose tool right now.

Other memory leak scenarios

The “crawl for internal links” scenario is just the default one – you can also build your own. fuite is built on top of Puppeteer, so for whatever scenario you want to test, you essentially just need to write a Puppeteer script to tell the browser what to do. Some common scenarios you might test are:

  • Open a modal dialog and then close it
  • Hover over an element to show a tooltip, then mouse away to dismiss it
  • Scroll through an infinite-loading list, then navigate away and back
  • Etc.

In each of these scenarios, you would expect memory to be the same before and after. But of course, it’s not always so simple with web apps! You may be surprised how many of your dialogs and tooltips are harboring memory leaks.
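
As a rough sketch of the first scenario – the selectors are hypothetical placeholders, and the exact scenario-module format fuite expects is documented in its README – the Puppeteer side is just a function that drives the page:

// Hypothetical scenario: open a modal dialog, then close it again.
// fuite hands the scenario a Puppeteer `page`; everything below is plain
// Puppeteer, with placeholder selectors standing in for your own app.
export async function iteration(page) {
  await page.click('#open-modal-button')
  await page.waitForSelector('.modal-dialog')
  await page.click('.modal-dialog .close-button')
  await page.waitForSelector('.modal-dialog', { hidden: true })
}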

To analyze leaks, fuite captures heap snapshot files, which you can load in the Chrome DevTools to inspect. It also has a --debug mode that you can use for more fine-grained analysis: stepping through the test as it’s running, debugging the browser in real-time, analyzing the leaking objects, etc.

Under the hood, fuite is a fairly basic tool, and I won’t claim that it can do 100% of the work of fixing memory leaks. There is still the human component of figuring out why your objects were allocated and retained, and then finding a reasonable fix. But my goal is to automate ~95% of the work, so that it actually becomes achievable to fix memory leaks in web apps.

You can find fuite on GitHub. Happy leak hunting!

Update: I made a video tutorial showing how to debug memory leaks with fuite.

One weird trick to improve your website’s performance

Every so often, I come across a web performance post from what I like to call the “one weird trick” genre. It goes something like this:

“I improved my page load time by 50% by adding one line of CSS!”

or

“It’s 2x faster to use this JavaScript API than this other one!”

The thing is, I love a good performance post. I love when someone finds some odd little unexplored corner of browser performance and shines a light on it. It might actually provide some good data that can influence framework authors, library developers, and even browser vendors to improve their performance.

But more often than not, the “one weird trick” genre drives me nuts, because of what’s not included in the post:

  • Did you test on multiple browsers?
  • Did you profile to try to understand why something is slower or faster?
  • Did you publish your benchmark so that others can verify your results?

That’s why I wrote “How to write about web performance”, where I tried to summarize everything that I think makes for a great web perf post. But of course, not everyone reads my blog religiously (how dare they?), so the “one weird trick” genre continues unabated.

Look, I get it. Writing about performance is hard. And we’re not all experts. I’ve made the same mistakes myself, in posts like “High performance web worker messages” (2016) – where I found the “one weird trick” that it’s faster to stringify an object before sending it to a web worker. Of course this makes little sense (the browser should be able to serialize the object faster than you can do it yourself), and Surma has demonstrated that there’s no need to do this stringify dance in modern versions of Chrome. (As I’ve said before: if you’re not wrong about web perf today, you’ll be wrong tomorrow when browsers change!)

That said, I do occasionally find a post that really exemplifies what’s great about the web perf genre. For instance, this post by Eoin Hennessy about improving Webpack performance really ticks all the boxes. The author wasn’t satisfied with finding “one weird trick” – they had to understand why the trick worked. So they actually went to the trouble of building Node from source (!) to find the true root cause, and they even submitted a patch to Webpack to fix it.

A post like this, like a good mystery novel, has everything that makes for a satisfying story: the problem, the search, the resolution, the ending. Unlike the “one weird trick” posts, this one doesn’t leave me craving more. Instead, it leaves me feeling like I truly learned something about how the tools I rely on actually work.

So if you’ve found “one weird trick,” that’s great! There might actually be something really interesting there. But unless you do the extra research, it’s hard to say more than just “Well, this technique worked for me, on my website, in Chrome, in this scenario…” (etc.). If you want to extrapolate from your results to something more widely-applicable, you have to put in the work.

So here are some things you can do. Test in multiple browsers. File a browser bug if one is slower than the others. Ask around if you know any web perf experts or folks who work at browser vendors. Take a performance profile. And if you put in just a bit of extra effort, you might find more than “one weird trick” – you might find a valuable learning opportunity for web developers, browser vendors, or anyone interested in how the web works.

How to write about web performance

I’ve been writing about performance for a long time. I like to think I’ve gotten pretty good at it, but sometimes I look back on my older blog posts and cringe at the mistakes I made.

This post is an attempt to distill some of what I’ve learned over the years to offer as advice to other aspiring tinkerers, benchmarkers, and anyone curious about how browsers actually work when you put them to the test.

Why write about web performance?

The first and maybe most obvious question is: why bother? Why write about web performance? Isn’t this something that’s better left to the browser vendors themselves?

In some ways, this is true. Browser vendors know how their product actually works. If some part of the system is performing slowly, you can go knock on the door of your colleague who wrote the code and ask them why it’s slow. (Or send them a DM, I suppose, in our post-pandemic world.)

But in other ways, browser vendors really aren’t in a good position to talk frankly about web performance. Browsers are in the business of selling browsers. Performance numbers end up in marketing material – “Browser X is 25% faster than Browser Y” – and claims like that might need to be approved by the marketing department, the legal department, not to mention various owners and stakeholders…

And that’s only if your browser is the fast one. If you run a benchmark and it turns out that your browser is the slow one, or it’s a mixed bag, then browser vendors will keep pretty quiet about it. This is why whenever a browser vendor releases a new benchmark, surprise surprise! Their browser wins. So the browser vendors’ hands are pretty tied when it comes to accurately writing about how their product actually works.

Of course, there are exceptions to this rule. Occasionally you will find someone’s personal blog, or a comment on a public bugtracker, which betrays that their browser is actually not so great in some benchmark. But nobody is going to go out of their way to sing from the mountaintops about how lousy their browser is in a benchmark. If anything, they’ll talk about it after they’ve done the work to make things faster, meaning the benchmark already went through several rounds of internal discussion, and was maybe used to evaluate some internal initiative to improve performance – a process that might last years before the public actually hears about it.

Other times, browser vendors will release a new feature, announce it with some fanfare, and then make vague claims about how it improves performance without delving into any specifics. If you actually look into these claims, though, you might find that the performance improvement is pretty meager, or it only manifests in a specific context. (Don’t expect the team who built the feature to eagerly tell you this, though.)

By the way, I don’t blame the browser vendors at all for this situation. I worked on the performance team at Microsoft Edge (back in the EdgeHTML days, before the switch to Chromium), and I did the same stuff. I wrote about scrolling performance because, at the time, our engine was the best at scrolling. I wrote about input responsiveness after we had already made it faster. (Not before! Definitely not before.) I designed benchmarks that explicitly showed off the improvements we had made. I worked on marketing videos that showed our browser winning in experiments where we already knew we’d win.

And if you think I’m picking on Microsoft, I could easily find examples of the other browser vendors doing the same thing. But I choose not to, because I’d rather pick on myself. (If you work for a browser vendor and are reading this, I’m sure some examples come to mind.)

Don’t expect a car company to tell you that their competitor has better mileage. Don’t expect them to admit that their new model has a lousy safety rating. That’s what Consumer Reports is for. In the same way, if you don’t work at a browser vendor (I don’t, anymore), then you are blessedly free to say whatever you want about browsers, and to honestly assess their claims and compare them to each other in fair, unbiased benchmarks.

Plus, as a web developer, you might actually be in a better position to write a benchmark that is more representative of real-world code. Browser developers spend most of their day writing C, C++, and Rust, not necessarily HTML, CSS, and JavaScript. So they aren’t always familiar with the day-to-day concerns of working web developers.

The subtle science of benchmarking

Okay, so that was my long diatribe about why you’d bother writing about web performance. So how do you actually go about doing it?

First off, I’d say to write the benchmark before you start writing your blog post. Your conclusions and high-level takeaways may be vastly different depending on the results of the benchmark. So don’t assume you already know what the results are going to be.

I’ve made this mistake in the past! Once, I wrote an entire blog post before writing the benchmark, and then the benchmark completely upended what I was going to say in the post. I had to scrap the whole thing and start from scratch.

Benchmarking is science, and you should treat it with the seriousness of a scientific endeavor. Expect peer review, which means – most importantly! – publishing your benchmark publicly and providing instructions for others to test it. Because believe me, they will! I’ve had folks notify me of a bug in my benchmark after I published a post, so I had to go back and edit it to correct the results. (This is annoying, and embarrassing, but it’s better than willfully spreading misinformation.)

Since you may end up running your benchmark multiple times, and even generating your charts and tables multiple times, make an effort to streamline the process of gathering the data. If some step is manual, try to automate it.

These days, I like Tachometer because it automates a lot of the boring parts of benchmarking – launching a browser, taking multiple measurements, and deciding how many measurements are needed to achieve statistical significance, etc. Unfortunately it doesn’t automate the part where you generate charts and graphs, but I usually write small scripts to output the data in a format where I can easily import it into a spreadsheet app.

This also leads to an important point: take accurate measurements. A common mistake is to use Date.now() – instead, you should use performance.now(), since this gives you a high-resolution timestamp. Or even better, use performance.mark() and performance.measure() – these are also high-resolution, but with the added benefit that you can actually see your measurements laid out visually in the Chrome DevTools. This is a great way to double-check that you’re actually measuring what you think you’re measuring.
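
For example, a minimal mark/measure setup looks something like this (the “total” name and the function being measured are placeholders):

performance.mark('start')
doTheThingYouWantToMeasure() // placeholder for the code under test
performance.mark('end')

// Creates a "total" entry in the User Timing track of a DevTools trace
performance.measure('total', 'start', 'end')

const [measure] = performance.getEntriesByName('total', 'measure')
console.log(`total: ${measure.duration}ms`)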

Screenshot of a performance trace in Chrome DevTools with an annotation in the User Timing section for the "total" span saying "What the benchmark is measuring" and stacktraces in the main thread with the annotation "What the browser is doing"

Note: Sadly, the Firefox and Safari DevTools still don’t show performance marks/measures in their profiler traces. They really should; IE11 had this feature years ago.

As mentioned above, it’s also a good idea to take multiple measurements. Benchmarks will always show variance, and you can prove just about anything if you only take one sample. For best results, I’d say take at least three measurements and then calculate the median, or better yet, use a tool like Tachometer that will use a bunch of fancy statistics to find the ideal number of samples.
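
If you’re not using a tool that does the statistics for you, the median is easy enough to compute yourself – a minimal sketch:

// Median of an array of timing samples
function median(samples) {
  const sorted = [...samples].sort((a, b) => a - b)
  const mid = Math.floor(sorted.length / 2)
  return sorted.length % 2 === 1
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2
}

median([12.4, 11.9, 13.1]) // 12.4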

Humility

Writing about web performance is really hard, so it’s important to be humble. Browsers are incredibly complex, so you have to accept that you will probably be wrong about something. And if you’re not wrong, then you will be wrong in 5 years when browsers update their engines to make your results obsolete.

There are a few ways you can limit your likelihood of wrongness, though. Here are a few strategies that have worked well for me in the past.

First off, test in multiple browser engines. This is a good way to figure out if you’ve identified a quirk in a particular browser, or a fundamental truth about how the web works. Heck, if you write a benchmark where one browser performs much more poorly than the other ones, then congratulations! You’ve found a browser bug, and now you have a reproducible test case that you can file on that browser.

(And if you think they won’t be grateful or won’t fix the problem, then prepare to be surprised. I’ve filed several such bugs on browsers, and they usually at least acknowledge the issue if not outright fix it within a few releases. Sometimes browser developers are grateful when you file a bug like this, because they might already know something is a problem, but without bug reports from customers, they weren’t able to convince management to prioritize it.)

Second, reduce the variables. Test on the same hardware, if possible. (For testing the three major browser engines – Blink, Gecko, and WebKit – this sadly means you’re always testing on macOS. Do not trust WebKit on Windows/Linux; I’ve found its performance to be vastly different from Safari’s.) Browsers can differ based on whether the device is plugged into power or has low battery, so make sure that the device is plugged in and charged. Don’t run other applications or browser windows or tabs while you’re running the benchmark. If networking is involved, use a local server if possible to eliminate latency. (Or configure the server to always respond with a particular delay, or use throttling, as necessary.) Update all the browsers before running the test.

Third, be aware of caching. It’s easy to fool yourself if you run 1,000 iterations of something, and it turns out that the last 999 iterations are all cached. JavaScript engines have JIT compilers, meaning that the first iteration can be different from the second iteration, which can be different from the third, etc. If you think you can figure out something low-level like “Is const faster than let?”, you probably can’t, because the JIT will outsmart you. Browsers also have bytecode caching, which means that the first page load may be different from the second, and the second may even be different from the third. (Tachometer works around this by using a fresh browser tab each iteration, which is clever.)

My point here is that, for all of your hard work to do rigorous, scientific benchmarking, you may just turn out to be wrong. You’ll publish your blog post, you’ll feel very proud of yourself, and then a browser engineer will contact you privately and say, “You know, it only works like this on a 60FPS monitor.” Or “only on Intel CPUs.” Or “only on macOS Big Sur.” Or “only if your DOM size is greater than 1,000 and the layer depth is above 10 and you’re using a trackball mouse and it’s a Tuesday and the moon is in the seventh house.”

There are so many variables in browser performance, and you can’t possibly capture them all. The best you can do is document your methodology, explain what your benchmark doesn’t test, and try not to make grand sweeping statements like, “You should always use const instead of let; my benchmark proves it’s faster.” At best, your benchmark proves that one number is higher than another in your very specific benchmark in the very specific way you tested it, and you have to be satisfied with that.

Conclusion

Writing about browser performance is hard, but it’s not fruitless. I’ve had enough successes over the years (and enough stubbornness and curiosity, I guess) that I keep doing it.

For instance, I wrote about how bundlers like Rollup produced faster JavaScript files than bundlers like Webpack, and Webpack eventually improved its implementation. I filed a bug on Firefox and Chrome showing that Safari had an optimization they didn’t, and both browsers fixed it, so now all three browsers are fast on the benchmark. I wrote a silly JavaScript “optimizer” that the V8 team used to improve their performance.

I bring up all these examples less to brag, and more to show that it is possible to improve things by simply writing about them. In all three of the above cases, I actually made mistakes in my benchmarks (pretty dumb ones, in some cases), and had to go back and fix it later. But if you can get enough traction and get the right people’s attention, then the browsers and bundlers and frameworks can change, without you having to actually write the code to do it. (To this day, I can’t write a line of C, C++, or Rust, but I’ve influenced browser vendors to write it for me, which aligns with my goal of spending more time playing Tetris than learning new programming languages.)

My point in writing all this is to try to convince you (if you’ve read this far) that it is indeed valuable for you to write about web performance. Even if you don’t feel like you really understand how browsers work. Even if you’re just getting started as a web developer. Even if you’re just curious, and you want to poke around at browsers to see how they tick. At worst you’ll be wrong (which I’ve been many times), and at best you might teach others about performant programming patterns, or even influence the ecosystem to change and make things better for everyone.

There are plenty of upsides, and all you need is an HTML file and a bit of patience. So if that sounds interesting to you, get started and have fun benchmarking!

My love-hate affair with technology

Ten years ago I would have considered myself someone who was excited about new technology. I always had the latest smartphone, I would read the reviews of new Android releases with a lot of interest, and I was delighted when things like Google Maps Navigation, speech-to-text, or keyboard swiping made my life easier.

Nowadays, to the average person I probably look like a technology curmudgeon. I don’t have a smart speaker, a smart watch, or any smart home appliances. My 4-year-old phone runs a de-Googled LineageOS that barely runs any apps other than Signal and F-Droid. My house has a Raspberry Pi running Nextcloud for file storage and Pi-hole for ad blocking. When I bought a new TV I refused to connect it to the Internet; instead, I hooked it up to an old PC running Ubuntu so I can watch Netflix, Hulu, etc.

My wife complains that none of the devices in our house work, and she’s right. The Pi-hole blocks a lot of websites, and it’s a struggle to unblock them. Driving the TV with a wireless keyboard is cumbersome. Nextcloud is clunky compared to something like Dropbox or Google Drive. I even tried cloudflared for a while, but I had to give up when DNS kept periodically failing.

One time – no joke – I had a dream that I was using some open-source alternative to a popular piece of software, and it was slow and buggy. I don’t even remember what it was, but I remember being frustrated. This is just what I’m used to nowadays – not using a technology because it’s the best-in-class or makes my life easier, but because it meets some high-minded criteria about how I think software should be: privacy-respecting, open-source, controlled by the user, etc.

To the average person, this is probably crazy. “Nolan,” they’d say. “You couldn’t order a Lyft because their web app didn’t work in Firefox for Android. Your files don’t sync away from home because you’re only running Nextcloud on your local network. Your friends can’t even message you on WhatsApp, Facebook, or Twitter because you don’t have an account and the apps don’t work on your phone. If you want to live in the eighteenth century so bad, why don’t you get a horse and buggy while you’re at it?”

Maybe this nagging voice in my head is right (and I do think these thoughts sometimes). Maybe what I’m practicing is a kind of tech veganism that, like real veganism, is a great idea in theory but really hard to stick to in practice. (And yes, I’ve tried real veganism too. Maybe I should join a monastery at this point.)

On the other hand, I have to remind myself that there are benefits to the somewhat ascetic lifestyle I’ve chosen. The thing that finally pushed me to switch from stock Android to de-Googled LineageOS was all the ads and notifications in Google Maps. I remember fumbling around with a dozen settings, but never being able to get rid of the “Hey, rate this park” message. (Because everything on Earth needs a star rating apparently.)

And now, I don’t have to deal with Google Maps anymore! Instead I deal with OsmAnd~, which broke down the other day and failed to give me directions. So it goes.

Maybe someday I’ll relent. Maybe I’ll say, “I’m too old for this shit” and start using technology that actually works instead of technology that meets some idealistic and probably antiquated notion of software purity. Maybe I’ll be forced to, because I need a pacemaker that isn’t open-source. Or maybe there will be some essential government service that requires a Google or Apple phone – my state’s contact tracing app does! I got jury duty recently and was unsurprised to find that they do everything through Zoom. At what point will it be impossible to be a tech hermit, without being an actual hermit?

That said, I’m still doing what I’m doing for now. It helps that I’m on Mastodon, where there are plenty of folks who are even more hardcore than me. (“I won’t even look at a computer if it’s running non-FLOSS software,” they smirk, typing from their BSD laptop behind five layers of Tor.) Complaining to this crowd about how I can’t buy a TV anymore without it spying on me makes me feel a little bit normal. Just a bit.

The thing that has always bothered me about this, and which continues to bother me, is that I’m only able to live this lifestyle because I have the technical know-how. The average person would neither know how to do any of the things I’m doing (installing a custom Android ROM, setting up Nextcloud, etc.), nor would they probably want to, given that it’s a lot of extra hassle for a sub-par experience.

And who am I, anyway? Edward Snowden? Why am I LARPing as a character in a spy novel when I could be focusing on any one of a million other hobbies in the world?

I guess the answer is: this is my hobby. Figuring out how to get my Raspberry Pi to auto-update is a hobby. Tinkering with my TV setup so that I can get Bluetooth headphones working while the TV is in airplane mode is a hobby. Like a gearhead who’s delighted when their car breaks down (“Hey! Now I can fix it!”), I don’t mind when the technology around me doesn’t work – it gives me something to do on the weekend! But I have no illusions that this lifestyle makes sense for most people. Or that it will even make sense for me, once I get older and probably bored of my hobby.

For the time being, though, I’m going to keep acting like technology is an enemy I need to subdue rather than a purveyor of joys and delights. So if you want to know how it’s going, subscribe to my blog via RSS or message me on Signal. Or if that fails, come visit me in a horse and buggy.

Speeding up IndexedDB reads and writes

Recently I read James Long’s article “A future for SQL on the web”. It’s a great post, and if you haven’t read it, you should definitely go take a look!

I don’t want to comment on the specifics of the tool James created, except to say that I think it’s a truly amazing feat of engineering, and I’m excited to see where it goes in the future. But one thing in the post that caught my eye was the benchmark comparisons of IndexedDB read/write performance (compared to James’s tool, absurd-sql).

The IndexedDB benchmarks are fair enough, in that they demonstrate the idiomatic usage of IndexedDB. But in this post, I’d like to show how raw IndexedDB performance can be improved using a few tricks that are available as of IndexedDB v2 and v3:

  • Pagination (v2)
  • Relaxed durability (v3)
  • Explicit transaction commits (v3)

Let’s go over each of these in turn.

Pagination

Years ago when I was working on PouchDB, I hit upon an IndexedDB pattern that, at the time, improved performance in Firefox and Chrome by roughly 40-50%. I’m probably not the first person to come up with this idea, but I’ll lay it out here.

In IndexedDB, a cursor is basically a way of iterating through the data in a database one-at-a-time. And that’s the core problem: one-at-a-time. Sadly, this tends to be slow, because at every step of the iteration, JavaScript can respond to a single item from the cursor and decide whether to continue or stop the iteration.

Effectively this means that there’s a back-and-forth between the JavaScript main thread and the IndexedDB engine (running off-main-thread). You can see it in this screenshot of the Chrome DevTools performance profiler:

Screenshot of Chrome DevTools profiler showing multiple small tasks separated by a small amount of idle time each

Or in Chrome tracing, which shows a bit more detail:

Screenshot of Chrome tracing tool showing multiple separate tasks, separated by a bit of idle time. The top of each task says RunNormalPriorityTask, and near the bottom each one says IDBCursor continue.

Notice that each call to cursor.continue() gets its own little JavaScript task, and the tasks are separated by a bit of idle time. That’s a lot of wasted time for each item in a database!
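
For reference, the classic cursor pattern looks something like this (assuming store is an object store from an open read-only transaction, and processItem is a placeholder for your own logic):

// One item per onsuccess callback – each cursor.continue() schedules
// another round-trip between the main thread and the IndexedDB engine
store.openCursor().onsuccess = e => {
  const cursor = e.target.result
  if (cursor) {
    processItem(cursor.key, cursor.value)
    cursor.continue()
  }
}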

Luckily, in IndexedDB v2, we got two new APIs to help out with this problem: getAll() and getAllKeys(). These allow you to fetch multiple items from an object store or index in a single go. They can also start from a given key range and return a given number of items, meaning that we can implement a paginated cursor:

// `store` below is an IDBObjectStore from an open transaction
const batchSize = 100
let keys, values, keyRange = null

function fetchMore() {
  // If there could be more results, fetch them
  if (keys && values && values.length === batchSize) {
    // Find keys greater than the last key
    keyRange = IDBKeyRange.lowerBound(keys.at(-1), true)
    keys = values = undefined
    next()
  }
}

function next() {
  store.getAllKeys(keyRange, batchSize).onsuccess = e => {
    keys = e.target.result
    fetchMore()
  }
  store.getAll(keyRange, batchSize).onsuccess = e => {
    values = e.target.result
    fetchMore()
  }
}

next()

In the example above, we iterate through the object store, fetching 100 items at a time rather than just 1. Using a modified version of the absurd-sql benchmark, we can see that this improves performance considerably. Here are the results for the “read” benchmark in Chrome:

Chart image, see table below

Batch size \ DB size      100     1000    10000    50000
1                         8.9     37.4    241      1194.2
100                       7.3     34      145.1    702.8
1000                      6.5     27.9    100.3    488.3

(Note that a batch size of 1 means a cursor, whereas 100 and 1000 use a paginated cursor.)

And here’s Firefox:

Chart image, see table below

Batch size \ DB size      100     1000    10000    50000
1                         2       15      125      610
100                       2       9       70       468
1000                      2       8       51       271

And Safari:

Chart image, see table below

Batch size \ DB size      100     1000    10000    50000
1                         11      106     957      4673
100                       1       5       44       227
1000                      1       3       26       127

All benchmarks were run on a 2015 MacBook Pro, using Chrome 92, Firefox 91, and Safari 14.1. Tachometer was configured with 15 minimum iterations, a 1% horizon, and a 10-minute timeout. I’m reporting the median of all iterations.

As you can see, the paginated cursor is particularly effective in Safari, but it improves performance in all browser engines.

Now, this technique isn’t without its downsides. For one, you have to choose an explicit batch size, and the ideal number will depend on the size of the data and the usage patterns. You may also want to consider the downsides of overfetching – i.e. if the cursor should stop at a given value, you may end up fetching more items from the database than you really need. (Although ideally, you can use the upper bound of the key range to guard against that.)

The main downside of this technique is that it only works in one direction: you cannot build a paginated cursor in descending order. This is a flaw in the IndexedDB specification, and there are ideas to fix it, but currently it’s not possible.

Of course, instead of implementing a paginated cursor, you could also just use getAll() and getAllKeys() as-is and fetch all the data at once. This probably isn’t a great idea if the database is large, though, as you may run into memory pressure, especially on constrained devices. But it could be useful if the database is small.
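
In that case the code is about as simple as it gets – again a sketch, with store being an object store from an open read-only transaction:

// Fetch every record in a single request – fine for small stores, but
// watch out for memory pressure on large ones
store.getAll().onsuccess = e => {
  const allValues = e.target.result
  console.log(`loaded ${allValues.length} records`)
}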

getAll() and getAllKeys() both have great browser support, so this technique can be widely adopted for speeding up IndexedDB read patterns, at least in ascending order.

Relaxed durability

The paginated cursor can speed up database reads, but what about writes? In this case, we don’t have an equivalent to getAll()/getAllKeys() that we can lean on. Apparently there was some effort put into building a putAll(), but currently it’s abandoned because it didn’t actually improve write performance in Chrome.

That said, there are other ways to improve write performance. Unfortunately, none of these techniques are as effective as the paginated cursor, but they are worth investigating, so I’m reporting my results here.

The most significant way to improve write performance is with relaxed durability. This API is currently only available in Chrome, but it has also been implemented in WebKit as of Safari Technology Preview 130.

The idea behind relaxed durability is to resolve some disagreement between the browser vendors as to whether IndexedDB transactions should optimize for durability (writes succeed even in the event of a power failure or crash) or performance (writes succeed quickly, even if not fully flushed to disk).

It’s been well documented that Chrome’s IndexedDB performance is worse than Firefox’s or Safari’s, and part of the reason seems to be that Chrome defaults to a durable-by-default mode. But rather than sacrifice durability across-the-board, the Chrome team wanted to expose an explicit API for developers to decide which mode to use. (After all, only the web developer knows if IndexedDB is being used as an ephemeral cache or a store of priceless family photos.) So now we have three durability options: default, relaxed, and strict.
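
The option is passed when opening a transaction. A minimal sketch, assuming an open database db with an object store named 'docs' keyed on id:

// Browsers that don't support the durability option simply ignore the
// extra argument, so this is safe to use unconditionally
const tx = db.transaction('docs', 'readwrite', { durability: 'relaxed' })
tx.objectStore('docs').put({ id: 'doc1', title: 'hello' })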

Using the “write” benchmark, we can test out relaxed durability in Chrome and see the improvement:

Chart image, see table below

Durability \ DB size      100     1000     10000     50000
Default                   26.4    125.9    1373.7    7171.9
Relaxed                   17.1    112.9    1359.3    6969.8

As you can see, the results are not as dramatic as with the pagination technique. The effect is most visible in the smaller database sizes, and the reason turns out to be that relaxed durability is better at speeding up multiple small transactions than one big transaction.

Modifying the benchmark to do one transaction per item in the database, we can see a much clearer impact of relaxed durability:

Chart image, see table below

Durability \ DB size      100       1000
Default                   1074.6    10456.2
Relaxed                   65.4      630.7

(I didn’t measure the larger database sizes, because they were too slow, and the pattern is clear.)

Personally, I find this option to be nice-to-have, but underwhelming. If performance is only really improved for multiple small transactions, then usually there is a simpler solution: use fewer transactions.
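
For example (a sketch, assuming an open database db with a 'docs' object store and an array of docs to write), the difference is just where the transaction is created:

// Slow: one transaction per record
for (const doc of docs) {
  db.transaction('docs', 'readwrite').objectStore('docs').put(doc)
}

// Faster: one transaction for the whole batch
const tx = db.transaction('docs', 'readwrite')
const store = tx.objectStore('docs')
for (const doc of docs) {
  store.put(doc)
}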

It’s also underwhelming given that, even with this option enabled, Chrome is still much slower than Firefox or Safari:

Chart image, see table below

Browser \ DB size         100     1000     10000     50000
Chrome (default)          26.4    125.9    1373.7    7171.9
Chrome (relaxed)          17.1    112.9    1359.3    6969.8
Firefox                   8       53       436       1893
Safari                    3       28       279       1359

That said, if you’re not storing priceless family photos in IndexedDB, I can’t see a good reason not to use relaxed durability.

Explicit transaction commits

The last technique I’ll cover is explicit transaction commits. I found it to be an even smaller performance improvement than relaxed durability, but it’s worth mentioning.

This API is available in both Chrome and Firefox, and (like relaxed durability) has also been implemented in Safari Technology Preview 130.

The idea is that, instead of allowing the transaction to auto-close based on the normal flow of the JavaScript event loop, you can explicitly call transaction.commit() to signal that no further requests are coming, so the transaction can be committed immediately. This results in a very small performance boost, because the IndexedDB engine no longer has to wait around to see whether more requests will be dispatched. Here is the improvement in Chrome using the “write” benchmark:

Chart image, see table below

Settings \ DB size                100     1000     10000     50000
relaxed=false, commit=false       26.4    125.9    1373.7    7171.9
relaxed=false, commit=true        26      125.5    1373.9    7129.7
relaxed=true, commit=false        17.1    112.9    1359.3    6969.8
relaxed=true, commit=true         16.8    112.8    1356.2    7215

You’d really have to squint to see the improvement, and only for the smaller database sizes. This makes sense, since explicit commits can only shave a bit of time off the end of each transaction. So, like relaxed durability, it has a bigger impact on multiple small transactions than one big transaction.

The results are similarly underwhelming in Firefox:

Chart image, see table below

Commit \ DB size          100     1000    10000    50000
No commit                 8       53      436      1893
Commit                    8       52      434      1858

That said, especially if you’re doing multiple small transactions, you might as well use it. Since it’s not supported in all browsers, though, you’ll probably want to use a pattern like this:

if (transaction.commit) {
  transaction.commit()
}

If transaction.commit is undefined, then the transaction can just close automatically, and functionally it’s the same.

Update: Daniel Murphy points out that transaction.commit() can have bigger perf gains if the page is busy with other JavaScript tasks, which would delay the auto-closing of the transaction. This is a good point! My benchmark doesn’t measure this.

Conclusion

IndexedDB has a lot of detractors, and I think most of the criticism is justified. The IndexedDB API is awkward, it has bugs and gotchas in various browser implementations, and it’s not even particularly fast, especially compared to a full-featured, battle-hardened, industry-standard tool like SQLite. The new APIs unveiled in IndexedDB v3 don’t even move the needle much. It’s no surprise that many developers just say “forget it” and stick with localStorage, or they create elaborate solutions on top of IndexedDB, such as absurd-sql.

Perhaps I just have Stockholm syndrome from having worked with IndexedDB for so long, but I don’t find it to be so bad. The nomenclature and the APIs are a bit weird, but once you wrap your head around it, it’s a powerful tool with broad browser support – heck, it even works in Node.js via fake-indexeddb and indexeddbshim. For better or worse, IndexedDB is here to stay.

That said, I can definitely see a future where IndexedDB is not the only player in the browser storage game. We had WebSQL, and it’s long gone (even though I’m still maintaining a Node.js port!), but that hasn’t stopped people from wanting a more high-level database API in the browser, as demonstrated by tools like absurd-sql. In the future, I can imagine something like the Storage Foundation API making it more straightforward to build custom databases on top of low-level storage primitives – which is what IndexedDB was designed to do, and arguably failed at. (PouchDB, for one, makes extensive use of IndexedDB’s capabilities, but I’ve seen plenty of storage wrappers that essentially use IndexedDB as a dumb key-value store.)

I’d also like to see the browser vendors (especially Chrome) improve their IndexedDB performance. The Chrome team has said that they’re focused on read performance rather than write performance, but really, both matter. A mobile app developer can ship a prebuilt SQLite .db file in their app; in terms of quickly populating a database, there is nothing even remotely close for IndexedDB. As demonstrated above, cursor performance is also not great.

For those web developers sticking it out with IndexedDB, though, I hope I’ve made a case that it’s not completely a lost cause, and that its performance can be improved. Who knows: maybe the browser vendors still have some tricks up their sleeves, especially if we web developers keep complaining about IndexedDB performance. It’ll be interesting to watch this space evolve and to see how both IndexedDB and its alternatives improve over the years.