
One year of Pinafore

Screenshot of Pinafore showing a compose input

Pinafore is a standalone web client for Mastodon, which recently hit version 1.9.0.

It’s been about a year since I first launched Pinafore. So I’d like to reflect on where the project came from, and where I hope to take it.

Background

In 2017, I was in a funk. I had stopped contributing to the PouchDB project largely due to burnout, and for various reasons I eventually left my job at Microsoft. In the meantime, I had become enamored of Mastodon and even contributed to it, but I was feeling restless and looking for a new project.

The Mastodon codebase is extremely well-written. I’m convinced that Eugen Rochko is some kind of savant. However, I never took much of a liking to React, and I found it difficult to fix some fundamental problems in the Mastodon UI, such as offline support or the occasionally jerky scrolling. I also really missed the single-column layout of Twitter (I was never a Tweetdeck fan).

So the idea came to me to create my own Mastodon web client. I had been working on web sites for years, but aside from some small prototypes, I had never built an entire web app by myself. This was an opportunity to test some of my ideas about how a web app “should” be, leveraging my experience in web performance and standards. Also, I wanted to teach myself about accessibility, which I had never really studied before.

I knew I wanted to use Svelte, because I agreed with Rich Harris and Tom Dale that JavaScript frameworks should focus less on being runtime APIs and more on being compilers. Incidentally, I was at the same talk by Jed Schmitt that Rich mentions in this video, and it blew my mind as much as it blew his. (The difference between Rich and me is that he actually went off and built a whole framework based on it!)

I started working on Pinafore at the end of December 2017, and released it in April 2018. So after about sixteen months of development, I’d like to consider where Pinafore has done well and where it can improve.

Success metrics

Pinafore doesn’t have any trackers on it, so I don’t know how many people are using it. Sure, I could use a privacy-respecting tracker like Fathom, but the Mastodon community is pretty allergic to any kind of tracking, so I’ve been hesitant to add it. In any case, I don’t really care, because I would work on Pinafore regardless of how many people are using it.

However, I do get a trickle of questions and bug reports about Pinafore, and the #Pinafore hashtag is pretty active. I’ve also heard from several folks that it’s their preferred Mastodon interface. The reasons they give are usually one of the following:

  • Accessibility: I’ve focused a lot on making Pinafore work well with keyboard navigation and screen readers. (Marco Zehe’s guidance really helped!)
  • Design: the single-column layout of Pinafore is a key differentiator from the Mastodon frontend (although not for long).
  • Instance-switching: people who juggle multiple accounts on different instances don’t necessarily want one browser tab for each.

My favorite user testimonial, though, is from my wife. She told me, “I like Pinafore because it never loses my place in the timeline.” (Much of my motivation for working on Pinafore can be credited to “wife-driven development” – I like making her happy!)

So this confirms that I’ve achieved at least some of the goals from the Pinafore introductory blog post. Notably, though, offline support is rarely mentioned – I’ll get to that later.

Collaboration

Pinafore has also benefited from a lot of community contributions. To everyone who has contributed: thank you so much!

There are some challenges with building a dev community around Pinafore. The app is implemented using Svelte v2 and Sapper, which unfortunately creates two onboarding hurdles: 1) Svelte isn’t a very well-known framework, and 2) Svelte v2 is incompatible with Svelte v3, and there’s currently no upgrade path.

I’ll have to continue grappling with these challenges, but for now I’m very satisfied with Svelte v2. It’s fast, lightweight, and does everything I need it to. So I’m not in a big hurry to upgrade.

And oh yeah: Svelte really is lightweight. Pinafore only loads 32KB of compressed JavaScript for the landing page, and 137KB for the Home timeline. The total size of all JS assets is under 300KB compressed (<1MB raw). It gets a perfect 100 score from Lighthouse.

Screenshot of Lighthouse showing perfect 100 score in all categories, including Performance, Accessibility, Best Practices, and SEO

If you didn’t think I was going to brag about web perf vanity metrics, then you don’t know me very well.

Future plans

My first goal with Pinafore is completeness. Even though I’ve been working on it for over a year, there are still plenty of missing features compared to the Mastodon frontend. And although the gap has been narrowing, Mastodon itself hasn’t stopped innovating, so there’s always new stuff to add. (Polls! Blurhash! Keybase! Does Eugen ever sleep?)

Beyond that, I’d like to start focusing on features that make Pinafore a more pleasant social media experience. One of the virtues of decentralized social media is that we can experiment with features that give people control over their social media experience, even if it hampers addictiveness or growth. To that end, I’ve added a set of wellness features, inspired by Tristan Harris’s Center for Humane Technology. I’ll probably tweak and expand these features as feedback rolls in.

I’d also like to improve offline support. Even though Pinafore does have an offline mode, and even though it uses a Service Worker to cache static assets, it’s not very offline-first. Instead, it uses offline storage more as a fallback for when the network fails, rather than as the primary source of truth.

Given my background working on offline-first technology and advocating for it, I find this a bit disappointing. But it turns out that it’s really difficult to implement an offline-first social media UI. How do you deal with offline writes? How do you handle the gap between fresh content and stale content within the same timeline? These are not easy questions, and for the most part I’ve punted on them. But Pinafore can do better.

Conclusion

Pinafore is a passion project for me. It gives me something interesting to do on weekends and evenings, and it teaches me a lot about how the web platform works.

I also see Pinafore as an opportunity to provide more options to the Mastodon community, and to prove that you don’t have to treat Eugen as a gatekeeper for every minor UI tweak you’d like to see in Mastodon. Mastodon is decentralized; let’s decentralize the interface!

I have every intention to keep working on Pinafore, and I’m curious to know where you think it should go next.

Get off of Twitter

Twitter logo with a red "no" sign over it

Stop complaining about Twitter on Twitter. Deny them your attention, your time, and your data. Get off of Twitter.

The more time you spend on Twitter, the more money you make for Twitter. Get off of Twitter.

You at-mention @jack and call him out for the harassment and disinformation on his platform. You get a few hundred likes and retweets, each one sending your brain a little boost of serotonin. Twitter learns that you are interested in people who criticize @jack and starts to recommend you their tweets. You end up spending more time on Twitter, and advertisers learn a little bit more about you. You make @jack more money.

Get off of Twitter.

You can’t criticize Twitter on Twitter. It just doesn’t work. The medium is the message.

There’s an old joke where one fish says to the other, “How’s the water today?” And the fish responds, “What’s water?” On Twitter, you might ask, “How’s the outrage today?” (The answer, of course, is “I hate it! I’m so outraged about it!”)

Get off of Twitter.

Write blog posts. Use RSS. Use micro.blog. Use Mastodon. Use Pleroma. Use whatever you want, as long as it isn’t manipulating you with algorithms or selling access to your data to advertisers.

You’re worried about losing your influence. How about using your influence for something good? How about using it to stick it to Twitter, if you really dislike Twitter so much? Maybe if you do it, and your friends do it, then it will cause a sea change. After all, who was ever “influential” by following the crowd?

As Gandhi said (in paraphrase), “Be the change you want to see in the world.” Or as another influencer put it: “I’m starting with the man in the mirror.” Or if you prefer: “Practice what you preach.”

Get off of Twitter.

In defense of the Right Thing

It has come to my attention that many people believe the Wrong Thing. I find this to be an intolerable state of affairs, so this is a blog post defending what is Right.

How do I know there are so many people who believe the Wrong Thing? Well because, like everyone, I use Twitter. And holy moly! My feed is chock-full of Wrong Thinkers.

Sometimes it feels like everyone in the world believes the Wrong Thing, and I’m the last lonely person clinging to what’s Right. There must be a global epidemic of Wrongness. Why else would Twitter fill my feed with so many of these dunces and ninnies and halfwits?

I don’t even follow these people. Why should they be in my timeline, unless the whole world is full of Wrong people?

Every time I see their Wrong tweets, I seethe with rage and eagerly click to read the full thread. I might spend hours this way, thumbing through Wrong tweets. “How can so many people be so Wrong?” I’ll say to myself, shaking my head as I continue to scroll.

Sometimes when I find a really Wrong tweet, I’ll quote it and tweet it out with the perfect devastating repartee. That way, more people who agree with me are exposed to these Wrong views. That’ll teach ’em!

I do have to commend the others who proudly rise in defense of what is Right. On Twitter, I often see them caught in an epic battle with the Wrongers – “34 people are talking about this!” Well, here comes a 35th, joining the fray to fight the good fight.

To be honest, I sometimes get tired of feeling angry all the time. But how can I not be, when the world is full of people who are so very Wrong?

Curiously, the Wrongers seem to come from all sides of any issue, and they are legion. People who use margarine instead of butter, people who peel the banana from the top instead of the bottom, people who crack a boiled egg from the big end rather than the small end. These are exactly the things that drive me nuts, and somehow my feed is full of people who believe the opposite of me on precisely those issues! Sometimes I feel utterly embroiled, helpless, a tiny voice of reason shouting against an angry mob of Wrong Thinkers.

But sadly, this is just how the world is these days. The world is full of people arguing, calling each other out, or watching a fight unfold with the horrified glee of a driver craning their neck to get a good look at a car wreck.

I know this is how the world is, because I see it on Twitter. And Twitter is an utterly unbiased mirror of the world, with no algorithms that subtly push the discussion in one direction or the other, regardless of whether it is good for discourse or compassion or human well-being but only whether it is good for Twitter.

Building a modern carousel with CSS scroll snap, smooth scrolling, and pinch-zoom

Recently I had some fun implementing an image carousel for Pinafore. The requirements were pretty simple: users should be able to swipe horizontally through up to 4 images, and also pinch-zoom to get a closer look.

The finished product looks like this:

(Video demo of the finished carousel.)

Often when you’re building something like this, it’s tempting to use an off-the-shelf solution. The problem is that this often adds a large dependency size, or the code is inflexible, or it’s framework-specific (React, Vue, etc.), or it may not be optimized for performance and accessibility.

Come on, it’s 2019. Isn’t there a decent way to build a carousel with native browser APIs?

As it turns out, there is. My carousel implementation uses a few simple building blocks:

  1. CSS scroll snap
  2. scrollTo() with smooth behavior
  3. The <pinch-zoom> custom element

CSS scroll snap

Let’s start off with CSS scroll snap. This is what makes the scrollable element “snap” to a certain position as you scroll it.

The browser support is pretty good. The only trick is that you have to write one implementation for the modern scroll snap API (supported by Chrome and Safari), and another for the older scroll snap points API (supported by Firefox[1]).

You can detect support using @supports (scroll-snap-align: start). As usual for iOS Safari, you’ll also need to add -webkit-overflow-scrolling: touch to make the element scrollable.

But lo and behold, we now have the world’s simplest carousel implementation. It doesn’t even require JavaScript – just HTML and CSS!

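Here’s a rough sketch of what that HTML and CSS might look like, combining the older and newer APIs (the markup and class names are my own, assuming one full-width image per slide):

<ul class="scroll">
  <li><img src="image1.jpg" alt="First image"></li>
  <li><img src="image2.jpg" alt="Second image"></li>
</ul>

.scroll {
  display: flex;
  overflow-x: auto;
  -webkit-overflow-scrolling: touch; /* momentum scrolling in iOS Safari */
  /* older scroll snap points API (Firefox) */
  scroll-snap-type: mandatory;
  scroll-snap-points-x: repeat(100%);
}
.scroll > li {
  list-style: none;
  flex: 0 0 100%;
}
@supports (scroll-snap-align: start) {
  /* modern scroll snap API (Chrome, Safari) */
  .scroll {
    scroll-snap-type: x mandatory;
  }
  .scroll > li {
    scroll-snap-align: start;
  }
}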

The benefit of having all this “snapping” logic inside of CSS rather than JavaScript is that the browser is doing the heavy lifting. We don’t have to use touchmove listeners or requestAnimationFrame to try to get the pixel-perfect snapping behavior with the right animation curve – the browser handles all of it for us, in native code.

And unlike touchmove, this scroll-snapping works for any method of scrolling – touchpad, touchscreen, scrollbar, you name it.

scrollTo() with smooth scrolling

The next piece of the puzzle is that most carousels have little indicator buttons that let you navigate between the items in the list.

Screenshot of a carousel containing an image of a cat with indicator buttons below showing 1 filled circle and 3 unfilled circles

For this, we will need a little bit of JavaScript. We can use the scrollTo() API with {behavior: 'smooth'}, which tells the browser to smoothly scroll to a given offset:

function scrollToItem(itemPosition, numItems, scroller) {
  scroller.scrollTo({
    // note: the ScrollToOptions property is `left`, not `scrollLeft`
    left: Math.floor(
      scroller.scrollWidth * (itemPosition / numItems)
    ),
    behavior: 'smooth'
  })
}

The only trick here is that Safari doesn’t support smooth scroll behavior and Edge doesn’t support scrollTo() at all. But we can detect support and fall back to a JavaScript implementation, such as this one.

Here is my technique for detecting native smooth scrolling:

function testSupportsSmoothScroll () {
  var supports = false
  try {
    var div = document.createElement('div')
    div.scrollTo({
      top: 0,
      // a browser that understands ScrollToOptions will read the
      // `behavior` property, flipping this flag via the getter
      get behavior () {
        supports = true
        return 'smooth'
      }
    })
  } catch (err) {} // some browsers throw on scrollTo() with an options object
  return supports
}

Being careful to set aria-labels and aria-pressed states for the buttons, and adding a debounced scroll listener to update the pressed state as the user scrolls, we end up with a carousel plus an accessible set of indicator buttons.

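As a rough sketch (the selectors and the debounce delay are my own), the scroll listener might look something like this:

var scroller = document.querySelector('.scroll')
var buttons = Array.prototype.slice.call(
  document.querySelectorAll('.indicator-button')
)
var timeout

scroller.addEventListener('scroll', function () {
  // debounce, so the buttons only update when scrolling pauses
  clearTimeout(timeout)
  timeout = setTimeout(function () {
    var numItems = buttons.length
    var currentItem = Math.round(
      (scroller.scrollLeft / scroller.scrollWidth) * numItems
    )
    buttons.forEach(function (button, i) {
      button.setAttribute('aria-pressed', i === currentItem ? 'true' : 'false')
    })
  }, 100)
})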

You can also add generic “go left” and “go right” buttons; the principle is the same.

Hiding the scrollbar (optional)

Now, the next piece of the puzzle is that most carousels don’t have a scrollbar, and depending on the browser and OS, you might not like how the scrollbar appears.

Also, our carousel already includes all the buttons needed to scroll left and right, so it effectively has its own scrollbar. So we can consider removing the native one.

To accomplish this, we can start with overflow-x: auto rather than overflow-x: scroll, which ensures that at least if there’s only one image (and thus no possibility of scrolling left or right), the scrollbar doesn’t show.

Beyond that, we may be tempted to add overflow-x: hidden, but this actually makes the list entirely unscrollable. Bummer.

So we can use a little hack instead. Here is some CSS to remove the scrollbar, which works in Chrome, Edge, Firefox, and Safari:

.scroll {
  scrollbar-width: none;
  -ms-overflow-style: none;
}
.scroll::-webkit-scrollbar {
  display: none;
}

And it works! The scrollbar is gone.


Admittedly, though, this is a bit icky. The only standards-based CSS here is scrollbar-width, which is currently only supported by Firefox. The -webkit-scrollbar hack is for Chrome and Safari, and the -ms-overflow-style hack is for Edge/IE.

So if you don’t like vendor-specific CSS, or if you think scrollbars are better for accessibility, then you can just keep the scrollbar around. Follow your heart!

Pinch-zoom

For pinch-zooming, this is one case where I allowed myself an indulgence: I use the <pinch-zoom> element from Google Chrome Labs.

I like it because it’s extremely small (5.2kB minified) and it uses Pointer Events under the hood, meaning it supports mobile touchscreens, touchpads, touchscreen laptops, and any device that supports pinch-zooming.

However, this element isn’t totally compatible with a scrollable list, because dragging your finger left and right causes the image to move left and right, rather than scroll left and right.


I thought this was actually a nice touch, though, since it allows you to choose which part of the image to zoom in on. So I decided to keep it.

To make this work inside a scrollable carousel, though, I decided to add a separate mode for zooming. You have to tap the magnifying glass to enable zooming, at which point dragging your finger moves the image itself rather than the carousel.

Toggling the pinch-zoom mode was as simple as adding or removing the <pinch-zoom> element [2]. I also decided to add some explicit “zoom in” and “zoom out” buttons for the benefit of users who don’t have a device that supports pinch-zooming.
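Here’s a sketch of roughly how that toggle can work (the function and element names are illustrative, and the <pinch-zoom> element is assumed to be registered already):

function setZoomMode (container, image, enabled) {
  var pinchZoom = container.querySelector('pinch-zoom')
  if (enabled && !pinchZoom) {
    // wrap the image in a <pinch-zoom> so that drags pan/zoom the image
    pinchZoom = document.createElement('pinch-zoom')
    pinchZoom.appendChild(image)
    container.appendChild(pinchZoom)
  } else if (!enabled && pinchZoom) {
    // unwrap the image so that drags scroll the carousel again
    container.appendChild(image)
    container.removeChild(pinchZoom)
  }
}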


Of course, I could have implemented this myself using raw Pointer Events, but <pinch-zoom> offers a small footprint, a nice API, and good browser compatibility (e.g. on iOS Safari, where Pointer Events are not supported). So it felt like a worthy addition.

Intrinsic sizing

The last piece of the puzzle (I promise!) is a way to keep the images from doing a re-layout when they load. This can lead to janky-looking reflows, especially on slow connections.


Assuming we know the dimensions of the images in advance, we can fix this by using the intrinsicsize attribute. Unfortunately this isn’t supported in any browser yet, but it’s coming soon to Chrome! And it’s way easier than any other (hacky) solution you may think of.
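For what it’s worth, usage under the proposal looked something like this (the dimensions are illustrative):

<!-- the browser can reserve a 4:3 box before the image loads -->
<img src="image1.jpg" intrinsicsize="400x300" alt="First image">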

Here it is in Chrome 72 with the “experimental web platform features” flag enabled: the buttons don’t jump around while the image loads. Much nicer!

Accessibility check

Looking over the WAI Carousel Concepts document, there are a few good things to keep in mind when implementing this carousel:

  1. To make the carousel more keyboard-navigable, you may add keyboard shortcuts, for instance the left and right arrow keys to navigate left and right (see the sketch after this list). (Note though that a scrollable horizontal list can already be focused and scrolled with the keyboard.)
  2. Use <ul> and <li> elements instead of <div>s, so that a screen reader announces it as a list.
  3. The smooth-scrolling can be distracting or nausea-inducing for some folks, so respect prefers-reduced-motion or provide an option to turn it off.
  4. As mentioned previously, use aria-label and aria-pressed for the indicator buttons.
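For the first point, arrow-key navigation might look something like this rough sketch (it reuses the scrollToItem() function from earlier; the selector and item count are illustrative):

var scroller = document.querySelector('.scroll')
var numItems = 4
var currentItem = 0

// note: the scroller needs to be focusable (e.g. tabindex="0")
// in order to receive keydown events
scroller.addEventListener('keydown', function (event) {
  if (event.key === 'ArrowLeft' && currentItem > 0) {
    currentItem--
    scrollToItem(currentItem, numItems, scroller)
  } else if (event.key === 'ArrowRight' && currentItem < numItems - 1) {
    currentItem++
    scrollToItem(currentItem, numItems, scroller)
  }
})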

Compatibility check

But what about IE support? I can hear your boss screaming at you already.

If you’re one of the unfortunate souls who still has to maintain IE11 support, rest assured: a scroll-snap carousel is just a normal horizontal-scrolling list on IE. It doesn’t work exactly the same, but hey, does it need to? IE11 users probably don’t expect the best anymore.

Conclusion

So that’s it! I decided not to publish this as a library, and I’m leaving the pinch-zooming and intrinsic sizing as an exercise for the reader. I think the core building blocks are simple enough that folks really ought to just take the native APIs and run with them.

Any decisions I could bake into a library would only limit the flexibility of the carousel, and leave its users high-and-dry when they need to tweak something, because I’ve taught them how to use my library instead of the native browser API.

At some point, it’s just better to go native.

Footnotes

1. For whatever reason, I couldn’t get the old scroll snap points spec to work in Edge. Sarah Drasner apparently ran into the same issue. On the bright side, though, a horizontally scrolling list without snap points is just a regular horizontally scrolling list!

2. The first version of this blog post recommended using pointer-events: none to toggle the zoom mode on or off. It turns out that this breaks right-clicking to download an image. So it seems better to just remove or add the <pinch-zoom> element to toggle it.

Things I’ve been wrong about, things I’ve been right about

The end of the year is a good time for reflection, and this year I’m going to do something a bit different. I’d like to list some of the calls I’ve made over the years, and how well those predictions have turned out.

So without further ado, here’s a list of things I’ve been wrong about or right about over the years.


Wrong: web workers will take over the world

Around 2015, I got really excited by web workers. I gave talks, I wrote a blog post, and I wrote an app that got a fair amount of attention. Unfortunately it turned out web workers were not going to take over the world in the way I imagined.

My enthusiasm for web workers mostly came from my experience with Android development. In Android development, if you don’t want your app to be slow, you move work to a background thread. After I became a web developer, I discovered it was suddenly very hard to make apps that weren’t janky. Oh, the web browser only has one thread? Well, there’s your problem.

What I didn’t know at the time, though, was that browsers already had a lot of tricks for moving work off the main thread; they’re just not necessarily very obvious, or directly exposed to web developers. You can see my ignorance in this video, where I’m purportedly showing the performance advantages of my worker-powered Pokémon app by scrolling the screen on mobile Chrome.

As I learned later, though, scrolling runs off-main-thread in modern browsers (and, more importantly, is composited). So the only thing that’s going to make this scrolling smoother is to not block it with unnecessary touchstart/touchmove listeners, or for the Chrome team to improve their scrolling implementation (as in fact, they have been doing). There are also differences between subscrollers and main-document scrollers, as I learned later.

All of these things are non-obvious to web developers, because they’re not directly exposed in an API. So in my ignorance, I pointed to the one case where threading is exposed in a web API, i.e. web workers.

While it is true, though, that blocking the main thread is a major cause of slowdowns in web pages, web workers aren’t the panacea I imagined. The reasons are laid out very succinctly by Evan You in this talk, but to summarize his points: moving work from the main thread to a background worker is very difficult, and the payoff is not so great.

The main reason it’s difficult is that you always have to come back to the main thread to do work on the DOM anyway. This is what libraries like worker-dom do. Also, some APIs can only be invoked synchronously on the main thread, such as getBoundingClientRect. Furthermore, as pointed out by Pete Hunt, web workers cannot handle preventDefault or stopPropagation (e.g. in a click handler), because those must be handled synchronously.

So on the one hand, you can’t just take existing web code and port it to a web worker; there are some things that have to be tweaked, and other things that are just impossible. Then on the other hand, moving things to a worker creates its own costs. The cost of cloning data between threads can be expensive (note: to be fair, Chrome has improved their cloning performance since I wrote that post). There is also a built-in latency when sending messages between the two threads, so you don’t necessarily want to pay that round-trip cost for every interaction. Also, some work has to be done on the main thread anyway, and it’s not guaranteed that those costs are the small ones.
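To make the messaging cost concrete, here’s a rough sketch of a worker round trip (the payload and file names are illustrative):

// main.js
var worker = new Worker('worker.js')
var payload = { items: new Array(100000).fill('hello') }

var start = performance.now()
worker.postMessage(payload) // pays the structured clone cost going in
worker.onmessage = function () {
  // includes the clone costs in both directions, plus scheduling latency
  console.log('round trip took', performance.now() - start, 'ms')
}

// worker.js
self.onmessage = function (event) {
  self.postMessage(event.data) // clone the payload back to the main thread
}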

Since then, I’ve come to believe that the best way to avoid the cost of main thread work (such as virtual DOM diffing in a library like React) is not by moving it to a background thread, but rather by skipping it entirely. SvelteJS does this, as explained in this talk by Rich Harris. Also, you can use APIs like requestIdleCallback to delay work on the main thread. Future APIs like hasPendingUserInput may help as well.

Of course, I’m not saying that web workers should never be used. For long-running computations that don’t rely on the DOM, they’re definitely useful. And perhaps sometime in the future it will become more viable to run your “entire app” in a worker thread, as sketched out here by Jason Miller. APIs like SharedArrayBuffer, blöcks, and some kind of asynchronous preventDefault/stopPropagation may tip the balance in web workers’ favor. But for now I’ll say that I was wrong on this call.

Wrong: Safari is the new IE

“Safari is the new IE” is probably my most-read post of all time, since it got picked up by Ars Technica. Unfortunately I’ve learned a lot about how the browser industry works since I wrote it, and nowadays I regard it with a bit of embarrassment.

As it turns out, Safari is not really the new IE. At the time I wrote that post (2015), the WebKit team was dragging their feet a bit, but since then they’ve really picked up the pace. If you look at metrics like HTML5Test, CanIUse, or ES2015+ compatibility tables, you’ll see they’ve made a lot of progress since 2015. They’re still behind Chrome and Firefox, but they’re giving Edge a run for its money (although Edge is now switching to Chromium, so that’s less relevant).

Also, the WebKit team does a lot of work that is less obvious to web developers, but which they still deserve credit for. Safari is a beast when it comes to performance, and they often set the standard that other engines aim to beat. It’s no surprise that the Speedometer benchmark came from the WebKit team (with Safari originally winning), and quickly became a point of focus for Chrome, Edge, and Firefox. The MotionMark and JetStream benchmarks also originally came from WebKit.

The WebKit team also does some interesting privacy work, including intelligent tracking protection and double-keying of cross-origin storage. (I.e. if example.com stores data in an iframe inside of another website, that data will not be shared with example.com itself or with example.com iframes on other websites. This limits the ability of sites to do third-party tracking.)

To be clear, though, I don’t regret writing that blog post. It was a cry of anger from a guy who was tired of dealing with a lot of IndexedDB bugs, which the WebKit team eventually got around to fixing. Heck, I’ve been told that my blog post may have even motivated Apple to make those fixes, and to release excellent developer-facing features like Safari Technology Preview. So kudos, WebKit team: you proved me wrong!

In any case, it’s unlikely that we’ll ever have a situation like IE6 again, with one browser taking up 95% of all usage. The best contender for that title is currently Chrome, and although it ticks some of the IE6 boxes (outsized influence on the ecosystem, de-facto standardization of implementation details), it doesn’t tick some other ones (lack of investment from the company building it, falling behind on web standards). The state of browser diversity is certainly worrying, though, which makes it all the more important to support non-Chromium browsers like Safari and Firefox, and give them credit for the good work they’re doing.

So in that regard, I am sorry for creating a meme that still seems to stick to this day.

Right: developer experience is trumping user experience

This blog post didn’t get a lot of attention when I published it in early 2016, but I think I tapped into something that was happening in the web community: web developers were starting to focus on obscure developer-facing features of frontend frameworks rather than tangible benefits for end-users.

Since then, this point has been articulated particularly well by folks on the Chrome team and at Google, such as Malte Ubl (“Developer experience and user experience,” 2017) and Alex Russell (“The developer experience bait-and-switch,” 2018). Paul Lewis also touched on it a bit in late 2015 in “The cost of frameworks”.

Developer experience (DX) vs user experience (UX) is now a topic of hot debate among web developers, and although I may not have put my finger on the problem very well, I did start writing about it in early 2016. So I’ll chalk this up as something I was right about.

Right: I’m better off without a Twitter account

I deleted my Twitter account a little over a year ago, and I have no regrets about that.

Twitter has made some good moves in the past year, such as bringing back the chronological timeline, but overall it’s still a cesspool of negativity, preening, and distractions. Also, I don’t believe any one company should have a monopoly on microblogging, so I’m putting my money where my mouth is by denying them my attention and ad revenue.

These days I use Mastodon and RSS, and I’m much happier. Mastodon in particular has served as a kind of nicotine patch for my Twitter addiction, and for that I’m grateful.

The fediverse does have some of the same negative characteristics as Twitter (brigading, self-righteousness, lack of nuance), but overall it’s much smaller and quieter than Twitter, and more importantly less addictive, so I use social media less these days than I used to. I tend to spend more time on my hobbies instead, one of which is (ironically) building a Mastodon client!

Right: the cost of small modules

“The cost of small modules” was one of my most-read posts of 2016, and in terms of the overall conclusions, I was right. JavaScript compilation and initial execution are expensive, as has been covered quite well by Addy Osmani in “The cost of JavaScript”.

Furthermore, a lot of the bloat was coming from the bundlers themselves. In the post, I identified Browserify and Webpack as the main offenders, with Closure Compiler and Rollup showing how to do it right. Since I wrote that post, though, Webpack and Browserify have stepped up their game, and now module concatenation is a standard practice for JavaScript bundlers.

One thing I didn’t understand at the time was why JavaScript compilation was so expensive for the “function-wrapping” format of bundlers like Webpack and Browserify. I only realized it later when researching some quirks about how JavaScript engines parse function bodies. The conclusions from that research were interesting, but the larger takeaway of “only include the code you need” was the important one.

Mixed: progressive enhancement isn’t dead, but it smells funny

For better or worse, progressive enhancement doesn’t seem to be doing very well these days. In retrospect, this blog post was more about Twitter shaming (see above for my thoughts on Twitter), but I think the larger point about progressive enhancement losing its cachet is right.

As we slowly enter a world where there is one major browser engine (Chromium), which is frequently updated and leading on web standards, supporting old or weird browsers just becomes less important. Developers have already voted with their feet to target mostly Chrome and Chrome derivatives, putting pressure on other browsers to either adopt Chromium themselves or else bleed users (and therefore relevance). It’s a self-perpetuating cycle – the less developers care about progressive enhancement, the less it matters.

I also believe the term “progressive enhancement” has been somewhat co-opted by the Chrome devrel team as a euphemism for giving the best experience to Chrome and a poorer experience to “older browsers” (aka non-Chrome browsers). It’s a brilliant re-branding that feeds into web developers’ deepest wish, which is to live in a Chrome-only world where they only have to focus on Chrome.

That’s not to say progressive enhancement is without its virtues. Insofar as it encourages people to actually think about accessibility, performance, and web standards, it’s a good thing. But these days it’s becoming less about “build with HTML, layer on CSS, sprinkle on JavaScript” and more about “support a slightly older version of Chrome, target the latest version of Chrome.”

The other point I made in that blog post, which was about JavaScript-heavy webapps being better for the “next billion” Internet users, may turn out to be wrong. I’m not sure. Static websites are certainly easier on the user’s battery, and with a Service Worker they can still have the benefits of offline capabilities.

Perhaps with the new Portals proposal, we won’t even need to build SPAs to have fancy transitions between pages. I have a hunch that SPAs are being overused these days, and that user experience is suffering as a consequence, but that’s another bet that will have to be evaluated at a later date.

Conclusions

So that’s all for my roundup of bad takes, good takes, and the stuff in between. Hope you found it interesting, and happy 2019!

Scrolling the main document is better for performance, accessibility, and usability

When I first wrote Pinafore, I thought pretty deeply about some aspects of the scrolling, but not enough about others.

For instance, I implemented a custom virtual list in Svelte.js, as well as an infinite scroll to add more content as you scroll the timeline. When it came to where to put the scrollable element, though, I didn’t think too hard about it.

Screenshot of Pinafore UI showing a top nav with a scrollable content below

A fixed-position nav plus a scrollable section below. Seems simple, right? I went with what seemed to me like an obvious solution: an absolute-position element below the nav bar, which could be scrolled up and down.

Then Sorin Davidoi opened this issue, pointing out that using the entire document (i.e. the <body>) as the scrolling element would allow mobile browsers to hide the address bar while scrolling down. I wasn’t aware of this, so I went ahead and implemented it.

This indeed allowed the URL bar to gracefully shrink or hide across a wide range of mobile browsers, including Safari for iOS, Chrome for Android, and Firefox for Android.

As it turned out, though, this fix solved more than just the address bar problem – it also improved the framerate of scrolling in Chrome for Android. This was a longstanding issue in Pinafore that had puzzled me up till now, but with the “document as scroller” change, the framerate was magically improved.

Of course, as the person who wrote one of the more comprehensive analyses of cross-browser scrolling performance, this really shouldn’t have surprised me. My own analysis showed that some browsers (notably Chrome) hadn’t optimized subscrolling to nearly the same degree as main-document scrolling. Somehow, though, I didn’t put two-and-two together and realize that this is why Pinafore’s scrolling was janky in Chrome for Android. (It was fine in Firefox for Android and Safari for iOS, which is also perhaps why I didn’t feel pressed to fix it.)

In retrospect, the Chrome Dev Tools’ “scrolling performance issues” tool should have been enough to tip me off, but I wasn’t sure what to do when it said “repaints on scroll.” Nor did I know that moving the scrolling element to the main document would do the trick. Most of the advice online suggests using will-change: transform, but in this case it didn’t help. (Although in the past, I have found that will-change can improve mobile Chrome’s scrolling in some cases.)

Screenshot of Pinafore with a blue overlay saying "repaints on scroll."

The “repaints on scroll” warning. This is gone now that the scrollable element is the document body.

As if the mobile UI and performance improvements weren’t enough, this change also improved accessibility. When users first open Pinafore, they often want to start scrolling by tapping the “down” or “PageDown” key on the keyboard. However, this doesn’t work if you’re using a subscroller, because unlike the main document, the subscroller isn’t focused by default. So we had to add custom behavior to focus the scrollable element when the page first loads. Once I got rid of the subscroller, though, this code could be removed.
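For the record, that removed workaround looked roughly like this (a sketch, not Pinafore’s exact code):

// make the subscroller focusable without adding it to the tab order,
// then focus it so that PageDown/arrow keys scroll it right away
var scroller = document.querySelector('.container')
scroller.setAttribute('tabindex', '-1')
scroller.focus()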

Another nice fix was that it’s no longer necessary to add -webkit-overflow-scrolling: touch so that iOS Safari will use smooth scrolling. The main document already scrolls smoothly on iOS.

This subscroller fix may be obvious to more experienced web devs, but to me it was a bit surprising. From a design standpoint, the two options seemed roughly equivalent, and it didn’t occur to me that one or the other would have such a big impact, especially on mobile browsers. Given the difference in performance, accessibility, and usability though, I’ll definitely think harder in the future about exactly which element I want to be the scrollable one.

Note that what I’m not saying in this blog post is that you should avoid subscrollers at all costs. There are some cases where the design absolutely calls for a subscroller, and the fact that Chrome hasn’t optimized for this scenario (whereas other browsers like Firefox, Edge, and Safari have) is a real bug, and I hope they’ll fix it.

However, if the visual design of the page calls for the entire document to be scrollable, then by all means, make the entire document scrollable! And check out document.scrollingElement for a good cross-browser API for managing the scrollTop and scrollHeight.
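For example (a minimal sketch):

// document.scrollingElement is <html> in standards mode and <body> in
// quirks mode; fall back to documentElement for older browsers
var scroller = document.scrollingElement || document.documentElement
console.log(scroller.scrollTop, scroller.scrollHeight)
scroller.scrollTop = 0 // jump back to the top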

Update: Steve Genoud points out that there’s an additional benefit to scrolling the main document on iOS: you can tap the status bar to scroll back up to the top. Another usability win!

Update: Michael Howell notes that this technique can cause problems for fragment navigation, e.g. index.html#fragment, because the fixed nav could cover up the target element. Amusingly, I’ve noticed this problem in WordPress.com (where my blog is hosted) if you navigate to a fragment while logged in. I also ran into this in Pinafore in the case of element.scrollIntoView(), which I worked around by updating the scrollTop to account for the nav height right after calling scrollIntoView(true). Good to be aware of!
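Roughly, that scrollIntoView() workaround looks like this (element is the scroll target, and the nav selector is illustrative):

var navHeight = document.querySelector('nav').getBoundingClientRect().height

element.scrollIntoView(true)
// scrollIntoView(true) aligns the element with the very top of the
// viewport, so scroll back up to compensate for the fixed nav
document.scrollingElement.scrollTop -= navHeight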

Accurately measuring layout on the web

We all want to make faster websites. The question is just what to measure, and how to use that information to determine what’s “slow” and what could be made faster.

The browser rendering pipeline is complicated. For that reason, it’s tricky to measure the performance of a webpage, especially when components are rendered client-side and everything becomes an intricate ballet between JavaScript, the DOM, styling, layout, and rendering. Many folks stick to what they understand, and so they may under-measure or completely mis-measure their website’s frontend performance.

So in this post, I want to demystify some of these concepts, and offer techniques for accurately measuring what’s going on when we render things on the web.

The web rendering pipeline

Let’s say we have a component that is rendered client-side, using JavaScript. To keep things simple, I wrote a demo component in vanilla JS, but everything I’m about to say would also apply to React, Vue, Angular, etc.

When we use the handy Performance profiler in the Chrome Dev Tools, we see something like this:

Screenshot of Chrome Dev Tools showing work on the UI thread divided into JavaScript, then Style, then Layout, then Render

This is a view of the CPU costs of our component, in terms of milliseconds on the UI thread. To break things down, here are the steps required:

  1. Execute JavaScript – executing (but not necessarily compiling) JavaScript, including any state manipulation, “virtual DOM diffing,” and modifying the DOM.
  2. Calculate style – taking a CSS stylesheet and matching its selector rules with elements in the DOM. This is also known as “formatting.”
  3. Calculate layout – taking those CSS styles we calculated in step #2 and figuring out where the boxes should be laid out on the screen. This is also known as “reflow.”
  4. Render – the process of actually putting pixels on the screen. This often involves painting, compositing, GPU acceleration, and a separate rendering thread.

All of these steps incur CPU costs, and therefore all of them can impact the user experience. If any one of them takes a long time, it can lead to the appearance of a slow-loading component.

The naïve approach

Now, the most common mistake that folks make when trying to measure this process is to skip steps 2, 3, and 4 entirely. In other words, they just measure the time spent executing JavaScript, and completely ignore everything after that.

Screenshot of Chrome Dev Tools, showing an arrow pointing after JavaScript but before Style and Layout with the text 'Most devs stop measuring here'

When I worked as a browser performance engineer, I would often look at a trace of a team’s website and ask them which mark they used to measure “done.” More often than not, it turned out that their mark landed right after JavaScript, but before style and layout, meaning the last bit of CPU work wasn’t being measured.

So how do we measure these costs? For the purposes of this post, let’s focus on how we measure style and layout in particular. As it turns out, the render step is much more complicated to measure, and indeed it’s impossible to measure accurately, because rendering is often a complex interplay between separate threads and the GPU, and therefore isn’t even visible to userland JavaScript running on the main thread.

Style and layout calculations, however, are 100% measurable because they block the main thread. And yes, this is true even with something like Firefox’s Stylo engine – even if multiple threads can be employed to speed up the work, ultimately the main thread has to wait on all the other threads to deliver the final result. This is just the way the web works, as specced.

What to measure

So in practical terms, we want to put a performance mark before our JavaScript starts executing, and another one after all the additional work is done:

Screenshot of Chrome Dev Tools, with arrow pointing before JavaScript execution saying 'Ideal start' and arrow pointing after Render (Paint) saying 'Ideal end'

I’ve written previously about various JavaScript timers on the web. Can any of these help us out?

As it turns out, requestAnimationFrame will be our main tool of choice, but there’s a problem. As Jake Archibald explains in his excellent talk on the event loop, browsers disagree on where to fire this callback:

Screenshot of Chrome Dev Tools showing arrow pointing before style/layout saying "Chrome, FF, Edge >= 18" and arrow pointing after style/layout saying "Safari, IE, Edge < 18"

Now, per the HTML5 event loop spec, requestAnimationFrame is indeed supposed to fire before style and layout are calculated. Edge has already fixed this in v18, and perhaps Safari will fix it in the future as well. But that would still leave us with inconsistent behavior in IE, as well as in older versions of Safari and Edge.

Also, if anything, the spec-compliant behavior actually makes it more difficult to measure style and layout! In an ideal world, the spec would have two timers – one for requestAnimationFrame, and another for requestAnimationFrameAfterStyleAndLayout (or something like that). In fact, there has been some discussion at the WHATWG about adding an API for this, but so far it’s just a gleam in the spec authors’ eyes.

Unfortunately, we live in the real world with real constraints, and we can’t wait for browsers to add this timer. So we’ll just have to figure out how to crack this nut, even with browsers disagreeing on when requestAnimationFrame should fire. Is there any solution that will work cross-browser?

Cross-browser “after frame” callback

There’s no solution that will work perfectly to place a callback right after style and layout, but based on the advice of Todd Reifsteck, I believe this comes closest:

requestAnimationFrame(() => {
  // in spec-compliant browsers, this callback fires before style/layout...
  setTimeout(() => {
    // ...but a task queued here runs after style/layout in either case
    performance.mark('end')
  })
})

Let’s break down what this code is doing. In the case of spec-compliant browsers, such as Chrome, it looks like this:

Screenshot of Chrome Dev Tools showing 'Start' before JavaScript execution, requestAnimationFrame before style/layout, and setTimeout falling a bit after Paint/Render

Note that rAF fires before style and layout, but the next setTimeout fires just after those steps (including “paint,” in this case).

And here’s how it works in non-spec-compliant browsers, such as Edge 17:

Screenshot of Edge F12 Tools showing 'Start' before JavaScript execution, and requestAnimationFrame/setTimeout both almost immediately after style/layout

Note that rAF fires after style and layout, and the next setTimeout happens so soon that the Edge F12 Tools actually render the two marks on top of each other.

So essentially, the trick is to queue a setTimeout callback inside of a rAF, which ensures that the second callback happens after style and layout, regardless of whether the browser is spec-compliant or not.
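Putting it all together, the measurement code might look something like this (renderComponent() is a stand-in for whatever renders your component):

performance.mark('start')
renderComponent() // JavaScript execution and DOM manipulation
requestAnimationFrame(() => {
  setTimeout(() => {
    performance.mark('end')
    // captures JavaScript plus the event-loop-driven style/layout costs
    performance.measure('renderTotal', 'start', 'end')
  })
})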

Downsides and alternatives

Now to be fair, there are a lot of problems with this technique:

  1. setTimeout is somewhat unpredictable in that it may be clamped to 4ms (or more in some cases).
  2. If there are any other setTimeout callbacks that have been queued elsewhere in the code, then ours may not be the last one to run.
  3. In the non-spec-compliant browsers, doing the setTimeout is actually a waste, because we already have a perfectly good place to set our mark – right inside the rAF!

However, if you’re looking for a one-size-fits-all solution for all browsers, rAF + setTimeout is about as close as you can get. Let’s consider some alternative approaches and why they wouldn’t work so well:

rAF + microtask

requestAnimationFrame(() => {
  Promise.resolve().then(() => {
    performance.mark('after')
  })
})

This one doesn’t work at all, because microtasks (e.g. Promises) run immediately after JavaScript execution has completed. So it doesn’t wait for style and layout at all:

Screenshot of Chrome Dev Tools showing microtask firing before style/layout

rAF + requestIdleCallback

requestAnimationFrame(() => {
  requestIdleCallback(() => {
    performance.mark('after')
  })
})

Calling requestIdleCallback from inside of a requestAnimationFrame will indeed capture style and layout:

Screenshot of Chrome Dev Tools showing requestIdleCallback firing a bit after render/paint

However, if the microtask version fires too early, I would worry that this one would fire too late. The screenshot above shows it firing fairly quickly, but if the main thread is busy doing other work, rIC could be delayed a long time waiting for the browser to decide that it’s safe to run some “idle” work. This one is far less of a sure bet than setTimeout.

rAF + rAF

requestAnimationFrame(() => {
  requestAnimationFrame(() => {
    performance.mark('after')
  })
})

This one, also called a “double rAF,” is a perfectly fine solution, but compared to the setTimeout version, it probably captures more idle time – roughly 16.7ms on a 60Hz screen, as opposed to the standard 4ms for setTimeout – and is therefore slightly more inaccurate.

Screenshot of Chrome Dev Tools showing a second requestAnimationFrame firing a bit after render/paint

You might wonder about that, given that I’ve already talked about setTimeout(0) not really firing in 0 (or even necessarily 4) milliseconds in a previous blog post. But keep in mind that, even though setTimeout() may be clamped by as much as a second, this only occurs in a background tab. And if we’re running in a background tab, we can’t count on rAF at all, because it may be paused altogether. (How to deal with noisy telemetry from background tabs is an interesting but separate question.)

So rAF+setTimeout, despite its flaws, is probably still better than rAF+rAF.

Not fooling ourselves

In any case, whether we choose rAF+setTimeout or double rAF, we can rest assured that we’re capturing any event-loop-driven style and layout costs. With this measure in place, it’s much less likely that we’ll fool ourselves by only measuring JavaScript and direct DOM API performance.

As an example, let’s consider what would happen if our style and layout costs weren’t just invoked by the event loop – that is, if our component were calling one of the many APIs that force style/layout recalculation, such as getBoundingClientRect(), offsetTop, etc.

If we call getBoundingClientRect() just once, notice that the style and layout calculations shift over into the middle of JavaScript execution:

Screenshot of Chrome Dev Tools showing style/layout costs moved to the left inside of JavaScript execution under getBoundingClientRect with red triangles on each purple rectangle

The important point here is that we’re not doing anything any slower or faster – we’ve merely moved the costs around. If we don’t measure the full costs of style and layout, though, we might deceive ourselves into thinking that calling getBoundingClientRect() is slower than not calling it! In fact, though, it’s just a case of robbing Peter to pay Paul.

It’s worth noting, though, that the Chrome Dev Tools have added little red triangles to our style/layout calculations, with the message “Forced reflow is a likely performance bottleneck.” This can be a bit misleading in this case, because again, the costs are not actually any higher – they’ve just moved to earlier in the trace.

(Now it’s true that, if we call getBoundingClientRect() repeatedly and change the DOM in the process, then we might invoke layout thrashing, in which case the overall costs would indeed be higher. So the Chrome Dev Tools are right to warn folks in that case.)

In any case, my point is that it’s easy to fool yourself if you only measure explicit JavaScript execution, and ignore any event-loop-driven style and layout costs that come afterward. The two costs may be scheduled differently, but they both impact performance.

Conclusion

Accurately measuring layout on the web is hard. There’s no perfect metric to capture style and layout – or indeed, rendering – even though all three can impact the user experience just as much as JavaScript.

However, it’s important to understand how the HTML5 event loop works, and to place performance marks at the appropriate points in the component rendering lifecycle. This can help avoid any mistaken conclusions about what’s “slower” or “faster” based on an incomplete view of the pipeline, and ensure that style and layout costs are accounted for.

I hope this blog post was useful, and that the art of measuring client-side performance is a little less mysterious now. And maybe it’s time to push browser vendors to add requestAnimationFrameAfterStyleAndLayout (we’ll bikeshed on the name though!).

Thanks to Ben Kelly, Todd Reifsteck, and Alex Russell for feedback on a draft of this blog post.