Archive for September, 2018

YubiKeys are neat

I recently picked up a YubiKey, because we use them at work and I was impressed with how simple and easy-to-use they are. I’ve been really happy with it so far – enough to write a blog post about it.

Photo of my YubiKeys on a keychain on a table

Basically, YubiKey works like this: whenever you need to do two-factor authentication (2FA), you just plug this little wafer into a USB port and tap a button, and it types out your one-time pass code. Interestingly, it does this by pretending to be a keyboard, which means it doesn’t require any special drivers. (Although it’s funny how Mac pops up a window saying, “Set up your keyboard…”)

The YubiKey Neo, which is the one I got, also supports NFC, so you can use it on a phone or tablet as well. I’ve only tested it on Android, but apparently iOS has some support too.

YubiKey is especially nice for sites like Google, GitHub, and Dropbox, because it runs directly in the browser using the FIDO U2F standard. Currently only Chrome supports this out of the box, but in Firefox you can set security.webauth.u2f to true in about:config and it works just fine. (I use Firefox as my main browser, so I can confirm that this works across a variety of websites.)

One thing that pleasantly surprised me about YubiKey is that you can even use it for websites that don’t support U2F devices. Just download the Yubico Authenticator app, plug in your YubiKey, and now your YubiKey is an OTP app, i.e. a replacement for Google Authenticator, Authy, FreeOTP, etc. (Note that Yubico Authenticator doesn’t seem to support iOS, but it runs on desktops and Android, and is even open source on F-Droid.)

What I like the most about Yubico Authenticator is that it works the same across multiple devices, as long as you’re using the same YubiKey. This is great for me, because I have a weird Android setup, and so I’m frequently factory-resetting my phone, meaning I’d normally have to go through the hassle of setting up all my 2FA accounts again. But with YubiKey, I just have to remember to hold onto this little device that’s smaller than a stick of gum and fits on a keyring.

One thing I did find a bit annoying, though, is that the NFC communication between my YubiKey and OnePlus 5T is pretty spotty. To get it to work, I have to remove my phone from its case and the YubiKey from my keyring and clumsily mash them together a few times until it finally registers. But it does work.

Overall though, YubiKey is really cool. Definitely a worthy addition to one’s keyring, and as a bonus it makes me feel like a 21st-century James Bond. (I mean, when I plug it in and it “just works,” not when I’m mashing it into my phone like a monkey.)

If you’d like to read more about YubiKey and security, you might enjoy this article by Maciej Ceglowski on “basic security precautions for non-profits and journalists in the United States.”

Update: In addition to U2F, there is also an emerging standard called WebAuthn which is supported in Chrome, Firefox, and Edge without flags and is supported by YubiKey. So far though, website support seems limited, with Dropbox being a major exception.

Moving on from Microsoft

When I first joined Microsoft, I wrote an idealistic blog post describing the web as a public good, one that I hoped to serve by working for a browser vendor. I still believe in that vision of the web: it’s the freest platform on earth, it’s not totally dominated by any one entity, and it provides both users and developers with a degree of control that isn’t really available in the walled-garden app models of iOS, Android, etc.

Plaque saying "this is for everyone", dedicated to Tim Berners-Lee

“This is for everyone”, a tribute to the web and Tim Berners-Lee (source)

After joining Microsoft, I continued to be blown away by the dedication and passion of those working on the Edge browser, many of whom shared my feelings about the open web. There were folks who were ex-Mozilla, ex-Opera, ex-Google, ex-Samsung – basically ex-AnyBrowserVendor. There were also plenty of Microsoft veterans who took their expertise in Windows, the CLR, and other Microsoft technologies and applied it with gusto to the unique challenges of the web.

It was fascinating to see so many people from so many different backgrounds working on something as enormously complex as a browser, and collaborating with like-minded folks at other browser vendors in the open forums of the W3C, TC39, WHATWG, and other standards bodies. Getting a peek behind the curtain at how the web “sausage” is made was a unique experience, and one that I cherish.

I’m also proud of the work I accomplished during my tenure at Microsoft. I wrote a blog post on how browser scrolling works, which not only provided a comprehensive overview for web developers, but I suspect may have even spurred Mozilla to up their scrolling game. (Congrats to Firefox for becoming the first browser to support asynchronous keyboard scrolling!)

I also did some work on the Intersection Observer spec, which I became interested in after discovering some cross-browser incompatibilities prompted by a bug in Mastodon. This was exactly the kind of thing I wanted to do for the web:

  1. Find a browser bug while working on an open-source project.
  2. Rather than just work around the bug, actually fix the browsers!
  3. Discuss the problem with the spec owners at other browser vendors.
  4. Submit the fixes to the spec and the web platform tests.

I didn’t do as much spec work as I would have liked (in particular, as a member of the Web Performance Working Group), but I am happy with the small contributions I managed to make.

While at Microsoft, I was also given the opportunity to speak at several conferences, an experience I found exhilarating, if a bit exhausting. (Eight talks in one year was perhaps too ambitious of me!) Overall, though, being a public speaker was a part of the browser gig that I thoroughly enjoyed, and the friendships I made with other conference attendees will surely linger in my mind long after I’ve forgotten whatever it was I gave a talk about. (Thankfully, though, there are always the videos!)

Photo of me delivering the talk "Solving the web performance crisis"

“Solving the web performance crisis,” a talk I gave on JavaScript performance (video link)

I also wrote a blog post on input responsiveness, which later inspired a post on JavaScript timers. It’s amazing how much you can learn about how a browser works by (wait for it) working on a browser! During my first year at Microsoft, I found myself steeped in discussions about the HTML5 event loop, Promises, setTimeout, setImmediate, and all the other wonderful ways of scheduling things on the web, because at the time we were knee-deep in rewriting the EdgeHTML event loop to improve performance and reliability.

Some of this work was even paralleled by other browser vendors, as shown in this blog post by Ben Kelly about Firefox 52. I fondly recall Ben and me swapping some benchmarks at a TPAC meeting. (When you get into the nitty-gritty of this stuff, sometimes it feels like other browser vendors are the only ones who really understand what you’re going through!)

I also did some work internal to Microsoft that I believe had a positive impact. In short, I met with lots of web teams and coached them on performance – walking through traces of Edge, IE, and Chrome – and helped them improve performance of their site across all browsers. Most of this coaching involved Windows Performance Analyzer, which is a surprisingly powerful tool despite being somewhat under-documented. (Although this post by my colleague Todd Reifsteck goes a long way toward demystifying some of the trickier aspects.)

I discussed this work a bit in an interview I did for Between the Wires, but most of it is private to the teams I worked with, since performance can be a tricky subject to talk about publicly. In general, neither browser vendors nor website owners want to shout to the heavens about their performance problems, so to avoid embarrassing both parties, most of the work I did in this area will probably never be public.

Presentation slide saying "you've found a perf issue," either "website looks bad compared to competitors" or "browser looks bad compared to competitors," with hand-drawn "websites" and browser logos

A slide from a talk I gave at the Edge Web Summit (video link, photo source)

Still, this work (we called it “Performance Clubs”) was one of my favorite parts of working at Microsoft. Being a “performance consultant,” analyzing traces, and reasoning about the interplay between browser architecture and website architecture was something I really enjoyed. It was part education (I gave a lot of impromptu speeches in front of whiteboards!) and part detective work (lots of puzzling over traces, muttering to myself “this thread isn’t supposed to do that!”). But as someone who is fond of both people and technology, I think I was well-suited for the task.

After Microsoft, I’ll continue doing this same sort of work, but in a new context. I’ll be joining Salesforce as a developer working on the Lightning Platform. I’m looking forward to the challenges of building an easy-to-use web framework that doesn’t sacrifice on performance – anticipating the needs of developers as well as the inherent limitations of CPUs, GPUs, memory, storage, and networks.

It will also be fun to apply my knowledge of cross-browser differences to the enterprise space, where developers often don’t have the luxury of saying “just use another browser.” It’s an unforgiving environment to develop in, but those are exactly the kinds of challenges I relish about the web!

For those who follow me mostly for my open-source work, nothing will change with my transition to Salesforce. I intend to continue working on Mastodon- and JavaScript-related projects in my spare time, including Pinafore. In fact, my experience with Pinafore and SvelteJS may pay dividends at my new gig; one of my new coworkers even mentioned SvelteJS as their north star for building a great JavaScript framework. (Seems I may have found my tribe!) Much of my Salesforce work will also be open-source, so I’m looking forward to spending more time back on GitHub as well. (Although perhaps not as intensely as I used to.)

Leaving Microsoft is a bit bittersweet for me. I’m excited by the new challenges, but I’m also going to miss all the talented and passionate people whose company I enjoyed on the Microsoft Edge team. That said, the web is nothing if not a big tent, and there’s plenty of room to work in, on, and around it. To everyone else who loves the web as I do: I’m sure our paths will cross again.

A tour of JavaScript timers on the web

Pop quiz: what is the difference between these JavaScript timers?

  • Promises
  • setTimeout
  • setInterval
  • setImmediate
  • requestAnimationFrame
  • requestIdleCallback

More specifically, if you queue up all of these timers at once, do you have any idea which order they’ll fire in?

If not, you’re probably not alone. I’ve been doing JavaScript and web programming for years, I’ve worked for a browser vendor for two of those years, and it’s only recently that I really came to understand all these timers and how they play together.

In this post, I’m going to give a high-level overview of how these timers work, and when you might want to use them. I’ll also cover the Lodash functions debounce() and throttle(), because I find them useful as well.

Promises and microtasks

Let’s get this one out of the way first, because it’s probably the simplest. A Promise callback is also called a “microtask,” and it runs with the same timing as MutationObserver callbacks. Assuming queueMicrotask() ever makes it out of spec-land and into browser-land, it will be the same thing as well.
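To see that timing in action, you can queue a microtask and a regular task side by side; the microtask always runs first:

```javascript
// Microtasks run as soon as the current synchronous code finishes,
// before the next task, even when the task was queued first.
setTimeout(() => {
  console.log('task (setTimeout)')
}, 0)

Promise.resolve().then(() => {
  console.log('microtask (promise)')
})

console.log('synchronous')
// logs: "synchronous", then "microtask (promise)", then "task (setTimeout)"
```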

I’ve already written a lot about promises, so here I’ll just clear up one common misconception: queuing a promise callback does not give the browser a chance to breathe. Just because you’re scheduling an asynchronous callback doesn’t mean the browser can render, or process input, or do any of the stuff we want browsers to do.

For example, let’s say we have a function that blocks the main thread for 1 second:

function block() {
  var start = Date.now()
  while (Date.now() - start < 1000) { /* wheee */ }
}

If we were to queue up a bunch of microtasks to call this function:

for (var i = 0; i < 100; i++) {
  Promise.resolve().then(block)
}

This would block the browser for about 100 seconds. It’s basically the same as if we had done:

for (var i = 0; i < 100; i++) {
  block()
}

Microtasks execute immediately after any synchronous execution is complete, and the browser gets no chance to fit in any other work between them. So if you think you can break up a long-running task by splitting it into microtasks, it won’t do what you expect.
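To actually break up long-running work, you need real tasks. Here’s a sketch contrasting the two approaches (the helper names are my own, not any standard API):

```javascript
// Chaining chunks as microtasks: they all run back-to-back, and the
// browser can't render or handle input in between any of them.
function runAsMicrotasks (chunks) {
  return chunks.reduce(
    (promise, chunk) => promise.then(chunk),
    Promise.resolve()
  )
}

// Scheduling each chunk as a separate task (via setTimeout) gives the
// browser a chance to render and process input between chunks.
function runAsTasks (chunks) {
  return chunks.reduce(
    (promise, chunk) => promise.then(() => new Promise(resolve => {
      setTimeout(() => {
        chunk()
        resolve()
      }, 0)
    })),
    Promise.resolve()
  )
}
```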

setTimeout and setInterval

These two are cousins: setTimeout queues a task to run in x number of milliseconds, whereas setInterval queues a recurring task to run every x milliseconds.

The thing is… browsers don’t really respect that milliseconds thing. You see, historically, web developers have abused setTimeout. A lot. To the point where browsers have had to add mitigations for setTimeout(/* ... */, 0) to avoid locking up the browser’s main thread, because a lot of websites tended to throw around setTimeout(0) like confetti.

This is the reason that a lot of the tricks in crashmybrowser.com don’t work anymore, such as queuing up a setTimeout that calls two more setTimeouts, which call two more setTimeouts, etc. I covered a few of these mitigations from the Edge side of things in “Improving input responsiveness in Microsoft Edge”.

Broadly speaking, a setTimeout(0) doesn’t really run in zero milliseconds. Usually, it runs in 4. Sometimes, it may run in 16 (this is what Edge does when it’s on battery power, for instance). Sometimes it may be clamped to 1 second (e.g., when running in a background tab). These are the sorts of tricks that browsers have had to invent to prevent runaway web pages from chewing up your CPU doing useless setTimeout work.
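You can observe this clamping yourself by timing a chain of nested zero-delay timeouts. (The exact numbers will vary by browser, power state, and tab visibility; per the HTML spec, the 4ms clamp kicks in after a few levels of nesting.)

```javascript
// Measure the average actual delay of a chain of setTimeout(0) calls.
function measureTimeoutClamping (iterations, done) {
  const start = Date.now()
  let remaining = iterations

  function tick () {
    if (--remaining > 0) {
      setTimeout(tick, 0)
    } else {
      done((Date.now() - start) / iterations) // average ms per timeout
    }
  }

  setTimeout(tick, 0)
}

measureTimeoutClamping(50, (avg) => {
  console.log('average delay per setTimeout(0):', avg.toFixed(1), 'ms')
})
```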

So that said, setTimeout does allow the browser to run some work before the callback fires (unlike microtasks). But if your goal is to allow input or rendering to run before the callback, setTimeout is usually not the best choice because it only incidentally allows those things to happen. Nowadays, there are better browser APIs that can hook more directly into the browser’s rendering system.

setImmediate

Before moving on to those “better browser APIs,” it’s worth mentioning this thing. setImmediate is, for lack of a better word … weird. If you look it up on caniuse.com, you’ll see that only Microsoft browsers support it. And yet it also exists in Node.js, and has lots of “polyfills” on npm. What the heck is this thing?

setImmediate was originally proposed by Microsoft to get around the problems with setTimeout described above. Basically, setTimeout had been abused, and so the thinking was that we can create a new thing to allow setImmediate(0) to actually be setImmediate(0) and not this funky “clamped to 4ms” thing. You can see some discussion about it from Jason Weber back in 2011.

Unfortunately, setImmediate was only ever adopted by IE and Edge. Part of the reason it’s still in use is that it has a sort of superpower in IE, where it allows input events like keyboard and mouseclicks to “jump the queue” and fire before the setImmediate callback is executed, whereas IE doesn’t have the same magic for setTimeout. (Edge eventually fixed this, as detailed in the previously-mentioned post.)

Also, the fact that setImmediate exists in Node means that a lot of “Node-polyfilled” code is using it in the browser without really knowing what it does. It doesn’t help that the differences between Node’s setImmediate and process.nextTick are very confusing, and even the official Node docs say the names should really be reversed. (For the purposes of this blog post though, I’m going to focus on the browser rather than Node because I’m not a Node expert.)

Bottom line: use setImmediate if you know what you’re doing and you’re trying to optimize input performance for IE. If not, then just don’t bother. (Or only use it in Node.)
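If you do want setImmediate’s fast path where it exists, the usual pattern is simple feature detection with a setTimeout fallback (this is roughly what the npm shims do, minus their fancier postMessage tricks):

```javascript
// Use setImmediate where it's available (IE/Edge, Node), otherwise fall
// back to setTimeout(fn, 0). Note the fallback is still subject to the
// 4ms clamping described above, so this isn't a true polyfill.
const defer = typeof setImmediate === 'function'
  ? setImmediate
  : (fn) => setTimeout(fn, 0)

defer(() => {
  console.log('runs asynchronously, as soon as the browser allows')
})
```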

requestAnimationFrame

Now we get to the most important setTimeout replacement, a timer that actually hooks into the browser’s rendering loop. By the way, if you don’t know how the browser event loop works, I strongly recommend this talk by Jake Archibald. Go watch it, I’ll wait.

Okay, now that you’re back, requestAnimationFrame basically works like this: it’s sort of like a setTimeout, except instead of waiting for some unpredictable amount of time (4 milliseconds, 16 milliseconds, 1 second, etc.), it executes before the browser’s next style/layout calculation step. Now, as Jake points out in his talk, there is a minor wrinkle in that it actually executes after this step in Safari, IE, and Edge <18, but let's ignore that for now since it's usually not an important detail.

The way I think of requestAnimationFrame is this: whenever I want to do some work that I know is going to modify the browser's style or layout – for instance, changing CSS properties or starting up an animation – I stick it in a requestAnimationFrame (abbreviated to rAF from here on out). This ensures a few things:

  1. I'm less likely to layout thrash, because all of the changes to the DOM are being queued up and coordinated.
  2. My code will naturally adapt to the performance characteristics of the browser. For instance, if it's a low-end device that is struggling to render some DOM elements, rAF will naturally slow down from the usual 16.7ms intervals (on 60Hz screens), and thus it won't bog down the machine in the same way that running a lot of setTimeouts or setIntervals might.
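The batching idea in point 1 can be sketched like this. (batchWrite is a hypothetical helper of my own invention, and the setTimeout fallback is only there so the sketch runs outside a browser.)

```javascript
// Queue up DOM writes and flush them together in a single rAF callback,
// so all style/layout changes are coordinated once per frame.
const raf = typeof requestAnimationFrame === 'function'
  ? requestAnimationFrame
  : (fn) => setTimeout(fn, 16) // crude fallback for non-browser environments

let pendingWrites = null

function batchWrite (write) {
  if (!pendingWrites) {
    pendingWrites = []
    raf(() => {
      const writes = pendingWrites
      pendingWrites = null
      writes.forEach(w => w()) // all DOM mutations happen here, together
    })
  }
  pendingWrites.push(write)
}
```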

This is why animation libraries that don't rely on CSS transitions or keyframes, such as GreenSock or React Motion, will typically make their changes in a rAF callback. If you're animating an element between opacity: 0 and opacity: 1, there's no sense in queuing up a billion callbacks to animate every possible intermediate state, including opacity: 0.0000001 and opacity: 0.9999999.

Instead, you're better off just using rAF to let the browser tell you how many frames you're able to paint during a given period of time, and calculate the "tween" for that particular frame. That way, slow devices naturally end up with a slower framerate, and faster devices end up with a faster framerate, which wouldn't necessarily be true if you used something like setTimeout, which operates independently of the browser's rendering speed.

requestIdleCallback

rAF is probably the most useful timer in the toolkit, but requestIdleCallback is worth talking about as well. The browser support isn't great, but there's a polyfill that works just fine (and it uses rAF under the hood).

In many ways rAF is similar to requestIdleCallback. (I'll abbreviate it to rIC from now on. Starting to sound like a pair of troublemakers from West Side Story, huh? "There go Rick and Raff, up to no good!")

Like rAF, rIC will naturally adapt to the browser's performance characteristics: if the device is under heavy load, rIC may be delayed. The difference is that rIC fires on the browser "idle" state, i.e. when the browser has decided it doesn't have any tasks, microtasks, or input events to process, and you're free to do some work. It also gives you a "deadline" to track how much of your budget you're using, which is a nice feature.
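Here's roughly how you might use that deadline to chip away at a queue of low-priority work. (The ric fallback is just so the sketch runs where requestIdleCallback isn't available; it fakes a generous idle budget.)

```javascript
const ric = typeof requestIdleCallback === 'function'
  ? requestIdleCallback
  : (fn) => setTimeout(() => fn({ timeRemaining: () => 16 }), 0)

// Run queued jobs while the browser reports idle time remaining, then
// re-schedule ourselves if there's still work left over.
function processWhenIdle (queue, done) {
  ric((deadline) => {
    while (queue.length > 0 && deadline.timeRemaining() > 0) {
      queue.shift()()
    }
    if (queue.length > 0) {
      processWhenIdle(queue, done) // more work left: wait for the next idle period
    } else if (done) {
      done()
    }
  })
}
```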

Dan Abramov has a good talk from JSConf Iceland 2018 where he shows how you might use rIC. In the talk, he has a webapp that calls rIC for every keyboard event while the user is typing, and then it updates the rendered state inside of the callback. This is great because a fast typist can cause many keydown/keyup events to fire very quickly, but you don't necessarily want to update the rendered state of the page for every keypress.

Another good example of this is a “remaining character count” indicator on Twitter or Mastodon. I use rIC for this in Pinafore, because I don't really care if the indicator updates for every single key that I type. If I'm typing quickly, it's better to prioritize input responsiveness so that I don't lose my sense of flow.

Screenshot of Pinafore with some text entered in the text box and a digit counter showing the number of remaining characters

In Pinafore, the little horizontal bar and the “characters remaining” indicator update as you type.

One thing I’ve noticed about rIC, though, is that it’s a little finicky in Chrome. In Firefox it seems to fire whenever I would, intuitively, think that the browser is “idle” and ready to run some code. (Same goes for the polyfill.) In mobile Chrome for Android, though, I’ve noticed that whenever I scroll with touch scrolling, it might delay rIC for several seconds even after I’m done touching the screen and the browser is doing absolutely nothing. (I suspect the issue I’m seeing is this one.)

Update: Alex Russell from the Chrome team informs me that this is a known issue and should be fixed soon!

In any case, rIC is another great tool to add to the tool chest. I tend to think of it this way: use rAF for critical rendering work, use rIC for non-critical work.

debounce and throttle

These two functions aren’t built into the browser, but they’re so useful that they’re worth calling out on their own. If you aren’t familiar with them, there’s a good breakdown in CSS Tricks.

My standard use for debounce is inside of a resize callback. When the user is resizing their browser window, there’s no point in updating the layout for every resize callback, because it fires too frequently. Instead, you can debounce for a few hundred milliseconds, which will ensure that the callback eventually fires once the user is done fiddling with their window size.
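In case it helps to see the mechanics, here's a minimal trailing-edge debounce, which is roughly what Lodash does by default. (Lodash's real implementation handles many more options and edge cases.)

```javascript
// Minimal debounce: the callback only fires once `wait` ms have passed
// without any new calls, i.e. after the user has "settled down."
function debounce (fn, wait) {
  let timer
  return function (...args) {
    clearTimeout(timer)
    timer = setTimeout(() => fn.apply(this, args), wait)
  }
}

// Usage: only recalculate layout once resizing has stopped for 300ms.
// window.addEventListener('resize', debounce(updateLayout, 300))
```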

throttle, on the other hand, is something I use much more liberally. For instance, a good use case is inside of a scroll event. Once again, it’s usually senseless to try to update the rendered state of the app for every scroll callback, because it fires too frequently (and the frequency can vary from browser to browser and from input method to input method… ugh). Using throttle normalizes this behavior, and ensures that it only fires every x number of milliseconds. You can also tweak Lodash’s throttle (or debounce) function to fire at the start of the delay, at the end, both, or neither.
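A stripped-down throttle looks like this. (Leading edge only, for simplicity; Lodash's version also supports trailing-edge calls and cancellation.)

```javascript
// Minimal leading-edge throttle: fire immediately, then ignore further
// calls until `wait` ms have elapsed since the last accepted call.
function throttle (fn, wait) {
  let last = 0
  return function (...args) {
    const now = Date.now()
    if (now - last >= wait) {
      last = now
      fn.apply(this, args)
    }
  }
}

// Usage: handle at most one scroll event every 100ms.
// window.addEventListener('scroll', throttle(onScroll, 100))
```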

In contrast, I wouldn’t use debounce for the scrolling scenario, because I don’t want the UI to only update after the user has explicitly stopped scrolling. That can get annoying, or even confusing, because the user might get frustrated and try to keep scrolling in order to update the UI state (e.g. in an infinite-scrolling list). throttle is better in this case, because it doesn’t wait for the scroll event to stop firing.

throttle is a function I use all over the place for all kinds of user input, and even for some regularly-scheduled tasks like IndexedDB cleanups. It’s extremely useful. Maybe it should just be baked into the browser some day!

Conclusion

So that’s my whirlwind tour of the various timer functions available in the browser, and how you might use them. I probably missed a few, because there are certainly some exotic ones out there (postMessage or lifecycle events, anyone?). But hopefully this at least provides a good overview of how I think about JavaScript timers on the web.

How to deal with “discourse”

It was chaotic human weather. There’d be a nice morning and then suddenly a storm would roll in.

– Jaron Lanier, describing computer message boards in the 1970s (source, p. 42)

Are you tired of the “discourse” and drama in Mastodon and the fediverse? When it happens, do you wish it would just go away?

Here’s one simple trick to stop discourse dead in its tracks:

Don’t talk about it.

Now, this may sound too glib and oversimplified, so to put it in other words:

When discourse is happening, just don’t talk about it.

That’s it. That’s the way you solve discourse. It’s really as easy as that.

Discourse is a reflection of the innate human desire to not only look at a car crash, but to slow down and gawk at it, causing traffic to grind to a halt so that everyone else says, “Well, I may as well see what the fuss is about.” The more you talk about it, the more you feed it.

So just don’t. Don’t write hot takes on it, don’t make jokes about it, don’t comment on how you’re tired of it, don’t try to calm everybody down, don’t write a big thread about how discourse is ruining the fediverse and won’t it please stop. Just don’t. Pretend like it’s not even there.

There’s a scene in a Simpsons Halloween episode where a bunch of billboard ads have come to life and are running amok, destroying Springfield. Eventually, Lisa realizes that the only power the ads have is the power we give them, and if you “just don’t look,” they’ll keel over and die.

Simpsons animation of billboard ads wrecking buildings with subtitle "Just don't look"

The “discourse” is exactly the same. Every time you talk about it, even just to mention it offhand or make a joke about it, it encourages more people to say to themselves, “Ooh, a fight! I gotta check this out.” Then they scroll back in their timeline to try to figure out the context, and the cycle begins anew. It’s like a disease that spreads by people complaining about it.

This is why whenever discourse is happening, I just talk about something else. I might also block or mute anyone who is talking about it, because I find the endless drama boring.

Like a car crash, it’s never really interesting. It’s never something that’s going to change your life by finding out about it. It’s always the same petty squabbling you’ve seen a hundred times online.

Once the storm has passed, though, it’s safe to talk about it. You may even write a longwinded blog post about it. But while it’s happening, remember: “just don’t look, just don’t look.”