Archive for the ‘Web’ Category

Burnout and Twitter fatigue

I suppose it should come as no surprise that the author of “What it feels like to be an open-source maintainer” is suffering from burnout, but to be honest it caught me off-guard. Even after writing it, I thought I was just going through a rough patch, but in retrospect it’s impossible to call what I’ve been feeling by any other name.

In the six months since I wrote that post, I’ve largely abandoned my involvement in PouchDB as well as dozens of other open-source projects. I’ve stepped away from the social telephone that only brings bad news and let the GitHub notifications pile up on my doorstep. (Over a thousand now, but who’s counting?) I’ve stopped going to conferences and only written one new blog post, which had actually been in draft for several months and so barely qualifies. In short, I display all the symptoms of burnout.

I could rehash the same material from that blog post, but really it’s only part of the story. Dealing with a constant flood of negative attention on your open-source projects is enough to wear anybody down, but that’s only a proximate cause. This story is best told against a backdrop of general malaise and world-weariness, which perhaps other folks in the tech industry can identify with.

In this post, I’m going to talk about one major stage of my burnout, which was the end of my love affair with Twitter. There’s more to cover on the subject of burnout, but hopefully this will serve as a good starting point.

2016: the terrible, horrible, no good, very bad year

It’s become an internet cliché to say that 2016 was a terrible year, but last year’s US election truly shocked me. It also shook me out of a lot of my complacency over the direction of the tech industry and of the world in general. I consider myself a centrist (or perhaps center-left), but the election of a man as obviously vile and odious as Donald Trump left me, like many others, grasping for explanations.

2016 marked a worldwide rise in authoritarianism and illiberalism, and it’s hard to pin down to a single cause. (For a full review, The Retreat of Western Liberalism comes closest.) A lot of ink has already been spilled on the subject, but what I feel most qualified to talk about is the role technology played in the election. It wasn’t a starring role, but if 2016 were a theater play, technology might have fit as the mischievous trickster, sowing confusion and distrust and ultimately helping the baddies on their way.

Of course, what I’m talking about is social media. Twitter is well-known as Donald Trump’s favorite megaphone, and Facebook, for its part, also helped to spread rumors, lies, and propaganda that not only boosted extremists and conspiracy theorists, but also led to a general erosion of public trust in our media and institutions. We no longer have a common set of facts we can all agree upon; we only have bombast and hearsay, shouted across the political divide. Integrity and truth no longer matter; only who can get the sharpest zingers and the juiciest headlines that deliver clicks and eyeballs.

None of this should be surprising given what both Twitter and Facebook are. Fundamentally, they are marketing tools designed to promote the most virulent memes (in Dawkins’ sense of the word, as a catchy idea that travels well) and thus maximize “engagement” and “sharing.” Their algorithms act as a force of natural selection on a candidate population of quips, one-liners, clickbait, and repartees to select only those best-suited to tap into our basest human emotions, so that they get shared and reposted and thus infect the maximum number of brains possible.

Facebook and Twitter are platforms where subtlety and nuance get lost in a sea of paranoid accusations, wild hyperbole, vicious put-downs, and smug preaching-to-the-choir. They are platforms where the sensationalist headline gets thousands of retweets, and the sheepish retraction (if it exists) gets less than a hundred. (Guess which one people remember.) In short, they are platforms that are tailor-made for marketing, and thus for its public-sector twin, propaganda.

With social media, it’s as if the Axis powers no longer had to distribute leaflets via airplane, but could instead just whip up some catchy headlines like “You won’t believe why the Allies can’t win the war” and “11 reasons the resistance movement is a flop.” Toss enough of those into social media – it doesn’t even matter which ones are true or false, and you can tweak the headlines as you go – and you’ve got a recipe for a public discourse that sounds less like that of an informed democracy and more like the swirling cacophony of everybody’s id talking at the same time.

Screenshot from "Dunkirk" showing soldier holding leaflet saying "we surround you"

Credit: Dunkirk

I work for a browser vendor, and I’ve even contributed code to a JavaScript library that appears on Twitter’s frontend (LocalForage, used on Twitter’s mobile site). So in 2016 I came to feel implicitly responsible for the building of a technical infrastructure that, at this moment, looks to me like the grotesque monster at the end of Akira slouching its way towards Bethlehem to devour what’s left of the civilized world. It’s not exactly a vision that makes me feel chipper when I’m getting ready for work in the morning.

Twitter fatigue

More than anything, 2016 was the year when I started to question my Twitter addiction, and to resolve to use it less and less. Today I don’t even have the Twitter app installed, and I barely visit the mobile site either. Most of my idle thoughts go to Mastodon instead (which is a subject for another post).

In 2016, the tenor of Twitter seemed to me to change dramatically for the worse. Suddenly, everything became politicized. Somehow the newfound eagerness to howl bloody murder at one’s political opponents translated into an eagerness to howl at everyone about everything. Increasingly I felt I was watching people break off into rival factions and hurl insults at each other.

Perhaps I felt this most directly when I was attacked for a slide in a presentation I gave at Fronteers Conf saying “In 2016, it’s okay to build a website that doesn’t work without JavaScript.” I already covered that surreal experience in this post, but essentially a photo of me made the rounds on social media, and without even knowing who was pictured or what the context of the talk was, people decided I needed to be taken down a peg. And so I watched my Twitter feed devolve into a nasty pile-on of insults, snark, and condemnation, all because I had expressed a controversial opinion on (wait for it) a programming language.

I have nothing against Fronteers Conf, and I appreciate that many of the conference organizers and attendees defended me on Twitter when the discussion started to take a turn for the ugly. But honestly, this incident is one of the main reasons I stopped speaking at conferences.

At tech conferences, you’re encouraged to “interact” via Twitter, which is a platform not known for valuing nuance or context. So instead of my 40-minute talk being about, you know, the talk, its defining moment was a bite-sized nugget blasted out on social media, as well as the ensuing uproar. (I noticed that most of the audience had their eyes glued to their phones for the last 20 minutes after that slide, so certainly what I said in those moments didn’t matter.)

Essentially, everyone on Twitter decided that it was time for the Two Minutes Hate, and mine was the face that needed to be snarled at. I got a small taste of the Justine Sacco experience, and frankly it felt awful. Moreover, it didn’t feel like a good trade-off for the hours poured into preparing a talk, practicing it, and flying out to the conference venue.

This experience has also made me ambivalent about all the other moral crusades that seem to be carried out regularly on Twitter these days. Did the target really deserve it? Or was it just a clumsy gaffe brought about by jetlag, a half-hearted joke that landed badly, or a poorly-worded slide (mea culpa), and so maybe we should cut them some slack?

Or, even if they deserved it, is all the wailing and pearl-clutching really good for the rest of us? It certainly earns us likes and retweets, which can feel like virtue in the heat of the moment. But the more I see these dog-piles, the more I get the sense that what we’re feeling isn’t the noble élan of the virtuous, but instead the childish glee of having the unpopular kid on the playground lying crumpled at your feet, with the crowd at your back. (Or the relief of being in the crowd, and not on the ground getting dirt kicked into your face.)

Breaking up with Twitter is hard to do

In the tech industry, though, quitting Twitter is no easy feat. Especially if you work in developer relations or evangelism, Twitter might be a fundamental part of your job description: promoting your company’s content, answering questions, “engaging the community,” etc. Your follower count, especially if it numbers in the thousands or tens of thousands, may also be seen as a job asset that’s hard to give up.

Twitter itself also serves as a personal identifier in an industry where schmoozing and networking can be critical career boosters. At developer conferences, speakers often introduce themselves using their Twitter handle (which might be discreetly emblazoned on every slide) and then end the talk by imploring their audience to follow them on Twitter. It’s not even necessary to say “Twitter” or use the Twitter logo; if you put an @-sign in front of it, there’s no confusion about which social network you’re referring to.

In blog posts, it’s also common to link to someone’s Twitter handle when you mention them by name. Forget your personal blog or myname.com; twitter.com/myname acts as your unique identifier, the preferred way of discovering your work and contacting you, and a quick way to judge your “influencer” status by your number of followers. Think of it as your personal baseball card, complete with your photo, team (company) affiliation, and key career stats.

Twitter is so inescapably embedded in the tech industry, especially in my corner of it (JavaScript/web/whatever you want to call it) that it can be maddening if you’ve resolved to quit it. For better or worse, it’s where news hits first, it’s where software releases and blog posts are announced, and it’s how you keep up with friends and colleagues in the industry.

The irony is not lost on me that, if I even want people to read this blog post, I’m going to need to post it to Twitter. Such is the degree of Twitter’s dominance, at least in the crowd I run with. To some extent, if you’re not on Twitter, you’re just not part of the conversation.

Alternatives do exist. For a hot moment last April, I was hoping that Mastodon (ad-free, open-source, no algorithmic timeline) might build up enough momentum to overtake Twitter, but unfortunately that wave seems to have crested. That said, I’m still betting on it in the form of running my own Mastodon server, despite having few illusions that it can eclipse Twitter anytime soon. But again, that’s a subject for another post.

One thing my personal exodus toward Mastodon did teach me, though, is that I can live without social media. Not only am I spending more time on Mastodon than on Twitter these days, but I’m spending less time on social media in general, to which I credit both sleeping better and feeling less anxious and distracted all the time. No more staying up late to read about the scandal du jour or hearing hot takes on the latest political calamity that, this time, will surely bring about the end of the world. No more watching people tear each other down over petty grievances, or hearing the sanctimonious echo-chamber sermons of those utterly convinced of their own righteousness.

With less time devoted to social media, I’m also enjoying more non-tech activities to help heal my burnout: spending time with my wife, visiting friends and family, getting back into cycling, reading books, playing guitar. I’m hoping that eventually I’ll feel well enough to hop back onto the GitHub treadmill, although it’s tough because it’s only when I stepped off that I realized how fast the dang thing was going.

Advice for Twitter expats

What are the good alternatives to Twitter? Besides Mastodon (which is really more of a friendly chat room these days), I’ve found you can get decent tech news from:

  • Hacker News (Yeah yeah, just don’t read the comments. Or read n-gate for a satirical take.)
  • Lobsters (Like HN, but more technical.)
  • EchoJS (A bit spammy, but occasionally there are gems.)
  • The webdev and javascript subreddits (Again, the comments don’t come recommended.)

I don’t find these sites to be a full replacement for Twitter, but maybe that’s a good thing. The less time I spend on Twitter, the less I expose my mind to toxic brain-viruses, the less I feel goaded by some titillating comment to respond with my own mental detritus (thereby stimulating others to do the same), and the less I contribute to the propaganda machine that is mayyybe bringing about the fall of civilization. All of which is pure goodness to a burned-out techie like me.

A brief and incomplete history of JavaScript bundlers

Ever since I read Malte Ubl’s proposal for a JavaScript bundle syntax, I’ve been fascinated by the question: does JavaScript need a “bundle” standard?

Unfortunately that question will have to wait for another post, because it’s much more complicated than what I can cover here. But to at least make the first tentative stabs at answering it, I’d like to explore some more basic questions:

  • What is a JavaScript bundler?
  • What purpose do bundlers serve in the modern webdev stack?

To try to answer these questions, I’d like to offer my historical perspective on what are (arguably) the two most important bundlers of the last five years: Browserify and Webpack.

A bundle of bamboo, via Wikipedia

What is a bundle?

Conceptually, a JavaScript bundle is very simple: it’s a collection of multiple scripts, combined into a single file. The original bundler was called +=, i.e. concatenation, and for a long time it was all anyone really needed. The whole point was to avoid the 6-connections-per-origin limit and the built-in overhead of HTTP/1.1 connections by simply jamming all your JavaScript into a single file. Easy-peasy.
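
To make that concrete, here’s a rough sketch of what the “+=” approach amounts to (the file names and contents are made up for illustration):

// a.js
var greeting = 'hello';

// b.js
console.log(greeting + ' world');

// bundle.js: literally a.js followed by b.js. Everything shares one global
// scope, so the order of concatenation matters.
var greeting = 'hello';
console.log(greeting + ' world');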

Disregarding some interesting but ultimately niche bundlers such as GWT, RequireJS, and Closure Compiler, concatenation was still the most common bundler until very recently. Even fairly modern scaffolding tools like Yeoman were still recommending concatenation as the default bundler well into 2013, using lightweight tools such as usemin.

It was only really when Browserify hit the scene in 2013 that non-concatenation bundlers started to go mainstream.

The rise of Browserify

Interestingly, Browserify wasn’t originally designed to solve the problem of bundling. Instead, it was designed to solve the problem of Node developers who wanted to reuse their code in the browser. (It’s right there in the name: “browser-ify” your Node code!)

Screenshot of the original Browserify homepage from January 2013 (via the Internet Archive)

Before Browserify, if you were writing a JavaScript module that was designed to work in both Node and the browser, you’d have to do something like this:

var MyModule = 'hello world';

if (typeof module !== 'undefined' && module.exports) {
  module.exports = MyModule;
} else {
  (typeof self !== 'undefined' ? self : window).MyModule = MyModule;
}

This works fine for single files, but if you’re accustomed to Node conventions, it becomes aggravating that you can’t do something like this:

var otherModule = require('./otherModule');

Or even:

var otherPackage = require('other-package');

By 2014, npm had already grown to over 50,000 modules, so the idea of reusing those modules within browser code was a compelling proposition. The problem Browserify solved was thus twofold:

  1. Make the CommonJS module system work for the browser (by crawling the dependency tree, reading files, and building a single bundle file).
  2. Make Node built-ins and conventions (process, Buffer, crypto, etc.) work in the browser, by implementing polyfills and shims for them.

This second point is an often-overlooked benefit of the cowpath that Browserify paved. At the time Browserify debuted, many of those 50,000 modules weren’t written with any consideration for how they might run in the browser, and Node-isms like process.nextTick() and setImmediate() ran rampant. For Browserify to “just work,” it had to solve the compatibility problem.

What this involved was a lot of effort to reimplement nearly all of Node’s standard library for the browser, tackling the inevitable issues of cross-browser compatibility along the way. This resulted in some extremely battle-tested libraries such as events, process, buffer, inherits, and crypto, among others.
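
To give a flavor of what these shims look like, here’s a heavily simplified sketch of a browser-side process.nextTick() (illustrative only; the real process package is far more battle-tested and prefers faster scheduling mechanisms than setTimeout):

// Simplified, illustrative process.nextTick() shim for the browser.
// Not the actual code from the process package.
var queue = [];
var scheduled = false;

function drainQueue() {
  scheduled = false;
  var callbacks = queue;
  queue = [];
  for (var i = 0; i < callbacks.length; i++) {
    callbacks[i]();
  }
}

exports.nextTick = function (callback) {
  queue.push(callback);
  if (!scheduled) {
    scheduled = true;
    setTimeout(drainQueue, 0);
  }
};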

If you want to understand the ridiculous amount of work that had to go into building all this infrastructure, I recommend taking a look at Calvin Metcalf’s series on implementing crypto for the browser. Or, if you’re too faint of heart, you can instead read about how he helped fix process.nextTick() to work with Sinon or avoid bugs in oldIE’s timer system. (Calvin is truly one of the unsung heroes of JavaScript. Look in your bundle, and odds are you will find his code in there somewhere!)

All of these libraries – buffer, crypto, process, etc. – are still in wide use today via Browserify, as well as other bundlers like Webpack and Rollup. They are the magic behind why new Buffer() and process.nextTick() “just work,” and are a big part of Browserify’s success story.

Enter Webpack

While Browserify was picking up steam, and more and more browser-ready modules were starting to be published to npm, Webpack rose to prominence in 2015, buoyed by the popularity of React and the endorsement of Pete Hunt.

Webpack and Browserify are often seen today as solutions to the same problem, but Webpack’s initial focus was a bit different from Browserify’s. Whereas Browserify’s goal was to make Node modules run in the browser, Webpack’s goal was to create a dependency graph for all of the assets in a website – not just JavaScript, but also CSS, images, SVGs, and even HTML.

The Webpack view of the world, with multiple types of assets all treated as part of the dependency graph

The Webpack view of the world, via “What is Webpack?”

In contrast to Browserify, which was almost dogmatic in its insistence on Node compatibility, Webpack was happy to break Node conventions and introduce code like this:

require('./styles.css');

Or even:

var svg = require('svg-url?limit=1024!./file.svg');

Webpack did this for a few different reasons:

  1. Once all of a website’s assets can be expressed as a dependency graph, it becomes easy to define “components” (collections of HTML, CSS, JavaScript, images, etc.) as standalone modules, which can be easily reused and even published to npm.
  2. Using a JavaScript-based module system for assets means that Hot Module Replacement is easy and natural, e.g. a stylesheet can automatically update itself by being injected into the DOM and replaced via script.
  3. Ultimately, all of this is configurable using loaders, meaning you can get the benefits of an integrated module system without having to ship a gigantic JavaScript bundle to your users. (Although how well this works in practice is debatable.)

Because Browserify was originally the only game in town, though, Webpack had to undergo its own series of compatibility fixes, so that existing Browserify-targeting modules could work well with Webpack. This wasn’t always easy, as a JavaScript package maintainer of the time might have told you.

Out of this push for greater Webpack-Browserify compatibility grew ad-hoc standards like the node-browser-resolve algorithm, which defines what the "browser" field in package.json is supposed to do. (This field is an extension of npm’s own package.json definition, which specifies how modules should be swapped out when building in “browser mode” vs “Node mode.”)

Closing thoughts

Today, Browserify and Webpack have largely converged in functionality, although Browserify still tends to be preferred by old-school Node developers, whereas Webpack is the tool of choice for frontend web developers. Up-and-comers such as Rollup, splittable, and fuse-box (among many others) are also making the frontend bundler landscape increasingly diverse and interesting.

So that’s my view of the great bundler wars of 2013-2017! Hopefully in a future blog post I’ll be able to cover whether or not bundlers like Browserify and Webpack demonstrate the need for a “standard” to unite them all.

Feel free to weigh in on Twitter or on Mastodon.

How to write a JavaScript package for both Node and the browser

This is an issue that I’ve seen a lot of confusion over, and even seasoned JavaScript developers might have missed some of its subtleties. So I thought it was worth a short tutorial.

Let’s say you have a JavaScript module that you want to publish to npm, available both for Node and for the browser. But there’s a catch! This particular module has a slightly different implementation for the Node version compared to the browser version.

This situation comes up fairly frequently, since there are lots of tiny environment differences between Node and the browser. And it can be tricky to implement correctly, especially if you’re trying to optimize for the smallest possible browser bundle.

Let’s build a JS package

So let’s write a mini JavaScript package, called base64-encode-string. All it does is take a string as input, and it outputs the base64-encoded version.

For the browser, this is easy; we can just use the built-in btoa function:

module.exports = function (string) {
  return btoa(string);
};

In Node, though, there is no btoa function. So we’ll have to create a Buffer instead, and then call buffer.toString() on it:

module.exports = function (string) {
  return Buffer.from(string, 'binary').toString('base64');
};

Both of these should provide the correct base64-encoded version of a string. For instance:

var b64encode = require('base64-encode-string');
b64encode('foo');    // Zm9v
b64encode('foobar'); // Zm9vYmFy

Now we’ll just need some way to detect whether we’re running in the browser or in Node, so we can be sure to use the right version. Both Browserify and Webpack define a process.browser field that is set to true, whereas in Node it’s falsy. So we can simply do:

if (process.browser) {
  module.exports = function (string) {
    return btoa(string);
  };
} else {
  module.exports = function (string) {
    return Buffer.from(string, 'binary').toString('base64');
  };
}

Now we just name our file index.js, type npm publish, and we’re done, right? Well, this works, but unfortunately there’s a big performance problem with this implementation.

Since our index.js file contains references to the Node built-in process and Buffer modules, both Browserify and Webpack will automatically include the polyfills for those entire modules in the bundle.

From this simple 9-line module, I calculated that Browserify and Webpack will create a bundle weighing 24.7KB minified (7.6KB min+gz). That’s a lot of bytes for something that, in the browser, only needs to be expressed with btoa!

“browser” field, how I love thee

If you search through the Browserify or Webpack documentation for tips on how to solve this problem, you may eventually discover node-browser-resolve. This is a specification for a "browser" field inside of package.json, which can be used to define modules that should be swapped out when building for the browser.

Using this technique, we can add the following to our package.json:

{
  /* ... */
  "browser": {
    "./index.js": "./browser.js"
  }
}

And then separate the functions into two different files, index.js and browser.js:

// index.js
module.exports = function (string) {
  return Buffer.from(string, 'binary').toString('base64');
};
// browser.js
module.exports = function (string) {
  return btoa(string);
};

After this fix, Browserify and Webpack provide much more reasonable bundles: Browserify’s is 511 bytes minified (315 min+gz), and Webpack’s is 550 bytes minified (297 min+gz).

When we publish our package to npm, anyone running require('base64-encode-string') in Node will get the Node version, and anyone doing the same thing with Browserify or Webpack will get the browser version. Success!

For Rollup, it’s a bit more complicated, but not too much extra work. Rollup users will need to use rollup-plugin-node-resolve and set browser to true in the options.

For jspm there is unfortunately no support for the “browser” field, but jspm users can get around it in this case by doing require('base64-encode-string/browser') or jspm install npm:base64-encode-string -o "{main:'browser.js'}". Alternatively, the package author can specify a “jspm” field in their package.json.

Advanced techniques

The direct "browser" method works well, but for larger projects I find that it introduces an awkward coupling between package.json and the codebase. For instance, our package.json could quickly end up looking like this:

{
  /* ... */
  "browser": {
    "./index.js": "./browser.js",
    "./widget.js": "./widget-browser.js",
    "./doodad.js": "./doodad-browser.js",
    /* etc. */
  }
}

So every time you want a browser-specific module, you’d have to create two separate files, and then remember to add an extra line to the "browser" field linking them together. And be careful not to misspell anything!

Also, you may find yourself extracting individual bits of code into separate modules, merely because you wanted to avoid an if (process.browser) {} check. When these *-browser.js files accumulate, they can start to make the codebase a lot harder to navigate.

If this situation gets too unwieldy, there are a few different solutions. My personal favorite is to use Rollup as a build tool, to automatically split a single codebase into separate index.js and browser.js files. This has the added benefit of de-modularizing the code you ship to consumers, saving bytes and time.

To do so, install rollup and rollup-plugin-replace, then define a rollup.config.js file:

import replace from 'rollup-plugin-replace';
export default {
  entry: 'src/index.js',
  format: 'cjs',
  plugins: [
    // statically replace process.browser with true or false at bundle time
    replace({ 'process.browser': !!process.env.BROWSER })
  ]
};

(We’ll use that process.env.BROWSER as a handy way to switch between browser builds and Node builds.)

Next, we can create a src/index.js file with a single function using a normal process.browser condition:

export default function base64Encode(string) {
  if (process.browser) {
    return btoa(string);
  } else {
    return Buffer.from(string, 'binary').toString('base64');
  }
}

Then add a prepublish step to package.json to generate the files:

{
  /* ... */
  "scripts": {
    "prepublish": "rollup -c > index.js && BROWSER=true rollup -c > browser.js"
  }
}

The generated files are fairly straightforward and readable:

// index.js
'use strict';

function base64Encode(string) {
  {
    return Buffer.from(string, 'binary').toString('base64');
  }
}

module.exports = base64Encode;
// browser.js
'use strict';

function base64Encode(string) {
  {
    return btoa(string);
  }
}

module.exports = base64Encode;

You’ll notice that Rollup automatically converts process.browser to true or false as necessary, then shakes out the unused code. So no references to process or Buffer will end up in the browser bundle.

Using this technique, you can have any number of process.browser switches in your codebase, but the published result is two small, focused index.js and browser.js files, with only the Node-related code for Node, and only the browser-related code for the browser.

As an added bonus, you can configure Rollup to also generate ES module builds, IIFE builds, or UMD builds. For an example of a simple library with multiple Rollup build targets, you can check out my project marky.
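
For instance, sticking with the same older-style Rollup config format used above (option names have shifted in newer Rollup releases, so treat this as a sketch), a UMD build might be configured roughly like this; the rollup.umd.config.js file name and dist/ path are hypothetical:

// rollup.umd.config.js (hypothetical file name, older-style Rollup options)
import replace from 'rollup-plugin-replace';

export default {
  entry: 'src/index.js',
  dest: 'dist/base64-encode-string.umd.js',  // hypothetical output path
  format: 'umd',                             // consumable via <script>, AMD, or CommonJS
  moduleName: 'base64EncodeString',          // global name for <script> consumers
  plugins: [
    // the UMD build targets the browser, so hard-code process.browser to true
    replace({ 'process.browser': true })
  ]
};

You could then invoke it with rollup -c rollup.umd.config.js alongside the prepublish builds shown earlier.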

The actual project described in this post (base64-encode-string) has also been published to npm so that you can inspect it and see how it ticks. The source code is available on GitHub.

The cost of small modules

Update (30 Oct 2016): since I wrote this post, a bug was found in the benchmark which caused Rollup to appear slightly better than it would otherwise. However, the overall results are not substantially different (Rollup still beats Browserify and Webpack, although it’s not quite as good as Closure anymore), so I’ve merely updated the charts and tables. Additionally, the benchmark now includes the RequireJS and RequireJS Almond bundlers, so those have been added as well. To see the original blog post without these edits, check out this archived version.

About a year ago I was refactoring a large JavaScript codebase into smaller modules, when I discovered a depressing fact about Browserify and Webpack:

“The more I modularize my code, the bigger it gets. 😕”
Nolan Lawson

Later on, Sam Saccone published some excellent research on Tumblr and Imgur’s page load performance, in which he noted:

“Over 400ms is being spent simply walking the Browserify tree.”
Sam Saccone

In this post, I’d like to demonstrate that small modules can have a surprisingly high performance cost depending on your choice of bundler and module system. Furthermore, I’ll explain why this applies not only to the modules in your own codebase, but also to the modules within dependencies, which is a rarely-discussed aspect of the cost of third-party code.

Web perf 101

The more JavaScript included on a page, the slower that page tends to be. Large JavaScript bundles cause the browser to spend more time downloading, parsing, and executing the script, all of which lead to slower load times.

Even when breaking up the code into multiple bundles – Webpack code splitting, Browserify factor bundles, etc. – the cost is merely delayed until later in the page lifecycle. Sooner or later, the JavaScript piper must be paid.

Furthermore, because JavaScript is a dynamic language, and because the prevailing CommonJS module system is also dynamic, it’s fiendishly difficult to extract unused code from the final payload that gets shipped to users. You might only need jQuery’s $.ajax, but by including jQuery, you pay the cost of the entire library.

The JavaScript community has responded to this problem by advocating the use of small modules. Small modules have a lot of aesthetic and practical benefits – easier to maintain, easier to comprehend, easier to plug together – but they also solve the jQuery problem by promoting the inclusion of small bits of functionality rather than big “kitchen sink” libraries.

So in the “small modules” world, instead of doing:

var _ = require('lodash')
_.uniq([1,2,2,3])

You might do:

var uniq = require('lodash.uniq')
uniq([1,2,2,3])

Rich Harris has already articulated why the “small modules” pattern is inherently beginner-unfriendly, even though it tends to make life easier for library maintainers. However, there’s also a hidden performance cost to small modules that I don’t think has been adequately explored.

Packages vs modules

It’s important to note that, when I say “modules,” I’m not talking about “packages” in the npm sense. When you install a package from npm, it might only expose a single module in its public API, but under the hood it could actually be a conglomeration of many modules.

For instance, consider a package like is-array. It has no dependencies and only contains one JavaScript file, so it has one module. Simple enough.

Now consider a slightly more complex package like once, which has exactly one dependency: wrappy. Both packages contain one module, so the total module count is 2. So far, so good.

Now let’s consider a more deceptive example: qs. Since it has zero dependencies, you might assume it only has one module. But in fact, it has four!

You can confirm this by using a tool I wrote called browserify-count-modules, which simply counts the total number of modules in a Browserify bundle:

$ npm install qs
$ browserify node_modules/qs | browserify-count-modules
4

What’s going on here? Well, if you look at the source for qs, you’ll see that it contains four JavaScript files, representing four JavaScript modules which are ultimately included in the Browserify bundle.

This means that a given package can actually contain one or more modules. These modules can also depend on other packages, which might bring in their own packages and modules. The only thing you can be sure of is that each package contains at least one module.

Module bloat

How many modules are in a typical web application? Well, I ran browserify-count-modules on a few popular Browserify-using sites, and the counts typically ranged from a few hundred modules up to over a thousand (Reddit’s mobile site, for instance, clocks in at 1050).

For the record, my own Pokedex.org (the largest open-source site I’ve built) contains 311 modules across four bundle files.

Ignoring for a moment the raw size of those JavaScript bundles, I think it’s interesting to explore the cost of the number of modules themselves. Sam Saccone has already blown this story wide open in “The cost of transpiling es2015 in 2016”, but I don’t think his findings have gotten nearly enough press, so let’s dig a little deeper.

Benchmark time!

I put together a small benchmark that constructs a JavaScript module importing 100, 1000, and 5000 other modules, each of which merely exports a number. The parent module just sums the numbers together and logs the result:

// index.js
var total = 0
total += require('./module_0')
total += require('./module_1')
total += require('./module_2')
// etc.
console.log(total)
// module_0.js
module.exports = 0
// module_1.js
module.exports = 1

(And so on.)
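
(For reference, a module tree like this is easy to generate with a throwaway Node script along the following lines. This is a sketch of how the test case was built, not necessarily the exact script from the benchmark repo.)

// generate.js: writes module_0.js through module_999.js, plus an index.js
// that requires and sums them all (sketch; adjust NUM_MODULES as needed)
var fs = require('fs')

var NUM_MODULES = 1000
var index = 'var total = 0\n'

for (var i = 0; i < NUM_MODULES; i++) {
  fs.writeFileSync('module_' + i + '.js', 'module.exports = ' + i + '\n')
  index += "total += require('./module_" + i + "')\n"
}

index += 'console.log(total)\n'
fs.writeFileSync('index.js', index)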

I tested five bundling methods: Browserify, Browserify with the bundle-collapser plugin, Webpack, Rollup, and Closure Compiler. For Rollup and Closure Compiler I used ES6 modules, whereas for Browserify and Webpack I used CommonJS, so as not to unfairly disadvantage them (since they would need a transpiler like Babel, which adds its own overhead).

In order to best simulate a production environment, I used Uglify with the --mangle and --compress settings for all bundles, and served them gzipped over HTTPS using GitHub Pages. For each bundle, I downloaded and executed it 15 times and took the median, noting the (uncached) load time and execution time using performance.now().
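
In other words, the measurement boiled down to something like the following sketch (not the exact harness; the __bundleStart and __bundleEnd globals are hypothetical marks that the instrumented bundle would set on its first and last lines via performance.now()):

// Simplified sketch of the timing harness (not the exact benchmark code).
// Assumes the bundle under test sets window.__bundleStart on its first line
// and window.__bundleEnd on its last line, both via performance.now().
var scriptAdded = performance.now()

var script = document.createElement('script')
script.src = 'bundle.js' // hypothetical bundle URL

script.onload = function () {
  console.log('load time:', window.__bundleStart - scriptAdded, 'ms')
  console.log('execution time:', window.__bundleEnd - window.__bundleStart, 'ms')
}

document.body.appendChild(script)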

Bundle sizes

Before we get into the benchmark results, it’s worth taking a look at the bundle files themselves. Here are the byte sizes (minified but ungzipped) for each bundle (chart view):

bundler               100 modules   1000 modules   5000 modules
browserify                   7982          79987         419985
browserify-collapsed         5786          57991         309982
webpack                      3955          39057         203054
rollup                       1265          13865          81851
closure                       758           7958          43955
rjs                         29234         136338         628347
rjs-almond                  14509         121612         613622

And the minified+gzipped sizes (chart view):

bundler               100 modules   1000 modules   5000 modules
browserify                   1650          13779          63554
browserify-collapsed         1464          11837          55536
webpack                       688           4850          24635
rollup                        629           4604          22389
closure                       302           2140          11807
rjs                          7940          19017          62674
rjs-almond                   2732          13187          56135

What stands out is that the Browserify and Webpack versions are much larger than the Rollup and Closure Compiler versions (update: especially before gzipping, which still matters since that’s what the browser executes). If you take a look at the code inside the bundles, it becomes clear why.

The way Browserify and Webpack work is by isolating each module into its own function scope, and then declaring a top-level runtime loader that locates the proper module whenever require() is called. Here’s what our Browserify bundle looks like:

(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o<r.length;o++)s(r[o]);return s})({1:[function(require,module,exports){
module.exports = 0
},{}],2:[function(require,module,exports){
module.exports = 1
},{}],3:[function(require,module,exports){
module.exports = 10
},{}],4:[function(require,module,exports){
module.exports = 100
// etc.

Whereas the Rollup and Closure bundles look more like what you might hand-author if you were just writing one big module. Here’s Rollup:

(function () {
        'use strict';
        var module_0 = 0
        var module_1 = 1
        // ...
        total += module_0
        total += module_1
        // etc.

The important thing to notice is that every module in Webpack and Browserify gets its own function scope, and is loaded at runtime when require()d from the main script. Rollup and Closure Compiler, on the other hand, just hoist everything into a single function scope (creating variables and namespacing them as necessary).

If you understand the inherent cost of functions-within-functions in JavaScript, and of looking up a value in an associative array, then you’ll be in a good position to understand the following benchmark results.

Results

Update: as noted above, results have been re-run with corrections and the addition of the r.js and r.js Almond bundlers. For the full tabular data, see this gist.

I ran this benchmark on a Nexus 5 with Android 5.1.1 and Chrome 52 (to represent a low- to mid-range device) as well as an iPod Touch 6th generation running iOS 9 (to represent a high-end device).

Here are the results for the Nexus 5:

Nexus 5 results

And here are the results for the iPod Touch:

iPod Touch results

At 100 modules, the variance between all the bundlers is pretty negligible, but once we get up to 1000 or 5000 modules, the difference becomes severe. The iPod Touch is hurt the least by the choice of bundler, but the Nexus 5, being an aging Android phone, suffers a lot under Browserify and Webpack.

I also find it interesting that both Rollup and Closure’s execution cost is essentially free for the iPod, regardless of the number of modules. And in the case of the Nexus 5, the runtime costs aren’t free, but they’re still much cheaper for Rollup/Closure than for Browserify/Webpack, the latter of which chew up the main thread for several frames if not hundreds of milliseconds, meaning that the UI is frozen just waiting for the module loader to finish running.

Note that both of these tests were run on a fast Gigabit connection, so in terms of network costs, it’s really a best-case scenario. Using the Chrome Dev Tools, we can manually throttle that Nexus 5 down to 3G and see the impact:

Nexus 5 3G results

Once we take slow networks into account, the difference between Browserify/Webpack and Rollup/Closure is even more stark. In the case of 1000 modules (which is close to Reddit’s count of 1050), Browserify takes about 400 milliseconds longer than Rollup. And that 400ms is no small potatoes, since Google and Bing have both noted that sub-second delays have an appreciable impact on user engagement.

One thing to note is that this benchmark doesn’t measure the precise execution cost of 100, 1000, or 5000 modules per se, since that will depend on your usage of require(). Inside of these bundles, I’m calling require() once per module, but if you are calling require() multiple times per module (which is the norm in most codebases) or if you are calling require() multiple times on-the-fly (i.e. require() within a sub-function), then you could see severe performance degradations.

Reddit’s mobile site is a good example of this. Even though they have 1050 modules, I clocked their real-world Browserify execution time as much worse than the “1000 modules” benchmark. When profiling on that same Nexus 5 running Chrome, I measured 2.14 seconds for Reddit’s Browserify require() function, and 197 milliseconds for the equivalent function in the “1000 modules” script. (In desktop Chrome on an i7 Surface Book, I also measured it at 559ms vs 37ms, which is pretty astonishing given we’re talking desktop.)

This suggests that it may be worthwhile to run the benchmark again with multiple require()s per module, although in my opinion it wouldn’t be a fair fight for Browserify/Webpack, since Rollup/Closure both resolve duplicate ES6 imports into a single hoisted variable declaration, and it’s also impossible to import from anywhere but the top-level scope. So in essence, the cost of a single import for Rollup/Closure is the same as the cost of n imports, whereas for Browserify/Webpack, the execution cost will increase linearly with n require()s.
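
To make the contrast concrete, here’s a constructed example (not taken from the benchmark itself) of the same dependency being pulled in twice under each module system:

// es6-version.js (bundled with Rollup or Closure): duplicate imports of
// './foo' collapse into a single hoisted variable, so n imports cost
// roughly the same as one.
import foo from './foo'
export default function () {
  return foo + foo
}

// commonjs-version.js (bundled with Browserify or Webpack): every require()
// is a real function call into the runtime module loader, so repeated calls
// (especially inside sub-functions) add up.
var foo = require('./foo')
module.exports = function () {
  var fooAgain = require('./foo') // another runtime lookup
  return foo + fooAgain
}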

For the purposes of this analysis, though, I think it’s best to just assume that the number of modules is only a lower bound for the performance hit you might feel. In reality, the “5000 modules” benchmark may be a better yardstick for “5000 require() calls.”

Conclusions

First off, the bundle-collapser plugin seems to be a valuable addition to Browserify. If you’re not using it in production, then your bundle will be a bit larger and slower than it would be otherwise (although I must admit the difference is slight). Alternatively, you could switch to Webpack and get an even faster bundle without any extra configuration. (Note that it pains me to say this, since I’m a diehard Browserify fanboy.)

However, these results clearly show that Webpack and Browserify both underperform compared to Rollup and Closure Compiler, and that the gap widens the more modules you add. Unfortunately I’m not sure Webpack 2 will solve any of these problems, because although they’ll be borrowing some ideas from Rollup, they seem to be more focused on the tree-shaking aspects and not the scope-hoisting aspects. (Update: a better name is “inlining,” and the Webpack team is working on it.)

Given these results, I’m surprised Closure Compiler and Rollup aren’t getting much traction in the JavaScript community. I’m guessing it’s due to the fact that (in the case of the former) it has a Java dependency, and (in the case of the latter) it’s still fairly immature and doesn’t quite work out-of-the-box yet (see Calvin Metcalf’s comments for a good summary).

Even without the average JavaScript developer jumping on the Rollup/Closure bandwagon, though, I think npm package authors are already in a good position to help solve this problem. If you npm install lodash, you’ll notice that the main export is one giant JavaScript module, rather than what you might expect given Lodash’s hyper-modular nature (require('lodash/uniq'), require('lodash.uniq'), etc.). For PouchDB, we made a similar decision to use Rollup as a prepublish step, which produces the smallest possible bundle in a way that’s invisible to users.

I also created rollupify to try to make this pattern a bit easier to just drop-in to existing Browserify projects. The basic idea is to use imports and exports within your own project (cjs-to-es6 can help migrate), and then use require() for third-party packages. That way, you still have all the benefits of modularity within your own codebase, while exposing more-or-less one big module to your users. Unfortunately, you still pay the costs for third-party modules, but I’ve found that this is a good compromise given the current state of the npm ecosystem.

So there you have it: one horse-sized JavaScript duck is faster than a hundred duck-sized JavaScript horses. Despite this fact, though, I hope that our community will eventually realize the pickle we’re in – advocating for a “small modules” philosophy that’s good for developers but bad for users – and improve our tools, so that we can have the best of both worlds.

Bonus round! Three desktop browsers

Normally I like to run performance tests on mobile devices, since that’s where you see the clearest differences. But out of curiosity, I also ran this benchmark on Chrome 54, Edge 14, and Firefox 48 on an i5 Surface Book using Windows 10 RS1. Here are the results:

Chrome 54

Chrome 54 Surfacebook results

Edge 14 (tabular results)

Edge 14 Surfacebook results

Firefox 48 (tabular results)

Firefox 48 Surfacebook results

The only interesting tidbits I’ll call out in these results are:

  1. bundle-collapser is definitely not a slam-dunk in all cases.
  2. The ratio of network-to-execution time is always extremely high for Rollup and Closure; their runtime costs are basically zilch. ChakraCore and SpiderMonkey eat them up for breakfast, and V8 is not far behind.

This latter point could be extremely important if your JavaScript is largely lazy-loaded, because if you can afford to wait on the network, then using Rollup and Closure will have the additional benefit of not clogging up the UI thread, i.e. they’ll introduce less jank than Browserify or Webpack.

Update: in response to this post, JDD has opened an issue on Webpack. There’s also one on Browserify.

Update 2: Ryan Fitzer has generously added RequireJS and RequireJS with Almond to the benchmark, both of which use AMD instead of CommonJS or ES6.

Testing shows that RequireJS has the largest bundle sizes but surprisingly its runtime costs are very close to Rollup and Closure. (See updated results above for details.)

Update 3: I wrote optimize-js, which alleviates some of the performance costs of parsing functions-within-functions.

On joining Microsoft Edge and moving to Seattle

TL;DR: I work for Microsoft now. Hit me up to tell me what bugs you about Edge – I want to hear it!

My relationship with the web has had a funny trajectory. It took me a long time to figure out what this weird nebulous thing called “the web” even was, and why it’s so remarkable.

As it turns out, I wrote a lot of Android apps long before I began to tinker with the web. Why Android? Well, I had an Android phone, and I knew Java, so it seemed like the sensible choice. I had just graduated from university in 2008, with limited programming experience, and I wanted to practice my craft with some hobby projects.

Most of the Android apps I wrote were little one-off sketches, designed to scratch a personal itch. I wrote a Japanese transliterator, a Pokédex (no, not that one), a debug logger, and many others. Looking back, they form a pretty motley portfolio.

For instance, I loved playing guitar, but my vocal range is Ringo-esque at best, so I wrote an app to transpose chord charts, shifting the key into a more comfortable range. Another app was born of a night playing board games at the pub, where my friends and I found there weren’t any good scorekeeping apps on the Play Store. So I wrote one.

Board games at the Royal Oak in Ottawa, where I used to hang. Source: Lauren Rockburn.

These apps were fun to write, and I often got positive feedback from friends, colleagues, and countless folks on the Internet. The feeling of creating something, seeing it in use by hundreds of thousands of people, and then hearing their stories about how it impacted their lives is something I can’t adequately describe.

From Android to the web

However, I always had this nagging thought in the back of my head: sure, I could write apps for Android, and that was fine because I had an Android phone, as did most of my friends. But what about people on iPhones, or Windows Phones, or desktops? Often I’d get a feature request to support some other platform, but the idea of learning Objective-C or C# was a daunting proposition.

So I started turning my attention to the web – that one platform that truly is “write once, run anywhere.” The web as a platform had always scared me: I imagined it as this big amorphous thing with vestigial junk jutting out everywhere, compared to the smooth linear path of writing an Android app.

The web, as I imagined it. Source: Katamari Damacy

However, around 2012 Android had already started to accumulate its own evolutionary baggage, as Ice Cream Sandwich added Fragments, Action Bars, and a panoply of new features that, from my perspective, only served to aggravate the fragmentation problem. It seemed like a good time to give the web a go.

So I started building web apps, often with Cordova, but sometimes just as pure web sites. And I discovered that, yes, although the web was messy, it was amazing! My friends with iPhones could use my app just as easily as my Android friends. And “installing” it was as simple as clicking a link.

The web: still messy

However, my experience with Android often led me to be frustrated and dissatisfied with the tools available on the web. Things that are easy in native apps – storing data locally, animating at 60FPS, smooth scrolling – proved to be a challenge for web apps. Sometimes the APIs were there, but they were deprecated, or half-baked, or inconsistent across browsers.

But I didn’t give up. Instead, I followed a progression that might be familiar to many folks who work with the web:

  1. Build an app, become dissatisfied that something doesn’t work cross-browser.
  2. Use a library or polyfill, become dissatisfied due to bugs or missing features.
  3. Contribute to a library or write a new one, become dissatisfied that the solution isn’t performant or elegant.
  4. File issues on browser vendors, become dissatisfied at the pace of adoption.
  5. Go work for a browser vendor. [1]

In 2016, I find myself at step #5. I love the web, I want to see it grow in new and exciting ways, and I want to be a part of that transformation. That’s why I’ve decided to join Microsoft on the Edge team. Starting next week, I’ll be a Program Manager with a focus on the Web Platform.

Going to Microsoft is a big decision, which may surprise some folks given my cred in the open-source community. So it’s worth explaining my thought process.

Why Microsoft?

I maintain a lot of open-source projects, mostly in the JavaScript and Node.js communities. As part of that crowd, I frequently interact with folks from various browser vendors: Mozilla, Google, Microsoft, Opera, even Apple. In fact, the person I collaborate with the most – PouchDB co-maintainer Dale Harvey – is a Mozillian working on Firefox.

I admire the work that all of the browser vendors are doing, and I’ve shared drinks, code, and conversation with many of them. However, when I thought about where I could go to have the biggest impact on the web, I found myself drawn to the same conclusions as Christian Heilmann, and I turned toward the browser vendor that puts the big blue “e” in “Redmond.”

To add to what Christian already said, Microsoft has come a long way since the dark days of IE6. They’ve licked their wounds, acknowledged their mistakes, and are doubling down on the web platform with a renewed zeal. They’ve open-sourced the Chakra JavaScript engine, signaling a new commitment to openness. In terms of HTML5 support, Edge is now neck-and-neck with Firefox, and at the rate it’s been improving with each release, I wouldn’t be shocked if it surpassed Chrome this year or the next.

Web standards are about more than just scoring points on HTML5Test, though. Hard work has to be done at the fringes, in order to make the web platform a truly painless experience for developers. When writing JavaScript libraries, I often find nasty little bugs in Edge (as well as other browsers) that either call for elaborate workarounds or force me to just forgo some useful feature. I’ve tried to solve a lot of these problems at the library and bugtracker levels, but I want to go deeper.

How will this affect your open-source projects?

If anything, I’m hoping this new direction will deepen my relationship with the open-source community. Rather than just filing bugs on Edge, I’ll be in a position to actually fix those bugs, or at least to vote internally for the kinds of improvements I think are important. (As always, IndexedDB is top of my list, but everyone has their own pet API.)

To be an effective browser vendor, I believe it’s important to keep an eye on what’s cooking over in Library-Framework-Polyfill Land, listen to both developers and users, and then figure out which features and bugfixes ought to be prioritized. To that end, I hope my friends in the open-source world will let me know when a bug in Edge is blocking them, or when there’s some unsupported feature that would just really be a home run for their use case.

I know browser vendors can often seem distant and aloof. But having filed many bugs on browsers in the past, I can tell you from personal experience that if you just come to them on their turf, they’re usually very receptive.

Are you going to switch to Windows?

This is a tough one for me. I was an ardent Linux user from 2007 on, until I finally relented to the programmer hive-mind and switched to a Mac in 2012. Phonewise, I’ve been an Android user since the very first one – the HTC Dream in 2008.

However, even though Microsoft doesn’t require employees to use any particular operating system, I plan on switching over to Windows. I’ll probably get a Surface Book and a Lumia 950, since both run Windows 10 and the latest version of Edge. The craftsmanship on both devices seems really great, and the recent unveiling of Bash on Windows eases the transition quite a bit.

For me, though, switching to Windows is a matter of principle rather than of convenience. My buddy Nick Hehr likes to talk a lot about empathy, and to add to the points he’s already made, I believe this is just a case of showing empathy for the people who use my software. I’m simply not going to understand the day-to-day pain points and frustrations of Edge users unless I become one myself.

Also, I’ve been inspired by Dave Rupert’s quest to go Windows, and, like him, I worry that our current Mac monoculture is driving us to a homogeneity of tools and products. During my interview at Microsoft, I saw Jacob Rossi type into his keyboard and then seamlessly flick the screen to scroll down a list. How many web developers are totally unaware that such a UI paradigm even exists, and how many consider it when coding for their Windows users? (Who still account for roughly 90% of desktop users, by the way.)

Web browsers and diversity

Furthermore, I think that using Edge is a good act of web citizenship. I’ve been a Firefox user (on both desktop and mobile!) for the past couple of years, both because I admire Mozilla as a company, and because I think it’s important to get an alternative perspective on the web.

At a previous job, my coworkers would sometimes rib me for not using the One True Browser (or at least its respectable cousin, Safari), but honestly, being a Firefox user gave me a superpower: I could immediately discover bugs in our product, usually due to improper use of nonstandard WebKit features. For instance, someone might decide to use -webkit-background-clip: text; on a gradient background, which made the text invisible on Firefox and IE. Oops! These kinds of problems are incredibly easy to miss when you live in a Blink/WebKit bubble.

This also points back to why I’m joining Microsoft in the first place. I think the web is healthiest when there is a diversity of browsers, each bringing their unique perspective to the table. Web developers who sigh and say, “Ugh, everything would be so much easier if everyone was using Chrome” would be wise to remember that people were saying the same thing back in 2001 about IE6. The web succeeds when there’s competition, and it stagnates without it.

Now to be sure, Chrome is an excellent browser, and Google is taking the web in some exciting new directions. In particular, I think folks like Alex Russell and Jake Archibald are 1000% correct about Progressive Web Apps, and I’ll be gunning hard for those features to land in Edge. (Spoiler alert: it’s on the roadmap!) Progressive Web Apps are, in my opinion, just a consummation of everything HTML5 was meant to be – a pure web experience that’s fast, immersive, and reliable. It can’t land soon enough.

However, I don’t believe it’s the duty of browser vendors to blindly follow the Chrome Consensus. Web standards shouldn’t be about one browser dominating and everybody else playing catch-up. This is why I’m excited to join up on the side of a smaller player like Microsoft (how weird is it to be calling them that?). I want to help influence the future direction of the web platform, and Edge – being a browser with a little something to prove – seems like the perfect place to do that.

Leaving New York

I’m also moving from New York back to my home city of Seattle. To be honest, my decision was primarily for family and relationship reasons – my stepdad is undergoing serious health issues, and my girlfriend (another Seattleite) agreed it was better to settle here than in New York. Seeing as I was already moving back to the Emerald City, Microsoft was an easy choice.

I’m going to miss Squarespace, to which I’m grateful for contributing to my personal growth and for giving me a relaxed yet challenging work environment. I hope to keep in close contact with my former coworkers, so they can let me know how Edge can best improve the web experience for Squarespace and its users. (I’ve already been told that mix-blend-mode is high on the wishlist!)

Most of all, though, I’m going to miss BoroJS – the family of NYC JavaScript meetups that include BrooklynJS, ManhattanJS, QueensJS, JerseyScript, NodeBots NYC, and probably another one by the time you finish this sentence. It’s an amazing group of talented people, and the community is constantly growing thanks to a welcoming environment, a grassroots vibe, and a focus on fun.

I was the first to speak at four different BoroJS meetups – superfecta! Source: @brooklyn_js

I could never adequately describe the magic of BoroJS, but Jed Schmidt has already done an excellent job, so go read that. Suffice it to say that the BoroJS community meant a lot to me, and I’m leaving it with a heavy heart.

Conclusion

The web is the largest open platform (or medium!) for expression that human beings have ever created. It isn’t owned by any one individual or organization, but it brings direct benefit to the lives of billions of people. It is a wondrous and precious thing, which gives a global voice to everyone, from indie bloggers and hobby-app creators to multibillion-dollar businesses.

As Anne van Kesteren recently said, the web is a public good. I look forward to serving it on the Microsoft Edge team.

Many thanks to Nick Hehr and Jan Lehnardt for reviewing a draft of this blog post.

Footnotes

[1] Note that I’m not saying I think everyone needs to follow this progression. If you feel comfortable at step 1, you should stay there, and keep building awesome stuff for the web! However, this flowchart seems to match the careers of lots of folks that I see working for browser vendors.