The struggles of publishing a JavaScript library

If you’ve done any web development in the past few years, then you’ve probably typed something like this:

$ bower install jquery

Or maybe even:

$ npm install --save lodash

For anyone who remembers the dark days of combing Github for jQuery plugins, this is a miracle. But as with all software, somebody had to write that code in order for you to be able to download it. And in the case of tools like Bower and npm, somebody also had to do the legwork to publish it. This is one of their stories.

The Babelification of JavaScript

I tweeted this recently:

I got some positive feedback, but I also saw some incredulous responses from people telling me I only need to support npm and CommonJS, or more snarkily, that supporting “just JavaScript” is good enough. As a fairly active open-source JavaScript author, though, I’d like to share my thoughts on why it’s not so simple.

The JavaScript module ecosystem is a mess these days. For module definitions, we have AMD, UMD, CommonJS, globals, and ES6 modules 1. For distribution, we have npm, Bower, and jspm, as well as CDNs like cdnjs, jsDelivr, and Github itself. For translating between Node and browser code, we have Browserify, Webpack, and Rollup.

Supporting each of these categories comes with its own headaches, but before I delve into that, here’s my take on how we got into this morass in the first place.

What is a JS module?

For the longest time, JavaScript didn’t have any commonly-accepted module system, so the most straightforward way to distribute your code was as a global variable. jQuery plugins also worked this way – they would just look for the global window.$ or window.jQuery and hook themselves onto that.

But thanks largely to Node and the influx of people who care about highfalutin computer-sciencey stuff like “not polluting the global namespace,” we now have a lot more ways of modularizing our code. npm is famous for using CommonJS, with its module.exports and require(), whereas other tools like RequireJS use an alternative format called AMD, known for its define() and asynchronous loading. (It’s never ceased to confuse me that RequireJS is the one that doesn’t use require().) There’s also UMD, which seeks to harmonize all of them (the “U” stands for “universal”).
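
To make the distinction concrete, here’s roughly what one and the same trivial module looks like in each format (the dep dependency and its sum() method are made up for illustration):

// CommonJS (npm / Node style)
var dep = require('dep');
module.exports = function add(a, b) {
  return dep.sum(a, b);
};

// AMD (RequireJS style)
define(['dep'], function (dep) {
  return function add(a, b) {
    return dep.sum(a, b);
  };
});

// Global variable (plain <script> tag style)
window.add = function (a, b) {
  return window.dep.sum(a, b);
};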

In practice, though, there’s no good “universal” way to distribute your code. Many libraries try to dynamically determine at runtime what kind of environment they’re in (here’s a pretty gnarly example), but this makes modularizing your own code a headache, because you have to repeat that boilerplate anywhere you want to split up your code into separate files.

More recently, I’ve seen a lot of modules migrate to just using CommonJS everywhere, and then bundling it up for distribution with Browserify. This can be fraught with its own difficulties though, if you aren’t aware of the subtleties of how your code gets consumed. For instance, if you use Browserify’s --standalone flag (-s), then your code will get built as an AMD-ready, UMD-ready, and globals-ready bundle file, but you might not think to add it as a build step, because the stated use of the --standalone flag is to create a global variable 2.
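
For reference, that build step looks something like this (the standalone name MyLibrary and the file paths are just examples):

$ browserify src/index.js --standalone MyLibrary > dist/mylibrary.js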

However, my new personal policy is to use this flag everywhere, even when I can’t think of a good global variable name, because that way I don’t get issues filed on me asking for AMD support or UMD support. (Speaking of which, it still tickles me that someone had to actually open an issue asking me to support a supposedly “universal” module system. Not so universal after all, is it!)

Package managers and pseudo-package managers

So let’s say you go the CommonJS + Browserify route: now you have an interesting problem, which is that you have both a “source” version and a “distributed” version of your code. (Commonly these are organized into a src/lib folder and a dist folder, but those are just conventions.) How do you make sure your users get the right one?

npm is a package manager that expects CommonJS modules, so typically in your package.json, you set the “main” key to point to whatever your source “src/index.js” file is. Bower, however, expects a bundle file that can be directly included as a <script> tag, so in that case you’ll want to set the “main” inside the bower.json to point instead to your “dist/mypackage.js” or “dist/mypackage.min.js” file. jspm complicates things further by defaulting to npm’s package.json file while actually expecting non-CommonJS modules, but you can override that behavior by including {“jspm”: {“main”: “dist/mypackage.js”}} in your package.json. Whew! We’re all done, right?
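
Put together, the relevant configuration might look something like the following sketch (file contents abbreviated, names illustrative):

// package.json (npm and jspm both read this)
{
  "name": "mypackage",
  "main": "src/index.js",
  "jspm": {
    "main": "dist/mypackage.js"
  }
}

// bower.json (points at the prebuilt bundle instead)
{
  "name": "mypackage",
  "main": "dist/mypackage.js"
}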

Not so fast. As it turns out, Bower isn’t really a package manager so much as a CLI over Github. What that means is that you actually need to check your bundle files into Git, to ensure that those dist/ files are available to Bower users. At the same time, you’ll have to be very cognizant not to check in anything you don’t want people to download, because Bower’s "ignore" list doesn’t actually avoid downloading anything; it just deletes the ignored files after they’re downloaded, which can lead to some enormous Bower downloads. Couple this with the fact that you’re probably also juggling .gitignore files and .npmignore files, and you can end up with some fairly complicated release scripts!

Of course, many users will also just download your bundle file from Github. So it’s important to be consistent with your Git tags, so that you can have a nice tidy Github releases page. As it turns out, Bower will also depend on those Git tags to determine what a “release” is – actually, it flat-out ignores the "version" field in bower.json. To make sense of all this complexity, our policy with PouchDB is to just do an explicit commit with the version tag that isn’t even a part of the project’s main master branch, purely as a “release commit” for Bower and Github.
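
A rough sketch of what such a release commit might look like (the branch name, build script, and version number are all illustrative):

$ git checkout -b build-2.0.0
$ npm run build
$ git add -f dist/
$ git commit -m "2.0.0"
$ git tag 2.0.0
$ git push origin 2.0.0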

What about CDNs?

Github discourages using their hosted JavaScript files directly from <script> tags (in fact their HTTP headers make it impossible), so often users will ask if they can consume your library via a CDN. CDNs are also great for code snippets, because you can just include a <script> tag pointing to the latest CDN release. So lots of libraries (including PouchDB) also support jsDelivr and cdnjs.

You can add your library manually, but in my experience this is a pain, because it usually involves checking out the entire source for the CDN (which can be many gigabytes) and then opening a pull request with your library’s code. So it’s better to follow their automated instructions so that they can automatically update whenever your code updates. Note that both jsDelivr and cdnjs rely on Git tags, so the above comments about Github/Bower also apply.

Correction: Both jsDelivr and cdnjs can be configured to point to npm instead of Github; my mistake! The same applies to jspm.

Browser vs Node

For anyone who’s written a popular JavaScript library, the situation inevitably arises that someone tries to use your Node-optimized library in the browser, or your browser-optimized library in Node, and invariably they run into issues.

The first trick you might employ, if you’re working with Browserify, is to add if/else switches anytime you want to do something differently in Node or the browser:

function md5(str) {
  if (process.browser) {
    // in the browser, use the lightweight spark-md5 library
    return require('spark-md5').hash(str);
  } else {
    // in Node, use the built-in crypto module
    return require('crypto').createHash('md5').update(str).digest('hex');
  }
}

This is convenient at first, but it causes some unexpected problems down the line.

First off, you end up sending unnecessary Node code to the browser. And especially if the Browserified version of your dependencies is very large, this can add up to a lot of bytes. In the example above, Browserifying the entire crypto library comes out to 93KB (after uglify+gzip!), whereas spark-md5 is only 2.6KB.

The second issue is that, if you are using a tool like Istanbul to measure your code coverage, then properly measuring your coverage in Node can lead to a lot of /* istanbul ignore next */ comments all over the place, so that you can avoid getting penalized for browser code that never runs.

My personal method to avoid this conundrum is to prefer the "browser" field in package.json to tell Browserify/Webpack which modules to swap out when building. This can get pretty complicated (here’s an example from PouchDB), but I prefer to complicate my configuration code rather than my JavaScript code. Another option is to use Calvin Metcalf’s inline-process-browser, which can automatically strip out process.browser switches 3.
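
As a rough sketch of that approach (the file layout is hypothetical, though spark-md5 is the real module from the earlier example), the “browser” field swaps one file for another at build time:

// package.json
{
  "browser": {
    "./lib/md5.js": "./lib/md5-browser.js"
  }
}

// lib/md5.js (Node version)
module.exports = function md5(str) {
  return require('crypto').createHash('md5').update(str).digest('hex');
};

// lib/md5-browser.js (swapped in by Browserify/Webpack for browser builds)
module.exports = function md5(str) {
  return require('spark-md5').hash(str);
};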

You’ll also want to be careful when using Browserify transforms in your code; any transforms need to be a regular dependency rather than a devDependency, or else they can cause problems for library users.

Wait, you tried to run my code where?

After you’ve solved Node/browser switching in your library, the next hurdle you’ll likely encounter is that there is some unexpected bug in an exotic environment, often due to globals.

One way this might manifest itself is that you expect a global window variable to exist in the browser – but oh no, it’s not there in a web worker! So you check for the web worker’s self as well. Aha, but NW.js has both a Node-style global and browser-style window as global variables, so you can’t know in advance which other globals (such as Promise or console) are attached to which! Then you can get into even stranger environments like iOS’s JSCore (which is used by React Native), or Electron, or Qt WebKit, or Rhino/Nashorn, or the JavaFX WebView, or Adobe AIR…

If you want to see what kind of a mess this can create, check out these lines of code from Lodash, and weep for poor John-David Dalton!

My own solution to this issue is to never ever check for window or global or anything like that if I can avoid it, and instead use typeof whatever === 'undefined' to check. For instance, here’s my typical Promise shim:

function PromiseShim() {
  if (typeof Promise !== 'undefined') {
    return Promise; // use the native Promise if the environment provides one
  }
  return require('lie'); // otherwise fall back to the 'lie' polyfill
}

Trying to access a global variable that doesn’t exist is a runtime error in most JavaScript environments, but using the typeof check will prevent the error.

Browserify vs Webpack

Most library authors I know tend to prefer Browserify for building JavaScript modules, but especially with the rise of React and Flux, Webpack is increasingly becoming a popular option.

Webpack is mostly consistent with Browserify, but there are points of divergence that can lead to unexpected errors when people try to require() your library from Webpack. The best way to test is to simply run webpack on your source CommonJS file and see if you get any errors.
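
For example, something like this quick smoke test (assuming the webpack CLI is installed and your entry point is index.js):

$ webpack ./index.js test-bundle.js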

In the worst case, if you have a dependency that doesn’t build with Webpack, you can always tell users to specify a custom loader to work around the issue. Webpack tends to give more control to the end-user than Browserify does, so the best strategy is to just let them build up your library and dependencies however they need to.

Enter ES6

This whole situation I’ve described above is bad enough, but once you add ES6 to the mix, it gets even more complicated. ES6 modules are the “future-proof” way of authoring JavaScript, but as it stands, very few tools can consume ES6 directly, and that includes most versions of Node.

(Yes, even if you are using Node 4.x with its many lovely ES6 features like Promises and arrow functions, there are still some missing features, like spread arguments and destructuring, that are not supported by V8 yet.)

So, what many ES6 authors will do is add a "prepublish" script to build the ES6 source into a version consumable by Node/npm (here’s an example). (Note that your "main" field in package.json must point to the Node-ready version, not the ES6 version!) Of course, this adds a huge amount of additional complexity to your build script, because now you have three versions of your code: 1) source, 2) Node version, and 3) browser version.
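
A minimal sketch of what that might look like, assuming Babel’s CLI handles the ES6-to-CommonJS conversion (the src/lib paths and script contents will vary per project):

// package.json
{
  "main": "lib/index.js",
  "scripts": {
    "prepublish": "babel src --out-dir lib"
  }
}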

When you add an ES6 module bundler like Rollup, it gets even hairier. Rollup is a really cool bundler that offers some big benefits over Browserify and Webpack (such as smaller bundle sizes), but to use it, your library’s dependencies also need to be exported in the ES6 module format.

Now, because npm normally expects CommonJS, not ES6 modules, there is an informal “jsnext:main” field that some libraries use to point to their ES6 source. Usage is not very widespread, though, so if any of your dependencies don’t use ES6 or don’t have a "jsnext:main", then you’ll need to use Rollup’s --external flag when bundling them so that it knows to ignore them.
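
In package.json terms, that amounts to pointing two fields at two different versions of your code (paths illustrative): “main” for CommonJS consumers, “jsnext:main” for ES6-aware tools like Rollup.

{
  "main": "lib/index.js",
  "jsnext:main": "src/index.js"
}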

"jsnext:main" is a nice hack, but it also brings up a host of unanswered questions, such as: which features of ES6 are supported? Is it a particular stage of recommendation for the spec, ala Babel? What about popular ES7 features that are already starting to creep into codebases that use Babel, such as async/await? It’s not clear, and I don’t think this problem will be resolved until npm takes a stance one way or the other.

Making sense of this mess

At the end of the day, if your users want your code bad enough, then they will find a way to consume it. In the worst case scenario, they can just copy-paste your code from Github, which is how JavaScript was consumed for many years anyway. (StackOverflow was a decent package manager long before cooler kids like npm and Bower came along!)

Many folks have advised me to just support npm and CommonJS, and honestly, for my smaller modules I’m doing just that. It’s simply too much work to try to support everything at once. As an example of how complicated it is, I’ve created a hello-javascript module that only contains the code you need to support all the environments above. Hopefully it will help someone trying to figure out how to publish to multiple targets.

If you happen to be thinking about hopping into the world of JavaScript library authorship, though, I recommend starting with npm’s publishing guide and working your way up from there. Trying to support every JavaScript user on the planet is an ambitious proposition, and you don’t want to wear yourself out when you’re having enough trouble testing, writing documentation, checking code coverage, triaging issues, and hey – at some point, you’ll also need to write some code!

But as with everything in software, the best advice is to focus on the user and all else will follow. Don’t listen to the naysayers who tell you that Bower users are “wrong” and you’re doing them a favor by “educating” them 4. Work with your users to try to support their use case, and give them alternatives if they’re unsatisfied with your current publishing approach. (I really like services that offer on-demand Browserification.)

To me, this is somewhat like accessibility. Some users only know Bower, not npm, or maybe they don’t even understand the difference between the two! Others might be unfamiliar with the command line, and in that case, a big reassuring “Download” button on a page might be the best way to accommodate them. Still others might be power users who will try to include your ES6 code directly and then Browserify it themselves. (Ask those users for a pull request!)

At the end of the day, you are giving away your labor for free, so you shouldn’t feel obligated to bend over backwards for anybody. But if your driving motivation is to make your code as usable as possible for other people, then I’d say you can’t go wrong by supporting the two most popular options: direct downloads for casual users, and npm/CommonJS for power users. If your library grows in popularity, you can always worry about the thousand and one other methods later. 5

Thanks to Calvin Metcalf, Nick Colley, and Colin Skow for providing feedback on a draft of this post.


1. I’ve seen no compelling reason to call it “ES2015,” except to signal my own status as a smarty-pants. So I don’t.

2. Another handy tool is derequire, which can remove all require()s from your bundle to ensure it doesn’t get re-interpreted as a CommonJS module.

3. Calvin Metcalf pointed out to me that you can also work around this issue by using crypto sub-modules, e.g. require('crypto-hash'), or by fooling Browserify via require('cryp' + 'to').

4. With npm 3, many developers are starting to declare Bower to be obsolete. I think this is mostly right, but there are still a few areas where Bower beats npm. First off, for isomorphic libraries like PouchDB, an npm install can be more time-consuming and error-prone than a bower install, due to native LevelDB dependencies that you’ll never need if you’re only using PouchDB on the frontend. Second, not all libraries are publishing their dist/ code to npm, meaning that former Bower users would have to learn the whole Browserify/Webpack stack rather than just include a <script> tag. Third, not all Bower modules are even on npm – Ionic framework is a popular one that springs to mind. Fourth, there’s the social cost of migrating folks from Bower to npm, throwing away a wealth of tutorials and accumulated knowledge in the process. It’s not so simple to just tell people, “Okay, now start using npm instead of Bower.”

5. I’ve ragged a lot on the JavaScript community in this post, but I still find authoring for JavaScript to be a very pleasurable experience. I’ve been a consumer of Python, Java, and Perl modules, as well as a publisher of Java modules, and I still find npm to be the nicest to work with. The fact that my publish process is as simple as npm version patch|minor|major plus an npm publish is a real dream compared to the somewhat bureaucratic process for asking permission to publish to Maven Central. (If I ever have to see the Sonatype Nexus web UI again, I swear I’m going to hurl.)

IndexedDB, WebSQL, LocalStorage – what blocks the DOM?

When it comes to databases, a lot of people just want to know: which one is the fastest?

Never mind things like memory usage, the CAP theorem, consistency, read vs write speed, test coverage, documentation – just tell me which one is the fastest, dammit!

This mindset is understandable. A single number is easier to grasp than a big table of features, and it’s fun to make grand statements like “Redis is 20x faster than Mongo.” (N.B.: I just made that up.)

As someone who spends a lot of time on browser databases, though, I think it’s important to look past the raw speed numbers. On the client side especially, the way you use a database, and how it interacts with the JavaScript environment, has a big impact on something more important than performance: how your users perceive performance.

In this post, I’m going to take a look at various browser databases with regard not only to their speed, but to how much they block the DOM.

TLDR: IndexedDB isn’t nearly the performance home-run that many in the web community think it is. In my tests, I found that it blocked the DOM significantly in Firefox and Chrome, and was slower than both LocalStorage and WebSQL for basic key-value insertions.

Browser database landscape

For the uninitiated, the world of browser databases can be a confusing one. Lawnchair, PouchDB, LocalForage, Dexie, Lovefield, LokiJS, AlaSQL, MakeDrive, ForerunnerDB, YDN-DB – that’s a lot of databases!

As it turns out, though, the situation is much simpler than it appears on the surface. In fact, there are only three ways of storing data in the browser:

  1. LocalStorage
  2. WebSQL
  3. IndexedDB

Every “database” listed above uses one of those three under the hood (or they operate in-memory). So to understand browser storage, you only really need to understand LocalStorage, WebSQL, and IndexedDB 1.

LocalStorage is a lightweight way to store key-value pairs. The API is very simple, but usage is capped at 5MB in many browsers. Plus the API is synchronous, so as we’ll see later, it can block the DOM. Browser support is very good.
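
For the unfamiliar, the whole API looks more or less like this; keys and values are strings, hence the JSON.stringify(), and every one of these calls is synchronous:

localStorage.setItem('mydoc', JSON.stringify({hello: 'world'}));
var doc = JSON.parse(localStorage.getItem('mydoc'));
localStorage.removeItem('mydoc');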

WebSQL is an API that is only supported in Chrome and Safari (and Android and iOS by extension). It provides an asynchronous, transactional interface to SQLite. Since 2010, it has been deprecated in favor of IndexedDB.

IndexedDB is the successor to both LocalStorage and WebSQL, designed to replace them as the “one true” browser database. It exposes an asynchronous API that supposedly avoids blocking the DOM, but as we’ll see below, it doesn’t necessarily live up to the hype. Browser support is extremely spotty, with only Chrome and Firefox having fully usable implementations.

Now, let’s run a simple test to see when and how these APIs block the DOM.

Thou shalt not block the DOM

JavaScript is a single-threaded programming environment, meaning that synchronous operations are blocking. And since DOM rendering happens on that same thread, this means that when JavaScript blocks, the DOM is also blocked. So if any operation takes longer than about 16ms (the budget for a single frame at 60 frames per second), it can lead to dropped frames, which users experience as slowness, “stuttering,” or “jank.”

This is the reason that JavaScript has so many asynchronous APIs. Just imagine if your entire page was frozen during every AJAX request – wouldn’t the web be an awful user experience if it worked that way! Hence the profusion of programming constructs like callbacks, promises, event listeners, and the like.

To demonstrate DOM blocking, I’ve put together a simple demo page with an animated GIF. Whenever the DOM is blocked, Kirby will stop his happy dance and freeze in place.

Try this experiment: go to that page, open up the developer tools, and enter the following code:

for (var i = 0; i < 10000; i++) {console.log('blocked!')}

You’ll see that Kirby freezes for the duration of the for-loop:


This affects more than just animated GIFs; any JavaScript animation or DOM operation, such as adding or modifying elements, will also be blocked. You can’t even select a radio button; the page is totally unresponsive. The only animations that are unaffected are hardware-accelerated CSS animations.

Using this demo page, I tested four ways of storing data: in-memory, LocalStorage, WebSQL, and IndexedDB. The test inserts a given number of “documents,” which are just unstructured JSON keyed by a string ID. I made a YouTube video showing my results, but the rest of the article will summarize my findings.


In-memory

Not surprisingly, since any synchronous code is blocking, in-memory operations are also blocking. You can test this in the demo page by choosing “regular object” or “LokiJS” (which is an in-memory database). The DOM blocks during long-running inserts, but unless you’re dealing with a lot of data, you’re unlikely to notice, because in-memory operations are really fast.

To understand why in-memory is so fast, a good resource is this chart of latency numbers every programmer should know. Or I can give you the TLDR, which I’m happy to be quoted on:

“Disk is about a bazillion times slower than memory, and the network is about a bazillion times slower than that.”

— Nolan Lawson

Of course, the tradeoff with in-memory is that your data isn’t saved. So let’s look at some ways of writing data that will actually survive a browser refresh.


LocalStorage

In all three of Chrome, Firefox, and Edge, LocalStorage fully blocks the DOM while you’re writing data 2. The blocking is a lot more noticeable than with in-memory, since the browser has to actually flush to disk.

This is pretty much the banner reason not to use LocalStorage. Even if the API only takes a few hundred milliseconds to return after inserting 10000 records, you’ll notice that the DOM might block for a long time after that. I assume this is because these browsers cache LocalStorage to memory and then batch their write operations (here’s how Firefox does it), but in any case the UI still ends up looking janky.

In Safari, the situation is even worse. Somehow the DOM isn’t blocked at all during LocalStorage operations, but on the other hand, if you insert too much data, you’ll get a spinning beach ball of doom, and the page will be permanently frozen. I’ve filed this as a bug on WebKit.


WebSQL

We can only test this one in Chrome and Safari, but it’s still pretty instructive. In Chrome, WebSQL actually blocks the DOM quite a bit, at least for heavy operations. Whereas in Safari, the animations all remain buttery-smooth, no matter what WebSQL is doing.

This should fill you with a sense of foreboding, as we start to move on to the supposed savior of client-side databases, IndexedDB. Aren’t both WebSQL and IndexedDB asynchronous? Don’t they have nothing to do with the DOM? Why should they block DOM rendering at all?

I myself was pretty shocked by these results, even though I’ve worked extensively with these APIs over the past two years. But let’s keep going further and see how deep this rabbit hole goes…


IndexedDB

If you try that demo page in Chrome or Firefox, you may be surprised to see that IndexedDB actually blocks the DOM for nearly the entire duration of the operation 3. In Safari, I don’t see this behavior at all (although IndexedDB is painfully slow), whereas in Edge I see the occasional dropped frame.

In both Firefox and Chrome, IndexedDB is slower than LocalStorage for basic key-value insertions, and it still blocks the DOM. In Chrome, it’s also slower than WebSQL, which does block the DOM, but not nearly as much. Only in Edge and Safari does IndexedDB manage to run in the background without interrupting the UI, and aggravatingly, those are the two browsers that only partially implement the IndexedDB spec.

This was a pretty shocking find, so I promptly filed a bug both on Chrome and on Firefox. It saddens me to think that this is just one more reason web developers will have to ignore IndexedDB – what with the shoddy browser support and the ugly API, we can now add the fact that it doesn’t even deliver on its promise of beating LocalStorage at DOM performance.

Web workers FTW

I do have some good news: IndexedDB works swimmingly well in a web worker, where it runs at roughly the same speed but without blocking the DOM. The only exception is Safari, which doesn’t support IndexedDB inside a worker.

So that means that for Chrome and Firefox, you can always offload your expensive IndexedDB operations to a worker thread, where there’s no chance of blocking the UI thread. In my own tests, I didn’t see a single dropped frame when using this method.
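
Here’s a minimal sketch of that setup; the file names, database name, and document format are all hypothetical:

// main.js: hand the inserts off to a worker so the UI thread stays free
var docs = [{_id: 'doc1', foo: 'bar'}]; // whatever documents you want to insert
var worker = new Worker('db-worker.js');
worker.postMessage({docs: docs});
worker.onmessage = function (e) {
  console.log('inserted ' + e.data.count + ' docs');
};

// db-worker.js: IndexedDB is available inside the worker (except in Safari)
self.onmessage = function (e) {
  var openReq = indexedDB.open('mydb', 1);
  openReq.onupgradeneeded = function () {
    openReq.result.createObjectStore('docs', {keyPath: '_id'});
  };
  openReq.onsuccess = function () {
    var db = openReq.result;
    var tx = db.transaction('docs', 'readwrite');
    var store = tx.objectStore('docs');
    e.data.docs.forEach(function (doc) {
      store.put(doc);
    });
    tx.oncomplete = function () {
      self.postMessage({count: e.data.docs.length});
    };
  };
};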

It’s also worth acknowledging that IndexedDB is the only storage option inside of a web worker (or a service worker, for that matter). Neither WebSQL nor LocalStorage is available inside of a worker for any of the browsers I tested; the localStorage and openDatabase globals just aren’t there. (Support for WebSQL used to exist in Chrome and Safari, but has since been removed.)

Test results

I’ve gathered these results into a consolidated table, along with the time taken in milliseconds as measured by a simple before-and-after timestamp comparison. All tests were on a 2013 MacBook Air; Edge was run in a Windows 10 VirtualBox. “In-memory” refers to a regular JavaScript object (“regular object” in the demo page). Between each test, all browser data was cleared and the page refreshed.

Take these raw numbers with a grain of salt. They only account for the time taken for the API in question to return successfully (or finish the transaction, in the case of IndexedDB and WebSQL), and they don’t guarantee that the data was durably written or that the DOM wasn’t blocked after the operation completed. However, it is interesting to compare the speed across browsers, and it’s pretty consistent with what I’ve seen from working on PouchDB over the past couple of years.

Number of insertions        1000    10000    100000    Blocks?     Notes
Chrome 47
  In-memory                    4       10       217    Yes
  LocalStorage                18      527      4725    Yes
  WebSQL                      45      213      1927    Partially   Blocks a bit at the beginning
  IndexedDB                   64      572      5372    Yes
  IndexedDB in a worker       66      604      6108    No
Firefox 43
  In-memory                    1       12       152    Yes
  LocalStorage                19      177      1950    Yes         Froze significantly after loop finished
  IndexedDB                  114      823      8849    Yes
  IndexedDB in a worker      132     1006      9264    No
Safari 9
  In-memory                    2        8       100    Yes
  LocalStorage                 6       41       418    No          10000 and 100000 crashed the page
  WebSQL                      26      173      1557    No
  IndexedDB                 1093    10658    117790    No
Edge 20
  In-memory                    7       19       331    Yes
  LocalStorage               198     4624       N/A    Yes         100000 crashed the page
  IndexedDB                  315     5657     28662    Slightly    A few frames lost at the beginning
  IndexedDB in a worker      985     2881     24236    No


Key takeaways from the data:

  1. WebSQL is faster than IndexedDB in both Chrome (~2x) and Safari (~100x!) even though I’m inserting unstructured JSON with a string key, which should be IndexedDB’s bread and butter.
  2. LocalStorage is slightly faster than IndexedDB in all browsers (disregarding the crashes).
  3. IndexedDB is not significantly slower when run in a web worker, and never blocks the DOM that way.

Again, these numbers weren’t gathered in a super rigorous way (I only ran the tests once; I didn’t average them or anything), but they should give you an idea of what kind of behavior you can expect from these APIs in different browsers. You can run the demo page yourself to try to reproduce my results.


Running IndexedDB in a web worker is a nice workaround for DOM slowness, but in principle it ought to run smoothly in either environment. Originally, the whole selling point of IndexedDB was that it would improve upon both LocalStorage and WebSQL, finally giving web developers the same kind of storage power that native developers have enjoyed for the past several years.

IndexedDB’s awkward asynchronous API was supposed to be a bitter medicine that, if you swallowed it, would pay off in terms of performance. But according to my tests, that just isn’t the case, at least with IndexedDB’s two flagship browsers, Chrome and Firefox.

I’m still hopeful that browser vendors will resolve all these issues with IndexedDB, although with the spec being over five years old, it sure feels like we’ve been waiting a long time. As someone who does both native and web development for a living, I’m tired of reciting a list of reasons why the web “isn’t quite there yet.” And IndexedDB has been too high on that list for too long.

IndexedDB was the web’s chance to finally get local storage right. It was the chosen one. It was supposed to lead us out of the morass of half-baked solutions and provide the best and fastest way to work with data on the client side. It’s come a long way, but I’m still waiting for it to make good on that original promise.


1: Yes, I’m ignoring cookies, the File API, SessionStorage, the Service Worker cache, and probably a few other oddballs. There are actually lots of ways of storing data (too many in my opinion), but all of them have niche use cases except for LocalStorage, WebSQL, and IndexedDB.

2: In this article, when I say “Chrome,” “Firefox,” “Edge,” and “Safari”, I mean Chrome Canary 47, Firefox Developer Edition 43, Edge 20, and WebKit Nightly 10600.8.9. All tests were run on a 2013 MacBook Air; Edge was run in Windows 10 using Virtual Box.

3: This blog post used to suggest that the DOM was blocked for the entire duration of the transaction, but after an exchange with Ben Kelly I changed the wording to “nearly the entire duration.”

Thanks to Dale Harvey for providing feedback on a draft of this blog post.

Safari is the new IE 2: Revenge of the Linkbait

The things I write rarely have broad appeal. I tend to write about weird esoteric stuff like IndexedDB and WebSQL, maybe throwing the normals a bone with something about CSS animations. I’m not some kind of thought-leader.

My “Safari is the new IE” article, however, got shared incessantly, was picked up by Ars Technica, and attracted the attention of people a lot smarter than me on both sides of the ensuing debate. Having no less than Don Melton call me out on Twitter has pretty much been the highlight of my career so far. It’s been overwhelming, to say the least.

I thoroughly enjoyed the resulting debate, though, and yes, there are valid arguments on both sides. So I’d like to pick up where I left off, and respond to my detractors while doubling-down on my claim that Safari is acting as an anchor in the web community. I’ll also end on a hopeful note, with some suggestions for reconciliation with Apple.

And since, judging by the response, people tend not to read much further than the first few paragraphs (or even the title!), I’ll keep the most important points up top, so nobody can miss them.

“Linkbait,” Safari != IE, etc.

I wrote that article in a mad rush the morning after I got back from EdgeConf. It was the culmination of a lot of frustration I’ve had with Safari over the past year or so, compounded by the fact that I just got back from a conference where I would have loved to pick the brain of somebody from Apple about it.

To be honest, I penned the whole thing in one go before settling on the title, which was based on something funny Calvin Metcalf had said in a presentation. Yes, the title was snappy, and yes, it was a bit sensational. But hey, nobody would be talking about it if I had called it “Meditations on the uncanny parallels between…” I choose headlines that grab attention; welcome to journalism 101.

So I find the “linkbait” accusations to be a boring argument, because it’s an argument that can be had without discussing the content of the article at all. More interesting were the arguments that the IE analogy is flawed, because Safari doesn’t have anything like ActiveX and Apple didn’t walk away from Safari for 5 years. All very good points. There’s also a case to be made that Android is more reminiscent of the old IE days, what with so many devices frozen in time on ancient versions of WebKit. (Although Lollipop, default Chrome, and Crosswalk have helped a lot lately.)

That being said, my point was to compare Safari to IE in terms of 1) not keeping up with new standards, 2) maintaining a culture of relative secrecy, and 3) playing a monopolistic role, by not allowing other rendering engines on iOS. Those accusations are pretty undeniable.

Web Components, Shadow DOM

I basically pulled these two examples out of a hat, because they were oft-discussed at EdgeConf, but I couldn’t find where Apple had even commented on them. Of course, such information is buried in mailing list discussions where I should have done the research to find it. (“Read the mailing list” is the “RTFM” of the W3C community.)

So yes, it was unfair of me to say that Apple “has shown no public interest” in those specs. I corrected the article and personally apologized to Ryosuke Niwa, who has been very active in the design of Web Components and Shadow DOM. Mea culpa; I made a dumb mistake.

The “user-centric web”

This point was made in a rebuttal article by Rene Ritchie, and although I enjoyed reading it, I don’t find the argument very persuasive. It boils down to a false dichotomy, saying that Apple can either focus on user-facing features (like speed and battery life) or new web APIs, but it can’t do both.

Other browser vendors, though, seem to be able to keep up with Google’s relentless pace (another counterargument), so I don’t understand how Apple, with its heaps of moolah, can’t do the same. And to those saying “it’s not the money, it’s finding the right people,” well, then maybe there’s a valid question about why Apple can’t find enough good browser developers. Something must have driven Google away when they forked WebKit into Blink (taking most of the committers with them), and I wonder if that same thing is keeping away new talent. But that’s a story I suspect only the WebKit/Blink developers really know.

From my own interactions with the WebKit developers, I can only conclude that they are sincere folks who take an extreme pride in their work. However, it’s become clear to me that Apple has different priorities, and that those priorities are limiting the potential of the WebKit team. One need only glance at the Safari 9 release notes to see both a paucity of new features, as well as a focus on proprietary and user-facing features. In terms of standards, a one-year release from Safari is comparable to two releases from Chrome, representing about 3 months of work. That’s a shockingly deep deficit.

(Proprietary or user-facing: force touch events, AirPlay, Picture in Picture, pinned tab icons, secure extensions, shared links, content blocking. Standards: scroll snapping, filters, ES6, unprefixing. Other: responsive design mode, Web Inspector overhaul, SFSafariViewController.)

As for Ritchie’s other argument that Apple is shying away from “native-hopeful” web features that “don’t make sense,” given the miraculous speed boosts that both WebKit and Chrome have demonstrated recently, I doubt anyone on either team actually shares that opinion. The web can reach 60 FPS, and where it can’t, that’s an argument for more progress in the web, not less.

IndexedDB, my idée fixe

Since nobody else brought this up, I’ll offer what I think is the best counterpoint to my article. What I basically did was take Apple’s utter bungling of IndexedDB, add in their lack of clear signals and iron grip on iOS, and extrapolate that Safari is dragging us back into some kind of new web Dark Ages. Okay! It sounds a bit hyperbolic, I’ll agree.

However, you’ll have to forgive me if I fixate on IndexedDB. While CSS features like scroll snap points and position:sticky are nice, I happen to think the thing that stores user data is pretty damned important. Imagine writing an iOS/Android app without SQLite/CoreData, and you’ll understand how badly web devs need consistent IndexedDB support. The fact that Apple messed it up so catastrophically shows a carelessness in their approach to the new, “appy” web.

Of course, that statement exposes my own bias in this debate, which I’ll gladly divulge. Namely: I’m an Android developer who’s tired of writing apps for a single platform, and wants webapps that can compete with native apps. That’s why I contribute to PouchDB – because although the IndexedDB spec is 5 years old, you still need a mountain of shims to get anything done with it. It’s a nasty situation, but PouchDB makes it bearable.

Even with tools like PouchDB, though, it’s still maddeningly difficult for web developers to create anything resembling a native app. And for that, the blame usually falls square on Safari, and especially mobile Safari. For instance, when PouchDB users ask me how much data they can store on iOS, and they find out it’s capped at 50MB with a modal popup after 5MB, they often say, “Thanks, but I’ll write a native app instead.” Yet when I complain about this stuff to Apple, they mostly shrug their shoulders.

Working on mobile webapps, I often find myself reaching for Safari polyfills (e.g. FastClick, which I cannot believe we still need), or relying on years-old standards that are in sore need of a replacement (WebSQL, AppCache, touch icons). Whereas at the same time, I’m seeing constant innovation from other browsers to improve the state of the “appy” web: Service Worker from Google, pointer events from Microsoft, and just about everything Mozilla has done on Firefox OS.

Gestures (touch-action). Vibration API. Ambient Light API. WebRTC (or Microsoft’s version). Device Orientation API. Permissions API. There’s a laundry list of things that would make my life easier as a webapp developer, and the one unifying feature is that their CanIUse page has two big red columns under “iOS” and “Safari.” You can add Web Manifests and Push API if you want to talk about stuff that’s still in the planning phase (not by Apple, though).

So yeah, by the raw HTML5Test numbers, Safari isn’t so far behind. But in the stuff that matters to me as a mobile and webapp developer, they’ve got a lot of catching up to do. And based on their priorities and release cycle, I’m not confident that they’re even keeping pace.

Engaging the community

The response to my article from web developers has been telling. Amidst all the “hear hear”s and their own tales of woe with Safari, you could detect that web developers just have an intense distrust of Apple. It’s revealing how quickly they started a petition against Apple, which to be honest made me kinda uncomfortable.

Let’s explore those feelings a bit, though. Why are web developers so wary of Apple? I have my own answer, and it touches on some of the most important points I raised in the article.

The web is increasingly becoming an open community. JavaScript is the most popular language on Github, and web developers are drowning in conferences, meetups, and hackathons. I regularly attend several meetups in New York City, many of which did not even exist a year ago. There are so many of these things nowadays, I can attend nearly one a week.

Through these community events and my activity on Github, I’ve had the pleasure of meeting and collaborating with people from Mozilla, Microsoft, and Google. Sometimes they’ve even come out of the blue to propose a pull request to PouchDB, or ask for feedback on IndexedDB. And yet, I’ve never once seen an Apple employee at any of these events, nor have I ever worked with any of them on an open-source project. My sole interaction with Apple has been through their bugtracker (or on Twitter, poking them to look at their bugtracker).

With even Microsoft being all chummy these days, the web has become an open party, but Apple is frequently tossing their invitation in the trash. This extends beyond the official discussions in the W3C mailing lists. As IE engineer Jacob Rossi put it, conferences like EdgeConf are “a crucial part of participating in standards.” As with any human endeavor, some of the most important discussions about the web platform are happening in hallways, at restaurants, and in face-to-face meetings. Apple can’t complain about getting cut out of the standards process, if they won’t even show up to join the conversation.

Apple’s lack of engagement with the broader web community has also been damaging to their reputation. By not making the effort to attend meetups, write blog posts, or set up forums for feedback, it’s no wonder developers are left with the impression that Apple doesn’t care about the web. Even just sending a developer evangelist to a few meetups to smile, nod, and answer questions politely would do wonders for their public perception.

Furthermore, Apple’s lack of boots on the ground at everyday developer soirées means that they’re increasingly out of touch with what developers want from the web platform. The fact that so many meetups and conferences have sprung up recently, and the fact that the web is fast becoming the world’s most advanced cross-platform application runtime are not isolated incidents. Developers like myself are getting excited about the web precisely because it’s supplanting all the old application paradigms. But as I pointed out above, the “appy” aspects of the web are exactly where Safari tends to falter.


Personally what I want out of this whole debate is for Apple to realize that the web is starting to move on without them, and that their weird isolationism and glacial release cycle are not going to win them any favors in this new, dynamic web community. I don’t even want them to open up iOS to other browsers nearly as much as I want them to just start coming to events and talking with web developers. Once they hear our gripes and see the frustration in our eyes as we describe how much we’ve struggled to support their browser, I think they’ll be motivated by empathy alone to start fixing these problems.

I do regret the generalizations and errata in my original post. However, I don’t regret starting the debate, because clearly I’ve touched a nerve at Apple, while casting a light on the widening divide between them and the rest of the web community. Maybe a harsh diatribe is exactly what Apple needs to shake them out of their complacency, even if it loses me a few friends in Cupertino.

So what I’m saying is this: next time I’m at a conference, I hope someone from Apple comes up to me, takes off a glove, and slaps me right in the face. Number one, because I kinda deserve it for being a jerk to them, and number two, because that would mean they’re finally coming to conferences.

Thanks to Jan Lehnardt, Chris Gullian, and Dale Harvey for reviewing a draft of this post.

Safari is the new IE

Last weekend I attended EdgeConf, a conference populated by many of the leading lights in the web industry. It featured panel talks and breakout sessions with a focus on technologies that are just now starting to emerge in browsers, so there was a lot of lively discussion around Service Worker, Web Components, Shadow DOM, Web Manifests, and more.

EdgeConf’s hundred-odd attendees were truly the heavy hitters of the web community. The average Twitter follower count in any given room was probably in the thousands, and all the major browser vendors were represented – Google, Mozilla, Microsoft, Opera. So we had lots of fun peppering them with questions about when they might release such-and-such API.

There was one company not in attendance, though, and they served as the proverbial elephant in the room that no one wanted to discuss. I heard them referred to cagily as “a company in California” or “a certain fruit company.” Their glowing logo illuminated nearly every laptop in the room, and yet it seemed like nobody dared speak their name. Of course I’m talking about Apple.

I think there is a general feeling among web developers that Safari is lagging behind the other browsers, but when you go to a conference like EdgeConf, it really strikes you just how wide the gap is. All of the APIs I mentioned above are not implemented in Safari, and Apple has shown no public interest in them. (Correction: actually, they have.) When you start browsing, the list goes on and on.

Even when Apple does implement newer APIs, they often do it halfheartedly. To take an example close to my heart, IndexedDB was proposed more than 5 years ago and has been available in IE, Firefox, and Chrome since 2012. Apple, on the other hand, didn’t release IndexedDB until mid-2014, and when they did, they unveiled a bafflingly incompetent implementation that was so bad, it’s been universally derided as unusable. (LocalForage, PouchDB, and YDN-DB, the major IndexedDB wrappers, all ignore Safari’s version and fall back to WebSQL.)

Now, after one year, Apple has fixed a whopping two bugs in IndexedDB (out of several), and they’ve publicly stated that they don’t find much value in working on it, because they don’t see “a huge use.” Well duh, nobody’s going to use IndexedDB if the browser support is completely broken. (Microsoft, I’m looking at you too.)

It’s hard to get insight into why Apple is behaving this way. They never send anyone to web conferences, their Surfin’ Safari blog is a shadow of its former self, and nobody knows what the next version of Safari will contain until that year’s WWDC. In a sense, Apple is like Santa Claus, descending yearly to give us some much-anticipated presents, with no forewarning about which of our wishes he’ll grant this year. And frankly, the presents have been getting smaller and smaller lately.

In recent years, Apple’s strategy towards the web can most charitably be described as “benevolent neglect.” Although performance has been improving significantly with JSCore and the new WKWebView, the emerging features of the web platform – offline storage, push notifications, and “installable” webapps – have been notably absent on Safari. It’s tempting to interpret this as a deliberate effort by Apple to sabotage any threats to their App Store business model, but a conspiracy seems unlikely, since that part of the business mostly breaks even. Another possibility is that they’re just responding to the demands of iOS developers, which largely amount to 1) more native APIs and 2) Swift, Swift, Swift. But since Apple is pretty good at keeping a lid on their internal process, it’s anyone’s guess.

The tragedy here is that Apple hasn’t always been a web skeptic. As recently as 2010, back when Steve Jobs famously skewered Flash while declaring that HTML5 is the future, Apple was a fierce web partisan. Lots of the early features that helped webapps catch up to native apps – ApplicationCache, WebSQL, touch events, touch icons – were enthusiastically implemented by WebKit developers, and many even originated at Apple.

Around that same time, when WebSQL was deprecated in favor of IndexedDB, you’ll even find mailing list arguments where Apple employees vigorously defended WebSQL as a must for performant web applications. Reading the debates, I sense a lot of bitterness from Apple after IndexedDB won out. The irony here is that Apple nearly gave us the tools to undermine their own proprietary platform, but by rejecting WebSQL, we gave them an opportunity to rethink their strategy and put the brakes on any new progress in web APIs.

I find Application Cache, which will probably soon be deprecated in favor of Service Worker, to be a similar story. It gained wide browser support at a time when Apple was still interested in the web, but unfortunately it turned out to be a rushed, half-baked solution to the problem. I worry that Service Worker might suffer the same fate as IndexedDB, if Apple continues to lag behind the pack.

At this point, we in the web community need to come to terms with the fact that Safari has become the new IE. Microsoft is repentant these days, Google is pushing the web as far as it can go, and Mozilla is still being Mozilla. Apple is really the one singer in that barbershop quartet hitting all the sour notes, and it’s time we start talking about it openly instead of tiptoeing around it like we’re going to hurt somebody’s feelings. Apple is the most valuable company in the world; they can afford to take a few punches.

So what can we do, when one of the major browser vendors is stuck in the 2010 model, and furthermore has a total monopoly on iOS (because no, “Chrome for iOS” is not really Chrome), showing a brazenness even beyond that of 90’s-era Microsoft? I see three major coping mechanisms:

  1. Stick with what worked in 2010, and use polyfills to support Safari. This is a strategy I highlighted in my opening talk for the frontend data panel, where I showed that you can get nearly the same features as Service Worker by using AppCache and PouchDB (which falls back to WebSQL on Safari). This approach should appeal to the vast majority of web developers, who tend to hit the snooze button on new technologies until they’re available cross-browser. On the other hand, it’s also a good way to coddle Apple and give them no incentive to step up their game.

  2. Use technologies like Service Worker that don’t work on Safari, and consider it a progressive enhancement. Alex Russell made a great point about this during the “installable webapps” breakout, arguing that if we create a large body of free webapps that use Service Worker, and which work fabulously well on Android but only meh on iOS, then it will be in Apple’s interest to suck it up and support the API. Unfortunately, while this would be the best outcome for the web community as a whole, it’s going to be hard to convince developers to write code that only reaches half their audience.

  3. Contribute to WebKit. The core of Safari is still, after all, an open-source project, so there’s no practical reason why anyone with the C++ chops couldn’t roll up their sleeves and implement the new APIs themselves. (The absurdity of giving free labor to the richest company on earth isn’t lost on me, but we’re talking desperate times here.) The major problem I see with this approach is that WebKit is not Safari, and Apple could still decide to not deploy WebKit features to their flagship browser. To circle back to IndexedDB again, it was fully implemented by Google well before the Blink fork, and for several years, Apple could have just flipped a bit in their build script to include Google’s implementation, but instead they decided to waffle for a few years and then replace it with their own broken version. (Update: see note on IndexedDB below.) There’s no guarantee they wouldn’t do the same thing for other outside contributions.

So in summary, I don’t know what the right solution is. I’ve engaged many of the WebKit developers on Twitter, and I’ve even done the hard work of writing reproducible test cases and trying out their beta software so I can give them early warning. (Yes, I fork over $200 a year to Apple, for the privilege of testing their own software for them.) And frankly I’ve grown bitter, because most of the bugs I’ve reported have languished, with little response other than a link to their internal Radar tracker.

I appreciate the work that many of the WebKit developers have been doing (Brady Eidson has been particularly patient and helpful), but at this point it seems to me that the best strategy toward Apple may be the stick rather than the carrot. So I’m inclined to take up Alex Russell’s solution outlined in #2 above, and to start promoting the adoption of new web technologies – Safari support be damned.

If we can start building a vibrant ecosystem of web applications where Apple is not invited, then maybe they’ll be forced to pull a Microsoft and make their own penitent walk to Canossa. Otherwise we’ll have to content ourselves with living in the web of 2010, with Safari replacing IE as the blue-tinged icon that fills web developers with dread.

Update on IndexedDB: I’m told by Ryosuke Niwa at Apple that Google’s IndexedDB was not so easily usable in Safari, even pre-fork. So it wasn’t a build flag. However, in a private discussion with a Google employee, I’m told that the IPC layer was abstracted to a degree that it shouldn’t have been too difficult for Safari to use. In any case, it’s true that Apple had something close to a fully working implementation years before they shipped their inferior version.

You can comment on Hacker News, on Ars Technica, and on Twitter. Thanks to Jan Lehnardt, Dave Myers, Beckie Choi, and Julian Applebaum for providing feedback on a draft of this blog post.

The state of binary data in the browser

I recently wrote a pseudo-post as a GitHub readme, mostly because I couldn’t be bothered to turn it into a real blog post. It had a nice side effect, though, which was that I got a pull request correcting some errata. It’s kinda neat to get a pull request on a blog post.

For the sake of completeness, though, I’m re-blogging it here. Here’s a link:

The state of binary data in the browser, or: “So you wanna store a Blob, huh?”

Offline-first is people-first

A lot of the advice we get as programmers comes with an expiration date. It’s valuable for exactly the lifespan of a particular framework or tool, and then we can safely ignore it when the next framework rolls around.

Other advice is timeless. I consider Joel Spolsky’s blog, Joel on Software, and his associated books on UI design, to fall into this category.

Even though Joel worked at Microsoft on such now-ancient products as Excel 97, his advice rings as true today as it did the last century.

Joel’s blog is packed with timeless bits of wisdom, but one of my favorite pieces of advice from him is one based on empathy. Let’s call it the bathtub principle.

The bathtub principle

Joel says:

Hotel bathtubs have big grab bars. They’re just there to help disabled people, but everybody uses them anyway to get out of the bathtub. They make life easier even for the physically fit.

In the same way, Joel argues that we should design UIs for the least-capable among us – those with poor sight, or limited motor skills, or limited linguistic capacity. The reason being: if you can design a UI that your grandparent can use, then you’ve probably designed a UI that’s pleasant for you to use as well.

As a concrete example, consider the familiar “File Edit …” menu bar at the top of the screen in Mac OS X. These menu items are effectively “half an inch wide and a mile high,” because you can keep pushing your mouse up past the top of the screen and still be able to click on them.

This UI pattern is a godsend for arthritic folks. They no longer have to struggle with a stubborn mouse that just won’t point at the right spot. But even those of us who are adept with mice will appreciate this feature. It’s just easier to use.

Microsoft only belatedly realized the value of this design, and early versions of Windows forced you to position your mouse at a very precise distance from the edge of the screen in order to hit that Start button. Later versions of Windows fixed this by allowing you to jam your mouse all the way to the corner.


When I try to convince my peers that offline-first is a valuable design principle to embrace, I’m often faced with the response, “But people are rarely offline! Our users might spend a fraction of their time in the subway, or in an airplane, or on the road. Why should we code for an edge case?”

This perspective is badly mistaken. If you focus on the “offline” part of “offline-first,” you’re missing the point.

Offline-first is about more than just users who are literally offline – instead, it’s a corollary of the bathtub principle. If you design your UIs for people who are disconnected or only infrequently connected, you create better interfaces for everyone, including those who are online with fast connections.

That’s because, in the offline-first mindset, your primary data interaction is with the local data store, rather than a remote data store. Local data stores are always faster than remote data stores, so this leads to a snappier, and therefore better, user experience. (Don’t believe the marketers trying to sell you their cloud service du jour. The speed of light happens to be a fixed constant in our universe.)

As a real-world example, consider a site whose autosuggestion box is ridiculously fast – much faster than we’re used to seeing with, say, Google’s autosuggestions. You type, and the words appear as quickly as your keystrokes. How about that.

If you want to know what enables this otherworldly speed, it’s simple: they’re just using localStorage. When you first visit, a fat 1-megabyte wad of JSON is immediately downloaded into localStorage. By the time you’ve absorbed the UI and clicked the search box, all the APIs are available locally for you to query. That’s it.
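Here’s a minimal sketch of that trick. The /api-index.json endpoint, the cache key, and the shape of the entries are all made up for illustration – the point is simply that after the first visit, every lookup happens against local data, with no network in the way.

// Hypothetical endpoint and cache key; the structure is what matters.
function loadApiIndex() {
  var cached = localStorage.getItem('api-index');
  if (cached) {
    return Promise.resolve(JSON.parse(cached));
  }
  return fetch('/api-index.json')
    .then(function (res) { return res.json(); })
    .then(function (index) {
      // Stash the whole wad of JSON locally for next time.
      localStorage.setItem('api-index', JSON.stringify(index));
      return index;
    });
}

loadApiIndex().then(function (index) {
  // Autosuggest can now filter the in-memory index on every keystroke.
  var matches = index.filter(function (entry) {
    return entry.name.indexOf('indexedDB') === 0;
  });
  console.log(matches);
});

In a real app you’d also want to version the cached blob and refresh it in the background, but even this naive version keeps the network off the critical path.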

Web vs. native

Web developers should take note of this mentality. It’s a trick we’ve been using in the native app space for a long time, and at the risk of sounding smug, we’ve been doing pretty well as a result of it.

I consider myself both an Android developer and a web developer. When I write an Android app, one of the first things I think about is how to design the tables and schemas for the SQLite database. From there, I translate that vision into UI elements that the user actually interacts with, and then only later do I think about how to update the local data store with data from the server.

This is offline-first in a nutshell, and many native developers are already doing it. It’s second nature to us. It’s in our blood. To a lot of native developers I talk to, “offline-first development” is just “development.”

Given the recent success of native apps vs. web apps, this is an area where web developers could really benefit by taking a lesson from native developers. And fortunately, we no longer have to give up on the web as a platform when we give up on the notion of an ever-present Internet connection.

Today, there are a variety of tools that web developers can take advantage of to adopt an offline-first mindset – most notably, the new in-browser databases like Web SQL and IndexedDB, as well as the libraries built on top of them that ease the process.
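For a taste of what the raw API looks like, here’s a bare-bones IndexedDB sketch. The database and store names are arbitrary, and most apps will want a wrapper library rather than writing this by hand:

// Open (or create) a local database and define its schema up front.
var req = indexedDB.open('offline-first-demo', 1);

req.onupgradeneeded = function (e) {
  // Runs on first visit (or version bump): design the local schema first,
  // just as a native developer would design their SQLite tables first.
  e.target.result.createObjectStore('docs', { keyPath: 'id' });
};

req.onsuccess = function (e) {
  var db = e.target.result;
  var tx = db.transaction('docs', 'readwrite');
  tx.objectStore('docs').put({ id: 'greeting', body: 'hello from the local store' });
  tx.oncomplete = function () {
    db.transaction('docs').objectStore('docs').get('greeting').onsuccess = function (evt) {
      console.log(evt.target.result.body); // reads never touch the network
    };
  };
};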

Just remember: offline-first isn’t only about those who are offline. It’s about making a faster and smoother experience for everyone, regardless of whether they’re offline, intermittently online, online with a poor connection, or surfing on a cool 75 Mbps FiOS connection.

Offline-first is for everyone. Offline-first is people-first.

The limitations of semantic versioning

With the recent underscore 1.7.0 brouhaha, there’s been a lot of discussion about the value of semantic versioning. Most of the JavaScript community seems to take the side of Semver, with Dominic Tarr even offering a satirical Sentimental Versioning spec.

Semver is so deeply entrenched in the Node community that it’s hard to question it without making yourself an easy target for ridicule. Plus, much of the value of Semver comes from everybody collectively agreeing on it, so as with vaccines, dissenters risk being labeled as a danger to the community at large.

To me, though, most of this discussion is missing the point. The issue is not semantic versioning, but rather the build systems we’ve created that assume and promote automatic updates based on semantic versioning – i.e. npm. We wouldn’t be so worried about a breaking change in underscore 1.7.0 if thousands of projects weren’t primed to auto-update their underscore dependencies.

As a developer, I divide my time pretty evenly between Java and JavaScript, so I may have a unique perspective here. I love the npm and Node communities, and I’ve been happily using and publishing modules for the past year or so. But as a community, I think it’s time we started being honest with ourselves about what Semver and auto-updating are actually buying us.

Recap of auto-updating

Recently, npm changed its default settings to automatically add dependencies like this:

"some-dependency" : "^1.2.1"

instead of like this:

"some-dependency" : "~1.2.1"

The caret ^ means the dependency will automatically update to the latest minor version relative to 1.2.1 (e.g. 1.3.5), whereas the tilde ~ means the dependency will only update to the latest patch version relative to 1.2.1 (e.g. 1.2.7).

So that means that when you do npm install some-dependency, by default your package.json will be modified to do caret-updating rather than the more humble tilde-updating. But in any case, the default has traditionally been some flavor of auto-updating.
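If you want to see for yourself what those two ranges actually allow, the semver module – the same library npm uses to resolve them – makes it easy to check:

// npm install semver
var semver = require('semver');

semver.satisfies('1.3.5', '^1.2.1'); // true  – caret allows new minor versions
semver.satisfies('1.2.7', '~1.2.1'); // true  – tilde allows new patch versions
semver.satisfies('1.3.0', '~1.2.1'); // false – but not new minor versions
semver.satisfies('2.0.0', '^1.2.1'); // false – neither range crosses a major version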

Life in Javaland

A comparison with Java is useful. The standard packaging/dependency system in the Java world is Maven, where it’s always been recommended to nail down your dependencies to a very precise version in your pom.xml.


This is partly a reflection of Java’s enterprisey-ness, where the thought of auto-updating would probably horrify the local Chief Security Officer (or whatever). But it’s also a reflection of the fact that Maven predates Semver’s current boom in popularity, back in the bad old days when there were few expectations about what might change between versions.

However, if you actually want to use auto-updating, Maven does have so-called “snapshot” dependencies (e.g. 1.2.1-SNAPSHOT), which can change anytime, according to the whims of the publisher. Usually these are only used for internal development, with the big uppercase letters SNAPSHOT designed to warn you in a stern voice that what you’re doing is dangerous.

Google actually flirted with auto-updating for a while with the + modifier in Android dependencies (e.g. 1.2.1+), but now they’ve shied away from it, and if you try to add a + dependency in Android Studio, it’ll throw a big warning at you to let you know you should nail down your dependencies.

So okay, we have one community where auto-updating is a dirty word, and another where it’s the default. They can’t both be right, so as Node developers, let’s consider why we might want to be suspicious of auto-updating.

Drawback 1: minor/patch versions can still break something

Since Node was one of the first communities to really embrace Semver, it’s tempting to say “it’s different this time,” and that that’s why we can get away with auto-updating but nobody else can. However, humans make mistakes, and Semver isn’t as airtight a guarantee as we’d like to believe.

When we publish a patch or a minor release, we try our darnedest not to include breaking changes, but sometimes a bug slips through. Or sometimes a change that we consider to be non-breaking actually turns out to be breaking for somebody else further downstream.

I help maintain a fairly large open-source project (PouchDB), so I’ve seen this play out plenty of times. It’s not common, but it happens. One day I push a new pull request, and suddenly the Travis tests are failing due to some mysterious error. My first instinct is to assume the bug is in the pull request, but after some digging I realize that the master branch is failing too. What gives?

Well, the author of the foo module, which is depended upon by the bar module, which is depended upon by PouchDB, decided to change something in a patch release, but we weren’t prepared for it upstream, so now our tests are broken. This is an annoying situation to debug, because a git bisect is not enough. The same exact code that worked yesterday is broken today.

Typically what I do in these cases is step through the code to identify the offending module, check to see when the last master branch was pushed, try to figure out what versions of that module were published in the interim, and then either file a bug on that module or write new code to work around it.

It’s a tedious process. And it can be especially irritating, because when you’re writing a PR, you’re usually trying to fix some unrelated issue. Hunting down bugs in somebody else’s module is just an unwelcome distraction.
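For what it’s worth, a couple of stock npm commands can at least narrow down the suspect (foo being the hypothetical offender from above):

$ npm ls foo          # which version of foo actually got installed?
$ npm view foo time   # when was each version of foo published?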

Drawback 2: it’s a security problem

This is such a big issue that I’m surprised nobody else in the Node community, as far as I know, has mentioned it.

npm has made it trivially easy to publish modules, which is awesome. I love that when I want to publish a new Node module, it’s just an npm publish away. Whereas if I want to publish a Java project to Maven Central, there’s a lot of ceremony in configuring my Maven credentials, doing a gradle uploadArchives, and then clicking around in the Sonatype Nexus interface. It’s a pain.

npm’s ease-of-use has a weakness, though. Given that the majority of Node projects use caret- or tilde-versioning, and given that it’s so easy to npm publish, what’s to stop some nogoodnik from stealing a prolific Node developer’s laptop (let’s say Substack or Mikeal Rogers), and then publishing some malware as a patch release to all their popular libraries? Bam, suddenly everybody’s continuous integration systems are downloading malware from npm and pushing it out to thousands of running systems.

You may trust Substack, but do you trust that he’s secured his laptop?

Of course, if you avoid caret- and tilde-versioning in your package.json, then this isn’t a problem. You can already inspect the code you’re running, and make sure you trust it. One might argue that this is the “more secure” approach, but that would negate one of auto-updating’s main selling points, which is that patch releases can supposedly contain security patches.
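If you do want to opt out, a reasonably recent npm makes it straightforward to pin exact versions, or to freeze the whole tree once you trust it:

$ npm install --save --save-exact some-dependency   # writes "1.2.1", not "^1.2.1"
$ npm config set save-exact true                     # make exact versions the default
$ npm shrinkwrap                                     # lock down the entire dependency tree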

Drawback 3: auto-updating has a limited shelf life

This is the point I would really like to get across to other Node developers.

Right now it’s mostly fine for dependencies to break upstream, because we can remain pretty confident that if we file a bug on a project, the author will respond quickly and fix it.

For instance, a month ago I found a bug in Express, and not only did the maintainer (the awesome Doug Wilson) fix it in a matter of minutes, he also took it upon himself to come into the express-pouchdb project and submit a bunch of PRs. Experiences like that really exemplify what’s great about OSS development.

However, right now Node and npm are in their heyday. Changes are coming fast and furious, the community is active and engaged, and so of course caret- and tilde-versioning are pretty low-risk. Even if a bug is introduced in a minor or patch version upstream, it’ll probably get resolved quickly.

Imagine a future after the current boom, though, where npm occupies a position more like CPAN – still useful, but long in the tooth. Popular modules have fallen into disrepair, GitHub issues go unresolved. Maybe everyone’s moved on to Go.

In this post-apocalyptic future for Node, I can easily imagine developers saying, “Oh yeah, npm? That’s that thing where whenever you require() something, you have to immediately go in and remove all the tildes and carets.” Or worse, maybe someone will have to write a proxy in front of npm to act as a sort of Wayback Machine, shrinkwrapping each module to the dependencies it had when it was last published.

Don’t kid yourself, Noders – someday this future will be upon us. Project maintainers will eventually lose interest, move on to other projects, or maybe find that the obligations of family/work/whatever have reduced their ability to respond to bugs on GitHub. Maintainers will even die – yes, young coder, you too are mortal – and ideally whatever software we write should remain useful even after we’re gone. Ideally.

I don’t have the answers, but I do know that we as a community need to start preparing for eventualities like this. Right now it may feel like the party’s never going to end, but eventually the booze will run out, the music will stop, and we will have to make a sober evaluation of our software’s legacy. Recognizing the limitations of Semver and of caret- and tilde-versioning is a step in that direction.

Postscript: why do people rebel against Semver?

I have a hunch about this: I think it’s because the larger culture hasn’t adjusted to Semver yet.

For someone steeped in Node practices, it may be obvious that version 3.0.0 of a module has introduced breaking changes since 2.0.0. To the average layman, though, a major version change indicates some big overhaul of the software, along with a slew of new features. This is a holdover from the shrink-wrapped-software era, when a new major version meant a shiny new box in the store, and it’s still the prevailing view in popular understanding: “web 2.0,” “government 2.0,” etc.

What Semver ignores is that bumping a major version has marketing value.

We definitely experienced that recently in PouchDB. I found it funny that after we released version 3.0.0, we suddenly got a lot more traffic and stargazers, and we were even featured in JavaScript Weekly. However, the biggest change in 3.0.0 was subtracting features – that’s why we incremented the major version!

By contrast, the previous version, 2.2.3, constituted a huge internal restructuring that brought better stability, but you wouldn’t really know it, since it was just a patch version. And it got much less attention than 3.0.0.

I suppose the payoffs from incrementing a major version may dwindle once you get into Chrome-like territory with version 36, 37, etc., but for the low versions, it definitely seems to help boost your project’s public visibility.

