Progressive enhancement isn’t dead, but it smells funny

Update: this blog post sparked a lively debate. You may want to read the responses from Laurie Voss, Jeremy Keith, Aaron Gustafson, and Christian Heilmann.

Progressive enhancement is a touchy subject. It can be hard to discuss dispassionately because, like accessibility, it’s often framed as an issue of empathy and compassion.

The insinuation is that if you don’t practice progressive enhancement, then maybe you’re just a careless elitist, building slick websites for the wealthy 1% who have modern browsers and fast connections, and offering only a sneering “let them eat cake” to everybody else. Using progressive enhancement can be seen as a moral decision as much as a technical one.

So what exactly is progressive enhancement? At the risk of grossly oversimplifying things, here are the two major interpretations I’ve seen:

  1. Broad version: start with a baseline of functionality, enhance upwards based on capabilities.
  2. Narrow version: start with HTML, then add CSS, then add JavaScript.

In this post, I’d like to argue that, while the broad version of progressive enhancement is still enormously useful, the narrow version doesn’t make much sense in the modern context of web applications running on smartphones in evergreen browsers. It doesn’t make sense for the western world, it doesn’t make sense for the developing world, and it doesn’t make sense given the state of web tooling. It is a holdover from a bygone era, repeated endlessly by those who have not recognized that the world has moved on without it, and publicly unchallenged by those who have already privately (although perhaps guiltily) abandoned it.

Before making my case, though, let’s explore the meaning of “progressive enhancement” a bit more.

What even is progressive enhancement?

Ask 10 different people, and you’ll likely get 10 different definitions of progressive enhancement. One of the main points of contention, though, is around whether or not a website should work without JavaScript.

In a poll by Remy Sharp, he says, “out of 800 responses, 25% said that progressive enhancement was making the site work without JavaScript.” This viewpoint is apparently shared by PE advocates who disable JavaScript and are disturbed by what they see. (Spoiler alert: many top websites do not bother to make their core functionality work without JavaScript.)

There are plenty of progressive enhancement “moderates,” though, who don’t take such a hard-line stance. Jake Archibald says “each phase of the enhancement needs a user,” and that sometimes a no-JS version wouldn’t have any users at all. Paul Lewis is a big fan of progressive rendering for performance reasons, and given the popularity of server-side React, Ember FastBoot, and Angular 2 universal JavaScript, I’d say plenty of web developers agree with them.

For many proponents of progressive enhancement, however, the issue of JavaScript remains a “magic line that must not be crossed.” I discovered this myself when I somewhat clumsily crossed this line, live on stage at Fronteers Conference in Amsterdam. I had a slide in my talk that read:

In 2016, it’s okay to build a website that doesn’t work without JavaScript.

To me, and to the kind of JavaScript-focused crowd I run with, this isn’t such a controversial statement. For the majority of websites I’ve built in my career, the question of how it functions without JavaScript has been largely irrelevant (except from a performance perspective).

However, it turned out that Fronteers was perhaps the crowd least likely to be amenable to this particular message. When I showed this slide, all hell broke loose.

The condemnation was as swift as it was vocal. Many prominent figures in the web community – Eric Meyer, Sara Soueidan, Jen Simmons – felt compelled not only to disagree with me, but to disagree loudly and publicly. Subtweets and dot-replies ran rampant. As one commentator put it, “you’d swear you had killed a baby panda the way some people react.”

Now, I have nothing against these folks personally. (In fact, I’m a big fan of Jen Simmons’ Web Ahead podcast, and of Sara Soueidan’s articles.) But the fact that their reaction wasn’t just of disagreement but of anger or exasperation is worth dissecting. I believe it harks back to what I said earlier about progressive enhancement being conflated with access – the assumption is that I’m probably just some privileged white dude advocating for a kind of web design that leaves anyone who’s less fortunate out in the cold.

Is that really true, though? Is JavaScript actually harmful for a certain segment of web users? As Jake Archibald pointed out, it’s not really about users who have disabled JavaScript, so who exactly are we helping when we make our websites work without it?

Progressive enhancement for the next billion

Tom Dale (who once famously declared progressive enhancement to be dead, but has softened since then) has a fabulous talk that pretty much cemented my thinking on progressive enhancement, so this next section owes a huge debt to him.

As Benedict Evans has noted, the next billion people who are poised to come online will be using the internet almost exclusively through smartphones. And if Google’s plans with Android One are any indication, then we have a fairly good idea of what kind of devices the “next billion” will be using:

  • They’ll mostly be running Android.
  • They’ll have decent specs (1GB RAM, quad-core processors).
  • They’ll have an evergreen browser and WebView (Android 5+).
  • What they won’t have, however, is a reliable internet connection.

In a world where your lowest common denominator is a very capable browser with a modern JavaScript engine, running on a smartphone that would have been classified as desktop-class ten years ago, but the network is now the bottleneck, what does that mean for progressive enhancement?

Simple: it means that, if you care about those users, you should be focusing on offline-first, i.e. treating the network as an enhancement. After the first load (which yes, should be server-rendered via isomorphic JavaScript), you’ll want to run as much logic as possible on the user’s device so that it can operate autonomously – regardless of whether the network conditions are good, bad, or nonexistent. And today, the way we accomplish this on the web is by using IndexedDB and Service Workers, i.e. with JavaScript.
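
To make that concrete, here’s a minimal sketch of an offline-first Service Worker (the cache name and asset list are hypothetical stand-ins): it pre-caches the app’s core assets at install time, then answers requests from the local cache, touching the network only on a miss.

    // sw.js – a minimal offline-first Service Worker
    // (the cache name and asset list here are hypothetical)
    var CACHE_NAME = 'offline-v1';

    // Pre-cache the app's core assets at install time.
    self.addEventListener('install', function (event) {
      event.waitUntil(
        caches.open(CACHE_NAME).then(function (cache) {
          return cache.addAll(['/', '/app.js', '/app.css']);
        })
      );
    });

    // Answer requests from the cache first; hit the network only on a miss.
    self.addEventListener('fetch', function (event) {
      event.respondWith(
        caches.match(event.request).then(function (cached) {
          return cached || fetch(event.request);
        })
      );
    });

Register it from the page with navigator.serviceWorker.register('/sw.js'), and from then on those assets load from the device itself – whether the network is good, bad, or nonexistent.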

Personally, I’ve found this method remarkably effective for building performant progressive web apps. I find that, by starting with a baseline of a low-end Android phone, throttled to 2G or 3G, and using that as my primary test device, I can build a web application that works quite well on a variety of hardware, browser, and network conditions. Optimizing for such devices tends to naturally lead to a heavily client-side approach, because by avoiding network round-trips the UI interactions become snappy and app-like. And thanks to advances in JavaScript frameworks, it’s easier than ever to move UI logic from the client to the server (using Node.js), in order to achieve a fast initial render.
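
As a sketch of that last point – serving a fast initial render from the same JavaScript codebase – here’s roughly what it looks like with Express and React’s renderToString (the App module and bundle path are hypothetical; other isomorphic frameworks follow the same general shape):

    // server.js – a minimal server-rendering sketch with Express and React
    var express = require('express');
    var React = require('react');
    var ReactDOMServer = require('react-dom/server');
    var App = require('./App'); // hypothetical component shared with the client

    var app = express();
    app.get('/', function (req, res) {
      // Render the initial HTML on the server for a fast first paint,
      // then let the client-side bundle take over.
      var html = ReactDOMServer.renderToString(React.createElement(App));
      res.send('<!doctype html><div id="root">' + html + '</div>' +
               '<script src="/bundle.js"></script>');
    });
    app.listen(3000);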

The insight of offline-first is that, when you optimize for conditions where the network is unavailable, you end up making a better experience for everyone, even those on blazing-fast connections. The local cache is nearly always faster than the network, and even users on supposed “4G” connections will occasionally drop to 2G speeds or go offline entirely, so the local cache is a good bet for them as well.

Offline-first is a form of progressive enhancement that directly targets the baseline experience that a high-quality progressive web app ought to support, rather than focusing on the more reductionist mindset of “first HTML, then CSS, then JavaScript.”

Truly robust web apps

Tom Dale and I aren’t the only ones who have come to this conclusion. The Chrome team has been pushing both for offline-first and the app shell architecture, which advocates for a server-rendered “shell” that then manages most of the actual app content using JavaScript. This is the way that most progressive web apps are being built these days, including applications designed for folks in developing countries, by folks in developing countries.
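
In Service Worker terms, the app shell model is a small refinement of the earlier sketch: cache a minimal shell at install time, answer navigations with it directly, and let the app’s JavaScript render the actual content. Roughly (the shell URL is hypothetical, and assumed to have been cached at install):

    // sw.js – app-shell style fetch handling
    // (assumes a hypothetical '/shell.html' was cached at install time)
    self.addEventListener('fetch', function (event) {
      if (event.request.mode === 'navigate') {
        event.respondWith(
          // Serve the cached shell for every navigation; the app's
          // JavaScript then fills in the real content, online or off.
          caches.match('/shell.html').then(function (shell) {
            return shell || fetch(event.request);
          })
        );
      }
    });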

To demonstrate, here are screenshots of the progressive web apps Housing.com (India), Konga (Nigeria), and Flipkart (India), each with JavaScript deliberately disabled. What you’ll notice is that the authors of these apps have elected to show their script-less users an endless loading state. The “no-JS” case is clearly irrelevant to them, even if the offline case is not. (Each of these apps uses a Service Worker to cache data offline, and works fabulously well when JavaScript is enabled.)

Screenshots of Housing.com, Konga, and Flipkart without JavaScript.

Now, you might argue, as Jeremy Keith has in “Regressive web apps,” that maybe these folks have been led astray by Google’s cheerleading for the app-shell model, and in fact it’d be nice to see some examples of PWAs that don’t require JavaScript. In his words:

“I hope we’ll see more examples of Progressive Web Apps that don’t require JavaScript to render content.”

My question to Jeremy, however, is: why? Why is it considered an unqualified good to make a website that works without JavaScript? Is it possible that this mindset – “start with HTML, add CSS, sprinkle on JavaScript” – is only a best practice in a world of incapable browsers (such as IE6) hooked up to stable desktop connections, and now that we’re in a world of smart, JavaScript-enabled browsers with unreliable connections, we need to re-evaluate our thinking?

I help maintain an open-source project called PouchDB, which enables offline storage and synchronization using (you guessed it) JavaScript. One of the more interesting projects PouchDB has been involved in was the fight against Ebola in West Africa, where it was used in an Angular app to store patient data offline (including symptom and treatment details, as well as photos), which were then synced to the server whenever a connection was re-established. In a region of the world where the network was neither fast nor reliable, this was a key feature.
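
I can’t speak for that app’s exact code, but the core pattern looks something like this PouchDB sketch (the database name, document shape, and server URL are all hypothetical): writes land in a local, on-device database immediately, and a live, retrying sync replicates them whenever a connection exists.

    // a sketch of local-first storage and sync with PouchDB
    // (database name, document shape, and server URL are hypothetical)
    var PouchDB = require('pouchdb'); // or load it via a <script> tag

    // All reads and writes go against the local, on-device database,
    // so the app behaves the same with or without a connection.
    var db = new PouchDB('patients');

    db.put({
      _id: 'patient_0001',
      symptoms: ['fever'],
      treatments: []
    }).catch(console.error);

    // Continuously replicate with the server; 'retry' means replication
    // resumes automatically whenever the connection comes back.
    db.sync('https://example.com/db/patients', {
      live: true,
      retry: true
    }).on('error', console.error);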

Now, even with some very clever usage of AppCache, there’s no way the authors of this app could have built such an offline experience without JavaScript. And yet, it was perfectly adapted for the task at hand, and I’m proud that PouchDB actually played a role in stamping out Ebola. For anyone who is convinced that “first HTML, then CSS, then JavaScript” is the best approach for users in developing countries, I’d encourage them to actually talk to folks building apps for that demographic, and ask them if they don’t find offline-first (with JavaScript!) to be a more effective strategy.

My assertion is that, because of the reality of network and device conditions in those countries, the “HTML-first” approach has become almost completely obsolete (with the minor exception of server-side rendering), and the offline-first approach now reigns supreme. In those markets, PWAs as currently promoted are a big hit, as is clear from a fascinating Opera interview with developers in Nigeria, a Google report by Flipkart on their increased engagement with PWAs, and similar feedback from Konga.

The web is a big tent

On the other hand, I wouldn’t be so presumptuous as to say that I’ve unlocked The One True Way™ to build websites. I don’t believe every website needs to be an offline-first JavaScript app, any more than I believe every website needs to be an HTML5 canvas game, or a static blog, or a WebGL sandbox world, or whatever weirdo WebVR thing I might be strapping onto my face in the future.

When I said that in 2016 it’s okay to build a site that requires JavaScript, what I’m getting at is this: by 2016, the web has fundamentally changed. It’s expanded. It’s blossomed. There are more people building more kinds of websites than ever before, and no one-size-fits-all set of “best practices” can cut the mustard anymore.

There’s still plenty of room on the web for sites that rely primarily on HTML and CSS; in many cases, it’s still the best choice! If that approach is better suited to the skill set of your team, or if you’re primarily focused on static content, then “traditional” progressively enhanced sites are a no-brainer. Such sites are certainly easier to manage and maintain than client-side web apps, and you often get accessibility and performance for free. Declarative languages like HTML and CSS are also easier to learn than imperative ones like JavaScript, and in many cases they’re more robust. There are lots of good reasons to choose this architecture.

Different sites are optimized for different use cases, and it’s not my place to tell folks that they all need to build websites exactly the way I like them built. I certainly don’t think we should be chiding people for not building websites that work without JavaScript, or making damning statements like this one:

“Pages that are empty without JS: dead to history, unreliable for search results, and thus ignorable. No need to waste time reading or responding.”

Attitudes like this one stem from a set of myths about JavaScript that web developers have internalized based on conditions of the past. These days, JavaScript actually does run on search engines, in screen readers, and even in Opera Mini (for a strict but fair 5 seconds). JavaScript is a well-established part of the web platform – and unlike Flash (to which it’s often unflatteringly compared), JavaScript is standardized to ensure it will pass the test of time. Expending extra effort to make your website work without JavaScript is often fruitless; in the cases I mentioned above with PWAs, it can actually lead to a poorer user experience.

But besides just finding these attitudes wrong, I find them toxic. Any community that’s eager to tear each other down at the slightest whiff of unorthodoxy is not a community that I want to be a part of. I want the web to be a place where we celebrate each other’s accomplishments, where we remain ever curious and inquisitive and questioning, and where above all else, we make newcomers (who might not know everything already!) feel welcome. That’s the web I love – a big tent that’s always growing bigger.

Final thoughts

We as a community need to realize that the question of “JavaScript – yes or no?” is less about access and ubiquity, and more about performance and robustness. Only then can we stop all this ugly shaming and vitriol towards those who happen to choose JavaScript as their primary tool for building for the web. I believe that, once the moral and emotional dimension is removed, the issue of JavaScript can be more clearly viewed as just another tradeoff among the many tradeoffs we inevitably make when we build websites. So next time your gut instinct is to blame and shame: try to ask and understand instead.

And to the advocates of progressive enhancement: if you still believe requiring JavaScript is a universally bad idea, then don’t hesitate to voice that opinion! Any idea deserves to be evaluated in a public forum – that’s the only way we can suss out the good ideas from the bad ones. But purely on a strategic level, I would invite you to make your case in a less caustic and condescending way. Communities that welcome outsiders with open arms are usually better at winning converts than those that sneer and denigrate (something about flies and honey and vinegar and all that).

So if you believe in empathy – if you believe that the web is about building good experiences for everyone, regardless of their background or ability – then I would encourage you to demonstrate that empathy, especially towards those you disagree with. To be fair, even those at Fronteers Conference who disagreed with me were perfectly polite and respectful in person; these things usually only get out of hand on Twitter, which is not famous for enabling subtlety or nuance. So keep that in mind, and try to remember the human behind the keyboard.

The web is for everyone. The web platform is for everyone. Let’s keep it that way.

Thanks to Tom Dale, Jan Lehnardt, and Addy Osmani for reviewing a draft of this blog post.

Also, apologies to folks whose tweets I called out in this post, but I consider turnabout to be fair play. 😉 And to clarify: Sara Soueidan was perfectly courteous and respectful towards me online; I wouldn’t lump any of her comments in with the “caustic” ones.

The title is a reference to Frank Zappa.