Progressive enhancement isn’t dead, but it smells funny

Update: this blog post sparked a lively debate. You may want to read the responses from Laurie Voss, Jeremy Keith, Aaron Gustafson, and Christian Heilmann.

Progressive enhancement is a touchy subject. It can be hard to discuss dispassionately because, like accessibility, it’s often framed as an issue of empathy and compassion.

The insinuation is that if you don’t practice progressive enhancement, then maybe you’re just a careless elitist, building slick websites for the wealthy 1% who have modern browsers and fast connections, and offering only a sneering “let them eat cake” to everybody else. Using progressive enhancement can be seen as a moral decision as much as a technical one.

So what exactly is progressive enhancement? At the risk of grossly oversimplifying things, here are the two major interpretations I’ve seen:

  1. Broad version: start with a baseline of functionality, enhance upwards based on capabilities.
  2. Narrow version: start with HTML, then add CSS, then add JavaScript.

In this post, I’d like to argue that, while the broad version of progressive enhancement is still enormously useful, the narrow version doesn’t make much sense in the modern context of web applications running on smartphones in evergreen browsers. It doesn’t make sense for the western world, it doesn’t make sense for the developing world, and it doesn’t make sense given the state of web tooling. It is a holdover from a bygone era, repeated endlessly by those who have not recognized that the world has moved on without it, and publicly unchallenged by those who have already privately (although perhaps guiltily) abandoned it.

Before making my case, though, let’s explore the meaning of “progressive enhancement” a bit more.

What even is progressive enhancement?

Ask 10 different people, and you’ll likely get 10 different definitions of progressive enhancement. One of the main points of contention, though, is around whether or not a website should work without JavaScript.

Remy Sharp ran a poll on this question, and reports that “out of 800 responses, 25% said that progressive enhancement was making the site work without JavaScript.” This viewpoint is apparently shared by PE advocates who disable JavaScript and are disturbed by what they see. (Spoiler alert: many top websites do not bother to make their core functionality work without JavaScript.)

There are plenty of progressive enhancement “moderates,” though, who don’t take such a hard-line stance. Jake Archibald says “each phase of the enhancement needs a user,” and that sometimes a no-JS version wouldn’t have any users at all. Paul Lewis is a big fan of progressive rendering for performance reasons, and given the popularity of server-side React, Ember FastBoot, and Angular 2 universal JavaScript, I’d say plenty of web developers agree with them.

For many proponents of progressive enhancement, however, the issue of JavaScript remains a “magic line that must not be crossed.” I discovered this myself when I somewhat clumsily crossed this line, live on stage at Fronteers Conference in Amsterdam. I had a slide in my talk that read:

In 2016, it’s okay to build a website that doesn’t work without JavaScript.

To me, and to the kind of JavaScript-focused crowd I run with, this isn’t such a controversial statement. For the majority of websites I’ve built in my career, the question of how it functions without JavaScript has been largely irrelevant (except from a performance perspective).

However, it turned out that Fronteers was perhaps the crowd least likely to be amenable to this particular message. When I showed this slide, all hell broke loose.

The condemnation was as swift as it was vocal. Many prominent figures in the web community – Eric Meyer, Sara Soueidan, Jen Simmons – felt compelled not only to disagree with me, but to disagree loudly and publicly. Subtweets and dot-replies ran rampant. As one commentator put it, “you’d swear you had killed a baby panda the way some people react.”

Now, I have nothing against these folks personally. (In fact, I’m a big fan of Jen Simmons’ Web Ahead podcast, and of Sara Soueidan’s articles.) But the fact that their reaction wasn’t just of disagreement but of anger or exasperation is worth dissecting. I believe it harks back to what I said earlier about progressive enhancement being conflated with access – the assumption is that I’m probably just some privileged white dude advocating for a kind of web design that leaves anyone who’s less fortunate out in the cold.

Is that really true, though? Is JavaScript actually harmful for a certain segment of web users? As Jake Archibald pointed out, it’s not really about users who have disabled JavaScript, so who exactly are we helping when we make our websites work without it?

Progressive enhancement for the next billion

Tom Dale (who once famously declared progressive enhancement to be dead, but has softened since then) has a fabulous talk that pretty much cemented my thinking on progressive enhancement, so this next section owes a huge debt to him.

As Benedict Evans has noted, the next billion people who are poised to come online will be using the internet almost exclusively through smartphones. And if Google’s plans with Android One are any indication, then we have a fairly good idea of what kind of devices the “next billion” will be using:

  • They’ll mostly be running Android.
  • They’ll have decent specs (1GB RAM, quad-core processors).
  • They’ll have an evergreen browser and WebView (Android 5+).
  • What they won’t have, however, is a reliable internet connection.

In a world where your lowest common denominator is a very capable browser with a modern JavaScript engine, running on a smartphone that would have been classified as desktop-class ten years ago, but the network is now the bottleneck, what does that mean for progressive enhancement?

Simple: it means that, if you care about those users, you should be focusing on offline-first, i.e. treating the network as an enhancement. After the first load (which yes, should be server-rendered via isomorphic JavaScript), you’ll want to run as much logic as possible on the user’s device so that it can operate autonomously – regardless of whether the network conditions are good, bad, or nonexistent. And today, the way we accomplish this on the web is by using IndexedDB and Service Workers, i.e. with JavaScript.
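
To make that concrete, here’s a minimal sketch of the Service Worker half of that approach; the cache name and asset list are hypothetical:

```js
// sw.js: a minimal offline-first Service Worker (sketch).
const CACHE = 'my-app-v1';

self.addEventListener('install', (event) => {
  // Download the app's core assets once, up front, so that later
  // visits can boot entirely from the local cache.
  event.waitUntil(
    caches.open(CACHE).then((cache) =>
      cache.addAll(['/', '/app.js', '/styles.css'])
    )
  );
});
```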

Personally, I’ve found this method remarkably effective for building performant progressive web apps. I find that, by starting with a baseline of a low-end Android phone, throttled to 2G or 3G, and using that as my primary test device, I can build a web application that works quite well on a variety of hardware, browser, and network conditions. Optimizing for such devices tends to naturally lead to a heavily client-side approach, because by avoiding network round-trips the UI interactions become snappy and app-like. And thanks to advances in JavaScript frameworks, it’s easier than ever to move UI logic from the client to the server (using Node.js), in order to achieve a fast initial render.

The insight of offline-first is that, when you optimize for conditions where the network is unavailable, you end up making a better experience for everyone, even those on blazing-fast connections. The local cache is nearly always faster than the network, and even users on supposed “4G” connections will occasionally experience some amount of 2G speeds or offline, so the local cache is a good bet for them as well.
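
In Service Worker terms, betting on the local cache means a cache-first fetch strategy, which might look like this (continuing the hypothetical worker sketched above):

```js
// sw.js (continued): answer every request from the local cache when
// possible, and fall back to the network only on a cache miss.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then(
      (cached) => cached || fetch(event.request)
    )
  );
});
```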

Offline-first is a form of progressive enhancement that directly targets the baseline experience that a high-quality progressive web app ought to support, rather than focusing on the more reductionist mindset of “first HTML, then CSS, then JavaScript.”

Truly robust web apps

Tom Dale and I aren’t the only ones who have come to this conclusion. The Chrome team has been pushing both for offline-first and the app shell architecture, which advocates for a server-rendered “shell” that then manages most of the actual app content using JavaScript. This is the way that most progressive web apps are being built these days, including applications designed for folks in developing countries, by folks in developing countries.
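
In skeletal form, the app shell pattern looks something like this: the server pre-renders only the “furniture,” and client-side JavaScript fetches and renders the actual content (the endpoint and markup here are hypothetical):

```js
// app.js: runs inside a server-rendered shell that contains a header,
// navigation, and an empty <main id="content"> element.
fetch('/api/feed')
  .then((response) => response.json())
  .then((items) => {
    const list = items
      .map((item) => '<li>' + item.title + '</li>')
      .join('');
    document.getElementById('content').innerHTML = '<ul>' + list + '</ul>';
  });
// Without JavaScript, only the shell ever appears, which explains the
// "endless loading" screenshots below.
```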

To demonstrate, here are screenshots of the progressive web apps Housing.com (India), Konga (Nigeria), and Flipkart (India), each with JavaScript deliberately disabled. What you’ll notice is that the authors of these apps have elected to show their script-less users an endless loading state. The “no-JS” case is clearly irrelevant to them, even if the offline case is not. (Each of these apps uses a Service Worker to cache data offline, and works fabulously well when JavaScript is enabled.)

Screenshots of Housing.com, Konga, and Flipkart without JavaScript.

Now, you might argue, as Jeremy Keith has in “Regressive web apps,” that maybe these folks have been led astray by Google’s cheerleading for the app-shell model, and in fact it’d be nice to see some examples of PWAs that don’t require JavaScript. In his words:

“I hope we’ll see more examples of Progressive Web Apps that don’t require JavaScript to render content.”

My question to Jeremy, however, is: why? Why is it considered an unqualified good to make a website that works without JavaScript? Is it possible that this mindset – “start with HTML, add CSS, sprinkle on JavaScript” – is only a best practice in a world of incapable browsers (such as IE6) hooked up to stable desktop connections, and now that we’re in a world of smart, JavaScript-enabled browsers with unreliable connections, we need to re-evaluate our thinking?

I help maintain an open-source project called PouchDB, which enables offline storage and synchronization using (you guessed it) JavaScript. One of the more interesting projects PouchDB has been involved in was the fight against Ebola in West Africa, where it was used in an Angular app to store patient data offline (including symptom and treatment details, as well as photos), which were then synced to the server whenever a connection was re-established. In a region of the world where the network was neither fast nor reliable, this was a key feature.
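
The heart of that pattern can be sketched in a few lines of PouchDB; the database names and remote URL here are made up:

```js
// Writes go to a local, on-device database first, so saving patient
// data succeeds even with no connection at all.
const db = new PouchDB('patients');
db.put({ _id: 'patient-123', symptoms: ['fever'], treatments: [] });

// Live, retrying sync propagates changes to the server whenever a
// connection is re-established.
db.sync('https://example.com/db/patients', { live: true, retry: true });
```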

Now, even with some very clever usage of AppCache, there’s no way the authors of this app could have built such an offline experience without JavaScript. And yet, it was perfectly adapted for the task at hand, and I’m proud that PouchDB actually played a role in stamping out Ebola. For anyone who is convinced that “first HTML, then CSS, then JavaScript” is the best approach for users in developing countries, I’d encourage them to actually talk to folks building apps for that demographic, and ask them if they don’t find offline-first (with JavaScript!) to be a more effective strategy.

My assertion is that, because of the reality of network and device conditions in those countries, the “HTML-first” approach has become almost completely obsolete (with the minor exception of server-side rendering), and the offline-first approach now reigns supreme. In those markets, PWAs as currently promoted are a big hit, which is clear from a fascinating Opera interview with developers in Nigeria, a Google report by Flipkart on their increased engagement with PWAs, and similar feedback from Konga.

The web is a big tent

On the other hand, I wouldn’t be so presumptuous as to say that I’ve unlocked The One True Way™ to build websites. I don’t believe every website needs to be an offline-first JavaScript app, any more than I believe every website needs to be an HTML5 canvas game, or a static blog, or a WebGL sandbox world, or whatever weirdo WebVR thing I might be strapping onto my face in the future.

When I said that in 2016 it’s okay to build a site that requires JavaScript, what I’m getting at is this: by 2016, the web has fundamentally changed. It’s expanded. It’s blossomed. There are more people building more kinds of websites than ever before, and no one-size-fits-all set of “best practices” can cut the mustard anymore.

There’s still plenty of room on the web for sites that rely primarily on HTML and CSS; in many cases, it’s still the best choice! Especially if it’s better suited to the skill set of your team, or if you’re primarily focused on static content, then “traditional” progressively-enhanced sites are a no-brainer. Such sites are certainly easier to manage and maintain than client-side webapps, plus you often get accessibility and performance for free. Declarative languages like HTML and CSS are also easier to learn than imperative ones like JavaScript, and in many cases they’re also more robust. There are lots of good reasons to choose this architecture.

Different sites are optimized for different use cases, and I wouldn’t be so presumptuous as to tell folks that they all need to be building websites exactly the way I like them built. I certainly don’t think we should be chiding people for not building websites that work without JavaScript, or to make damning statements like this one:

“Pages that are empty without JS: dead to history, unreliable for search results, and thus ignorable. No need to waste time reading or responding.”

This attitude, and others like it, stems from a set of myths about JavaScript that web developers have internalized based on conditions of the past. These days, JavaScript actually does run on search engines, in screen readers, and even in Opera Mini (for a strict but fair 5 seconds). JavaScript is a well-established part of the web platform – and unlike Flash (to which it’s often unflatteringly compared), JavaScript is standardized to ensure it will pass the test of time. Expending extra effort to make your website work without JavaScript is often not just fruitless; in the cases I mentioned above with PWAs, it can actually lead to a poorer user experience.

But besides just finding these attitudes wrong, I find them toxic. Any community that’s eager to tear each other down at the slightest whiff of unorthodoxy is not a community that I want to be a part of. I want the web to be a place where we celebrate each other’s accomplishments, where we remain ever curious and inquisitive and questioning, and where above all else, we make newcomers (who might not know everything already!) feel welcome. That’s the web I love – a big tent that’s always growing bigger.

Final thoughts

We as a community need to realize that the question of “JavaScript – yes or no?” is less about access and ubiquity, and more about performance and robustness. Only then can we stop all this ugly shaming and vitriol towards those who happen to choose JavaScript as their primary tool for building for the web. I believe that, once the moral and emotional dimension is removed, the issue of JavaScript can be more clearly viewed as just another tradeoff among the many tradeoffs we inevitably make when we build websites. So next time your gut instinct is to blame and shame: try to ask and understand instead.

And to the advocates of progressive enhancement: if you still believe requiring JavaScript is a universally bad idea, then don’t hesitate to voice that opinion! Any idea deserves to be evaluated in a public forum – that’s the only way we can suss out the good ideas from the bad ones. But purely on a strategic level, I would invite you to make your case in a less caustic and condescending way. Communities that welcome outsiders with open arms are usually better at winning converts than those that sneer and denigrate (something about flies and honey and vinegar and all that).

So if you believe in empathy – if you believe that the web is about building up good experiences for everyone, regardless of their background or ability – then I would encourage you to demonstrate that empathy, especially towards those you disagree with. Thankfully, I will admit that even those at Fronteers Conference who disagreed with me were perfectly polite and respectful in person; often these things only get out of hand on Twitter, which is not famous for enabling subtlety or nuance. So keep that in mind, and try to remember the human behind the keyboard.

The web is for everyone. The web platform is for everyone. Let’s keep it that way.

Thanks to Tom Dale, Jan Lehnardt, and Addy Osmani for reviewing a draft of this blog post.

Also, apologies to folks whose tweets I called out in this post, but I consider turnabout to be fair play. 😉 And to clarify: Sara Soueidan was perfectly courteous and respectful towards me online; I wouldn’t lump any of her comments in with the “caustic” ones.

The title is a reference to Frank Zappa.

65 responses to this post.

  1. I’m in the camp of those like Eric Meyer and Sara Soueidan, but I will admit that this is a tremendously compelling argument. I’m an advocate for web performance, and I see performance as a critical user experience problem in that until a page loads, there is no experience. My problem with the state of JavaScript as it is today is that we’re front-loading a ton of assets on the user the first time they visit a page. I want content to be as accessible and usable as quickly as possible, and if JavaScript is getting in the way of that, I’m loath to embrace the added weight.

    That said, there’s room for what you’re talking about. I think it’s pretty great that we’re using technology like service workers to manage offline experiences. It’s still something I want to do for my own blog as it matures. What I want to challenge, however, is the “huge JavaScript framework by default” mindset that is pervasive in this industry. App-like websites are great, and in your Ebola use case, they’re immensely useful. We just need to be cognizant of poor internet infrastructure quality, and of users on restricted data plans. Poor internet infrastructure quality and access to new features are often correlated. A case in point of this (albeit not related to JavaScript) is HTTP/2: https://jeremywagner.me/blog/http2-in-developing-nations

    Things are getting better all the time, though, and the fact that we’re having these arguments now means that a shift in how we approach these problems is upon us. I feel as though I’m astride the chasm, but still leaning toward minimalist experiences that I know will just work, over “app-like” user experiences that may be prone to breakage.


    • I agree; the JS community still has a lot of work to do. When I analyze the performance of large webapps, front-loading a huge amount of JavaScript is often the performance killer. I see a few potential solutions:

      • optimize-js, a tool I wrote that can improve the JS parsing time by an order of magnitude in some cases
      • web worker rendering, which divides JS bundles into a worker component (running on a background thread) and a UI component
      • service worker-heavy architecture, which is mentioned in my Fronteers talk
      • PRPL, an interesting pattern being promoted by Polymer
      • Better education on how to do things like Webpack code-splitting and Browserify factor-bundles
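
      To give a flavor of that last point, here’s a minimal sketch of code-splitting with Webpack 2’s dynamic import() (module paths are hypothetical):

      ```js
      // main.js: the comments widget lives in its own chunk; Webpack 2
      // emits it separately, and the browser downloads it on demand.
      function showComments() {
        // import() returns a promise and marks a split point for Webpack.
        return import('./comments-widget').then((widget) => {
          widget.render(document.getElementById('comments'));
        });
      }

      document.getElementById('show-comments')
        .addEventListener('click', showComments);
      ```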

      All in all, I think there are solutions to the problem without throwing the JS baby out with the bathwater. The JS community still does need to learn a lot of discipline, though; I agree. :)


      • Part of the problem isn’t just the onerous execution phase, it’s also the sheer weight of JavaScript in today’s typical website. Deferred loading of scripts can help alleviate the problem, but I still believe that a proactive solution in the way of asking “does feature [x] really need JavaScript?” could prove more useful at the outset, as well as providing noscript fallbacks where possible and practical. But alas, now I’m just rehashing my old arguments. Don’t mind me.

        By the way, I didn’t know you wrote optimize-js. I’m about to start using that on my own site. If I had discovered it prior to my book going into production, I would have mentioned it. It’s a great tool, and it deserves more press than I’ve seen it get. You’re doing a lot to help solve performance issues around JavaScript, and I thank you for that.

      • optimize-js is just a first stab. :) I’d really like to see more solutions that 1) research how JS engines actually work, and 2) try to exploit well-established optimizations. This is a hugely unexplored field and I’m excited to see more optimize-js competitors.

        JS page weight can be caused by tons of things. Sometimes it’s just the sheer number of modules, but it seems like Webpack 2 will have some solutions to that, with built-in tree-shaking and module-inlining. Other times, as you say, it’s because of overusing JS when a simple CSS/HTML solution will do. Anyway, I’m glad that the JS community is finally starting to pay attention to bundle bloat.

    • Developing in South Africa, I know all about data caps and plans and how they charge you for every MB you spend… and I can say without a doubt, caps are there and used up because of VIDEO. Full stop. JavaScript makes up practically FUCKALL of data caps in a world of the video hypnotised…


  2. Posted by Baldur Bjarnason (@fakebaldur) on October 13, 2016 at 8:48 AM

    What is exasperating, and the reason why so many people have an emotional response to this subject, isn’t the existence of frameworks like React or the increasing popularity of a variety of web apps.

    Web apps in general are an unadulterated good because they are in many ways more flexible and accessible than native apps, especially if you have a slow device with limited storage space.

    So JS-required web apps are a good thing. They are generally replacing less accessible native apps that have less reach and a substantially higher storage footprint.

    The problem—what annoys so many—is how popular it is to use heavy frameworks like React or Angular to make normal websites. These are websites that are nothing more than a collection of interlinked pages with a small set of forms, and the cost (in terms of performance, reliability, and speed) of implementing these sites as JS-required single-page web apps using React or Angular is phenomenal.

    This is not an uncommon practice. In fact, it feels like the tech industry has completely abandoned the basic HTML+links+forms practice even when it is objectively better and can be made offline-first with very simple use of a service worker.

    For example, I personally know of a couple of ecommerce sites that have been implemented as React/Angular apps even though doing so had an objective (and substantial) impact on the site’s conversion rate and therefore the whole company’s ROI. But the developers won’t listen and management buys into the React/Angular hype. Meanwhile the sites of these companies are either underperforming or losing money. And when everything falls apart, the web is blamed and the hype shifts back to native.

    That’s why people get emotional.


    • I believe the reason this has happened is that the browser has become a really great development environment, thanks to advances in things like the Chrome dev tools. So nowadays it’s incredibly easy to get started with a client-side Angular or React app, because the tooling around it is so ergonomic, changes are shown instantly within the browser, etc.

      The solution in my mind is to promote server-side rendering (aka isomorphic JavaScript) rather than rush to a purely server-side framework. These days it’s fairly easy to just find a breakpoint in your JS codebase and render any percentage of it on the server-side. Angular 2 is also doing a great job of this by making “ahead of time” (AoT) compilation the default for production mode, which in the experiments I’ve seen can cut the JS bundle size by as much as 90%.
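
      As a rough sketch of what that server-side rendering looks like with Express and React (the App component and bundle path are hypothetical, and JSX is assumed to be compiled by Babel):

      ```js
      // server.js: render the same component tree the client uses, so the
      // first paint doesn't wait for the JS bundle to download and run.
      import express from 'express';
      import React from 'react';
      import { renderToString } from 'react-dom/server';
      import App from './App'; // hypothetical root component

      const app = express();

      app.get('/', (req, res) => {
        const html = renderToString(<App />);
        res.send('<!doctype html><div id="root">' + html +
          '</div><script src="/bundle.js"></script>');
      });

      app.listen(3000);
      ```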

      Or yes, in cases where a static site makes the most sense, you should argue for it! :) JS frameworks are trendy, but they aren’t always the best option, and I think folks are waking up to that as well.


      • Posted by Baldur Bjarnason (@fakebaldur) on October 13, 2016 at 9:57 AM

        The solution in my mind is to promote server-side rendering (aka isomorphic JavaScript) rather than rush to a purely server-side framework.

        Isomorphic Javascript is much more complicated and therefore more expensive to use in production for a small to medium-sized software company than most established server-side frameworks. (Obviously YMMV. Large organisations have access to more resources and capabilities.)

        Many of these server-side frameworks have, literally, more than a decade’s worth of testing, tooling, and documentation to build on. They are both easier to develop with and—for these kinds of sites—faster. And service workers make the prospect of progressively enhancing a server-side framework to work offline much simpler than it is to make Angular, React, or Ember perform well for a non-app website.

        (Most institutional and corporate websites and web services are largely just links and simple forms and are a much higher percentage of the web than web apps are ever likely to be.)

        A lot of new web developers (especially those who work for VC-funded companies) don’t seem to appreciate just how well tested and mature frameworks like Rails, Flask, or Django are for developing the small and medium-sized websites that make up a large part of the web.

        So it’s the other way around: we have 10-15 year old server-side frameworks with mature tooling and extensive documentation and instead of sitting down and getting to know boring old tech that works, new web developers are rushing to use unstable and immature isomorphic frameworks instead. Or worse yet, they build everything as a JS-only web app with no server-rendered HTML to bootstrap from.

        And, I’d like to repeat: I love web apps, even when they require JS. They are a huge improvement IMO over most native apps. I just think it’s a very costly mistake to implement every site as a web app. Constantly avoiding the ‘boring old’ is a very expensive practice in software development.

      • I agree with you! Client-side JS frameworks aren’t the solution to every problem, and the server-side frameworks are certainly more battle-tested. Everything is situation- and requirements-dependent. :)

  3. Hi,

    I snarked a bit during the exchange on Twitter around this slide, (specifically about how if a screen reader can’t work with properly written JS then it’s the screen reader’s problem), so I wanted to stop by and thank you for writing this article. Building bridges and all that. Thanks especially for providing some context for the slide that made people so angry. Context gets difficult to follow on Twitter.


    • No problem; it’s okay. :) It feels good to show our solidarity with the folks we agree with online, but sometimes a casualty is reasoned debate with the opposing side. All of us can make an effort to try to do better.


  4. Hi Nolan,

    Great meeting you in Amsterdam. Thanks for highlighting Tom’s Responsive Field Day talk. It was one of my favorites from our conference.

    One thing I wanted to ask you last week, but never got a chance to, was whether you had seen Jeremy Keith’s talk from the same conference. We scheduled the two talks back-to-back on purpose. Many of the questions you pose are things he talked about.

    Cheers,

    Jason


    • Nice to meet you, too! :) I haven’t, but I’ll check it out, thanks.


    • BTW, I wanted to thank you for the fact that your comment form continued to work while it appears CSS and JS failed for some reason.

      And yes, I did get a little chuckle out of the fact that, when I submitted a comment, it fit right into the topic of this post.


      • BTW, if it isn’t clear, that was not staged at all. I simply wanted to pass on the video and stay out of the middle of the conversation. I took a screenshot because it tickled me. :)

      • Heh yeah, hosted WordPress trolls me a lot with that too. :) Dunno if it’s the cache policy on the CSS/JS, or the bundling policy, or what.

        In any case, yes, I would feel very silly if my static blog relied on JS for rendering, and it’s certainly nice that you could still use it when it failed. :)

  5. Posted by Jeremy on October 13, 2016 at 9:42 AM

    I sincerely hope all those exasperated Tweeters read and respond to this blog (and post those responses on sites like EchoJS so I can find them). I’ve been in total agreement with the points you made here for a while, but I’d be curious to see if those on the other side can translate all of their passion into reasonable arguments for their position.


  6. First of all, thanks for writing down your thoughts. While I agree with some points I also can’t really agree with some.

    (Spoiler alert: many top websites do not bother to make their core functionality work without JavaScript.)

    Yes, many top websites only work with JavaScript, which I don’t think is a practice we should all follow. At least not if we define JavaScript as client-side JavaScript and don’t involve server-side JavaScript. Many big sites are also not optimized, e.g. https://twitter.com/molily/status/783767319470374916 and, I think you’ll agree, we shouldn’t build our sites by following that example.

    As Jake Archibald pointed out, it’s not really about users who have disabled JavaScript, so who exactly are we helping when we make our websites work without it?

    Everyone who uses an ad blocker, which may prevent your JavaScript from running, everyone on a poor connection where the JavaScript takes too long on the first load, and many more. It is not about “no JavaScript”; it is about what happens when the JavaScript doesn’t work for the user.

    Offline-first is a form of progressive enhancement that directly targets the baseline experience that a high-quality progressive web app ought to support, rather than focusing on the more reductionist mindset of “first HTML, then CSS, then JavaScript.”

    I am all for offline-first (although I don’t really like the term, as it is only true for the second visit), but this doesn’t mean we shouldn’t focus on the core principles of PE anymore. I really like Service Worker, for example; one of the reasons is that it is a perfect example of PE for me. If a browser supports Service Worker, you get a performance boost if done right; if the browser doesn’t support it, users get the same experience as before. Nobody is harmed, but some (many more in the future) get an enhanced experience.
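
    In code, that enhancement boils down to a single feature check (a sketch):

    ```js
    // Browsers with Service Worker support get offline caching and a
    // performance boost; everyone else silently keeps the same
    // network-dependent experience as before.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js');
    }
    ```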

    My question to Jeremy, however, is: why? Why is it considered an unqualified good to make a website that works without JavaScript? Is it possible that this mindset – “start with HTML, add CSS, sprinkle on JavaScript” – is only a best practice in a world of incapable browsers (such as IE6) hooked up to stable desktop connections, and now that we’re in a world of smart, JavaScript-enabled browsers with unreliable connections, we need to re-evaluate our thinking?

    From what I understand, Jeremy doesn’t mean “works without JavaScript”, but rather “render the core content if JavaScript fails”. For me this means that on first load you should always render the core content server-side (be it static HTML, via PHP, or via Node.js). A smart, JavaScript-enabled browser is only smart if the JavaScript works perfectly fine; otherwise the browser is rather dumb. I am all for optimizing for unreliable connections, and rendering the core content server-side on first load will always be faster, so why not use this instead of rendering it client-side with JavaScript?

    We as a community need to realize that the question of “JavaScript – yes or no?” is less about access and ubiquity, and more about performance and robustness.

    Yes, it is about performance and robustness, but more importantly about accessibility and being prepared for failures. And JavaScript often fails, and all optimizations won’t help the user in that case, as they simply can’t use the site anymore if you are not prepared for that scenario.

    To sum up my thoughts:

    1) Render the core part of the site server-side (and of course this can also be done with JavaScript, e.g. Node.js) on first view. From there on you can render additional content with JavaScript. And, of course, you can enhance the regular links on your site and use JavaScript to transition between pages.
    2) Never try to imitate native HTML with JavaScript.
    3) Accessibility should always be more important than performance or other constraints.
    4) Use JavaScript, but use it wisely and don’t assume it will always work.
    5) Be nice :-)

    Thanks again for writing this and keep up this nice conversation between different mindsets.


    • Thank you for articulating these viewpoints. :) I feel like all these points have largely been covered by the existing literature on PE, though; my counterpoint is that they’re not always applicable in every use case, nor is it some kind of high moral imperative, as much of the rhetoric seems to suggest. Especially for the case of poor connections, I actually feel like the “HTML, then CSS, then JS” technique does a poor job of serving those users.


      • Posted by justmarkup on October 13, 2016 at 11:13 AM

        I am curious why you think the “HTML, then CSS, then JS” technique does a poor job of serving those users.

        I haven’t seen any example where rendering HTML (and the core content) client-side via JavaScript is faster than rendering it server-side. Yes, rendering content after the first visit via JavaScript is faster, but not on the first view.

  7. Posted by adactio on October 13, 2016 at 10:36 AM

    Nolan, I think we are in complete agreement. I am very much a proponent of what you describe as the “broad version” of progressive enhancement, i.e. progressive enhancement as a process:

    “start with a baseline of functionality, enhance upwards based on capabilities.”

    In fact, I don’t know of anyone who espouses the “narrow” version you describe.

    In answer to your question:

    “My question to Jeremy, however, is: why? Why is it considered an unqualified good to make a website that works without JavaScript?”

    That framing makes it sound like it’s a binary choice: either the website works or it doesn’t. That’s not what I’m suggesting. I’m advocating that the core functionality of a website (which can be as simple as reading some text) should be available without JavaScript (because, let’s face it, that core functionality doesn’t require JavaScript). But there are plenty of enhancements from there that can and should require JavaScript. In the case of the app shell model, it’s really, really close to providing the core functionality (displaying content), but as you demonstrate in your examples, they stop just short, instead rendering only the site furniture but leaving the content to be pulled in via JavaScript. I’m suggesting that line could be moved ever so slightly for the initial render.

    My talk from Responsive Field Day (the same one that Tom spoke at), that Jason linked to above, goes into more detail:

    In short, I’m not saying that everything should work without JavaScript, just the core functionality …because that core functionality doesn’t require JavaScript and can be done at the more resilient declarative layer. Declarative languages like HTML and CSS are more forgiving and more resilient than an imperative language like JavaScript—I won’t go into that in too much detail here, but you can check out this video for more on that:

    https://vimeo.com/166140718

    I’m not suggesting we don’t use JavaScript, simply that we don’t rely on JavaScript for tasks better suited to HTML.

    “My assertion is that, because of the reality of network and device conditions in those countries, the “HTML-first” approach has become almost completely obsolete (with the minor exception of server-side rendering)”

    I don’t think that’s a minor exception. I think it’s crucial. Fortunately, as you point out, most frameworks are also coming to that conclusion: Ember FastBoot, Angular 2 Universal, React, etc.

    I agree completely that we should be focusing our efforts on offline-first approaches that don’t rely on an inherently fragile network. But I don’t believe we should swap out reliance on one fragile part of our stack (the network) for a different fragile part of our stack (client-side JavaScript). But the good news is that we don’t have to. Your post makes it sound like we need to choose between progressive enhancement and offline-first, but that’s not the case at all.

    I’m excited about offline-first precisely because I see it as a form of progressive enhancement. I don’t think the app-shell model—as currently popularised—demonstrates that, but it’s close. Very close.

    Basically, everything Michael says in the preceding comment: that.


    • Thanks for the response! I haven’t seen those videos but will check them out now. :) And FWIW I tried really hard not to turn you into a strawman. ;)

      I believe there are fundamental technical reasons why it’s difficult to do both offline-first and “full functionality without JS,” which is why most PWAs are opting for the app-shell model. What it boils down to IMO is that offline sync is a notoriously tricky problem – from working on PouchDB, I’ve learned that the most reliable method is something along the lines of Git (where conflicting changes are propagated to all nodes), but many people don’t recognize this inherent complexity of sync and opt for naïve methods like last-write-wins instead (which is a recipe for random data loss, although perfectly valid in some cases).
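
      For what it’s worth, this is why PouchDB surfaces conflicts for the app to resolve instead of silently picking a winner; a rough sketch, assuming a PouchDB instance named db and a hypothetical document ID:

      ```js
      // Ask for conflicting revisions explicitly; the app, not the
      // database, decides how to merge them.
      db.get('patient-123', { conflicts: true }).then((doc) => {
        const losingRevs = doc._conflicts || [];
        // Fetch each losing revision so app-specific merge logic can run;
        // naive last-write-wins would simply ignore these.
        return Promise.all(
          losingRevs.map((rev) => db.get('patient-123', { rev: rev }))
        );
      });
      ```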

      Now, if we assume an autonomous node that may or may not be out-of-sync with the server, the question of “how do we render the state both from the offline cache and as a server-render” becomes nearly impossible to answer. Send the whole state as a cookie? Accept that the server and the client may have different answers? I think maybe Ember FastBoot is closest to solving this problem, but I’m not sure it’s a silver bullet for every situation.

      I definitely think that the app-shell model can be improved upon, and that many of the PWAs with “infinite loading” screens for no-JS could perhaps progressively render the content as well as the shell. (E.g. for Pokédex, the data is largely static, so I do indeed just render both the content and the shell in the first view.) However, I still think it’s rarely worth it to make the full functionality work without JavaScript; it’s all about tradeoffs, and it seems to me that in many cases, building up an entire fork of the codebase where you have basically a server-side version as well as a client-side version is just not worth it.

      I also prefer for these tradeoffs to be discussed dispassionately, not as a moral imperative, which is the main point I was trying to get across in my post. FWIW you don’t seem to be one of the PE advocates who has been focusing on the “blame and shame” approach, which I appreciate a lot. :) Thanks again for the thoughtful reply!


      • Posted by adactio on October 13, 2016 at 4:17 PM

        “I believe there are fundamental technical reasons why it’s difficult to do both offline-first and “full functionality without JS,””

        You’re absolutely right …which is why it’s a good thing nobody is suggesting “full functionality without JS”. Core functionality without JS, on the other hand: that’s a different story.

        Can we please acknowledge this nuance? Nobody, but nobody, is suggesting that all functionality needs to be available without JavaScript. If that were the case, then of course offline-first would be impossible without JavaScript. But as multiple people keep repeating over and over here in the comments, the question is just about having something be available before JavaScript kicks in (or doesn’t kick in, depending on the network/device/browser/operator conditions).

        Please stop framing this as a binary choice—it’s far more subtle than that.

      • Fair! I should choose my wording better. :) I still think though that the app-shell model is really hard to reconcile even with “core functionality available without JS,” but maybe some clever app developer will prove me wrong. ;)

      • Posted by kethinov on October 14, 2016 at 10:44 AM

        A range of issues I see here.

        First, of course it’s hard to progressively enhance the app shell model. The app shell model is an antipattern. The first page load should, in almost all cases, be server-rendered. Then the client takes over if JS is available. Not only does this make it easier to do PE, it also improves time to first render.

        Next, offline is an enhancement, not core functionality, as Jeremy Keith said. Obviously it’s fine to require JS for offline features, but you should not require JS if someone doesn’t bring service worker support to the table.

        Moreover, as so often has to be repeated, this isn’t necessarily about users who bring old browsers to the table or turn off JS. That’s just a shorthand for talking about this. It’s also about people who bring evergreen browsers to the table and still want to use your site when your CDN that serves the JS goes down. (Jake Archibald has a somewhat infamous story about this at Google.) Plus a range of other similar scenarios.

        Falling back on server rendering and doing PE by default provides a layer of fault tolerance that PE advocates generally argue should always be done unless there is a compelling reason not to.

        So, as I asked on Twitter, the right question is not “why do an HTML-first model,” the right question is “why not do an HTML-first model?” We need to preserve PE as the default and construct good, metrics-based reasons to abandon it. Instead, seemingly as a matter of cargo cult inertia, half of us seem to have abandoned PE as the default and demand reasons to bother. That is wrong. PE is the default. Not using it needs justifications on a per-app basis.

        And for what it’s worth, I’ve seen good justifications before for certain apps (e.g. Google Maps), but I’ve never seen a justification that’s convinced me that PE-busting techniques like app shell ought to be the default for your standard run of the mill CRUD apps that most people are bashing out with Angular, React, Ember, etc.

  8. I do ongoing work on an ecommerce website that is used by about 80,000 people per month in Africa who access it with Opera Mini on feature phones.

    Perhaps surprisingly Opera Mini features strongly in our stats for London too!

    Last time I heard a figure it was 300,000,000 Opera Mini users worldwide.

    So where I disagree with you – and why for me PE has to mean HTML, CSS, then JS – is the baseline.

    I think it’s wishful thinking to imagine the next billion people to go online buying nothing worse than a half decent Android.

    Honestly, if our competitors build an Angular app that relies on JS to show everything after the first load I’ll be delighted.


    • This is a good point, thank you. So you’ve found that the JS capabilities of Opera Mini are just not up to snuff, or that users have been burned so badly by the non-Mini experience that they wouldn’t trust your site if you build a progressive web app? I kind of wonder if the current crop of PWAs doesn’t demonstrate that a responsible use of JavaScript could convince more users to ditch the proxy browsers and just allow a regular mobile browser to do its job (but maybe I’m being naïve! :)).


      • The site is actually in the process of becoming a progressive web app; it should be finished in the next couple of days, which is very exciting for us!

        But by PWA I don’t mean one server render then hand everything to the client, I mean some pre-caching of static assets, read through caching of HTML and static content that’s not already cached, reading from the cache where possible, a default offline page and a manifest file.

        That means my Nexus 6P gets an awesome PWA using a service worker, and my Nokia 215 can be used to read content and pay for things (core experience) using the network.

        I’m assuming the reason people use Opera Mini is because it comes bundled with feature phones, and feature phones are cheap. If I’m right no amount of tech is going to convince those people to move to a smart phone.

        Re. Geoffrey’s point below, I don’t fully understand it, but we have Opera Mini on roughly the same numbers as Chrome in Uganda, and 5 times the Chrome usage in Kenya.

        Horrifyingly, there’s a long tail containing a couple of hundred every month on Blackberry ;)

      • That’s pretty fascinating – maybe the “static site, plus ServiceWorker for offline caching” will become a more common pattern due to Opera Mini and other low-JS/no-JS browsers. I have a slide in my talk suggesting maybe we’ll go back to an “MVC server delivering HTML, CSS, and a little jQuery” approach, but the MVC server will live in a ServiceWorker instead.

        So far, though, a lot of the PWAs I’ve seen have not been architected this way. Diversity is good, though, so I’m interested to see if this pattern takes off. :)

    • The Opera Mini stats could just be down to what cluster serves that specific request, and the different locations of the clusters.


  9. One thing we also need to start questioning more is responsibility. A lot of the criticism that JavaScript gets is based on people relying blindly on frameworks or not caring about error cases at all. If we start seeing JavaScript as a given in our stack, then we also can point out that it is the responsibility of the developer to cover the error cases. Paranoid JavaScript is a good idea. In the same vein, it is time to reconsider the stability of HTML, too. Monica Dinculescu’s CSS day talk about how broken input really is was eye-opening there for me. It is great that an input type range becomes a plain text input in older browsers, but when it is unusable in modern browsers, we have to consider JS enhancement. A lot has happened since we defined the holy trinity of separation of concerns and it is time to add newer concerns.


    • Yep, I agree. Unfortunately I think that the finger-waving tone of many PE advocates is causing the good bits of wisdom they have to share – about accessibility, about robustness, about performance – to fall on deaf ears, because JS fans are just tired of getting shamed and blasted for seemingly everything they do.

      I would love to see more calm and reasoned debate between the two camps, which is something I was hoping to help promote in this post.


      • Posted by john on October 13, 2016 at 11:55 AM

        “because JS fans are just tired of getting shamed and blasted for seemingly everything they do.”

        Well, on the other hand, there are some JS fans, especially bleeding-edge front-end frameworks practitioners, who actually write articles in which skeptics are described as human crap. No more, no less.

        And that hurts big time, because it’s all about ego—and they won’t admit that maybe they were disrespectful to human beings while dozens are actually telling them they were.

        Unfortunately, I’ve come across a lot of those articles lately. And it’s not a reaction to being shamed, it’s just people thinking they are superior to others because they use framework X, Y or Z and they consider themselves pioneers.

        So when you write “Any community that’s eager to tear each other down at the slightest whiff of unorthodoxy is not a community that I want to be a part of.”, I can relate to that.

      • Yep, I think there is far too much bashing and snarkiness on both sides. Many folks on the JS side of things could stand to tone down the rhetoric as well.

  10. Well done. This is a great example of employing the engineering process, although instead of producing a solution, you have produced a design tenet that says “JS is not inherently evil for PWAs.” Wonderful. I appreciated the follow-up commentary making very clear that it’s still playing with fire if your design & deployment strategies aren’t carefully crafted.


  11. I’m with Eric Meyer on this, though less on PE and Accessibility, than on actual web breakage.

    The key concept of the web is that every web object has an accessible URI (yes, you have to be logged in or otherwise privileged to see some of them).

    When you break the web (by, e.g., requiring a programmatic rendering of core content, or eliminating or polluting unique URIs), you break any number of applications you did not write that might search your content and make it globally knowable, linkable, searchable. You break people’s ability to create web browser extensions and web services like IFTTT that allow people to make the web their own. You likely break your OWN ability to reflect on your site as a content API, requiring major refactoring to make your content do something wildly different without migration.

    You break core content for bots that don’t have built-in JavaScript interpreters (apparently Google has kludged around this and now runs JavaScript as it crawls the web). You may break core content for assistive technologies, or maybe for just the next new thing like an Echo, or the next-big-thing-you-don’t-know-about (like the iPhone that nobody requiring enterprise web apps to only support IE6 saw coming).

    Srsly, the way to build a web site or app so it works anywhere is already well-known, and positions like yours – as kindly and professionally stated as they are – fog the headspace of new designers and developers who haven’t come up with the web the hard way.

    I am moved a bit by your concerns about offline content – though I was moved by the same concerns at the dawn of the web as well, and history has proven to me that that concern, while valid, is a passing one: the web, as it’s become the essential component of a society with any productivity at all, will drive better connectivity in bandwidth backwaters over the next 10-20 years, aided by the developing world’s lack of legacy wired infrastructure – one tower to upgrade, vs. hundreds, thousands of front yards to dig up.

    The moral equivalent of the “many big corporations are creating apps that require Javascript to work, so we should all feel OK about making more” is “Pandora, ITMS and the radio are full of horsesh*t auto-tuned pop so we should all feel OK about recording more”. No. Why not, in either case, make something fine?

    Developing well for the web is about doing the job well, not just for your own app. Forget for a moment the requirements doc, or if you must not, imagine that requirement #0 is “this really shouldn’t suck for everyone trying to use it in ways I haven’t foreseen”.


  12. Posted by Brian Jameson on October 14, 2016 at 12:47 AM

    I have tremendous respect for the folks you list who expressed anger at your talk. Their work helped me get a foothold in a career on the web. I don’t want to oversimplify this discussion, but I think it’s significant that the reaction was angry and bitter. In my albeit limited experience, anger and bitterness often arise when one feels threatened, when something feels personal. And so I ask, truly with respect, am I the only one who notices that all of these folks expressing outrage are known for their work in design, HTML and CSS (not JavaScript)? I can’t help but wonder if some of the outrage here may be related to frustration with working in a field that is increasingly abstracting away the need for their expertise. I certainly believe that the conversation about PE is an important conversation to be had, but I believe it is one that should be had dispassionately, and with empathy, and without all the fingerpointing and moral grandstanding.


  13. […] couple of good long(-ish) think pieces: Nolan Lawson’s Progressive enhancement isn’t dead, but it smells funny and a nuanced, compelling reply by Laurie Voss: Web development has two flavors of graceful […]


  14. Posted by Steve Wright on October 14, 2016 at 9:10 AM

    I have no issue with websites that only work with JS. For me, PE is about establishing a minimum baseline for the user community that will visit that website and then providing a better experience where possible.

    One thing to watch out for when establishing the minimum baseline is how it will look to the search engines when they visit. (I tend to work with webapps on an intranet so that’s not a concern for me)


  15. Posted by G Mariani on October 14, 2016 at 7:17 PM

    Hahhahahahahahahah

    The day has come… JavaScript has taken the mantle passed down from Flash. Remember the loading screens? The “this site requires the flash plugin” messages? All those Flash haters traded in one poison for another without even realizing it. I kept telling people JavaScript would become the next Flash and no one believed me… This article hits the nail on the head.

    Old Flash Developer (Sorry, couldn’t help it)


  16. Posted by alex on October 14, 2016 at 8:30 PM

    Wow, this is a very long article that is a great way to waste your time. Amazing!


  17. Posted by ichon on October 14, 2016 at 8:34 PM

    One can always build a house without an inner frame.
    And tell others that’s the right way for the future.


  18. Posted by Jonne on October 17, 2016 at 10:49 PM

    I’m not sure if you know, but in developing countries people use Opera Mini a lot, since it saves their bandwidth, which is expensive and limited. JS of course “works” on Mini, but not in any decent way. So if your site uses a lot of JS, it probably doesn’t work so well on Opera Mini.


  19. Great post. A great developer will understand the need for different architectures and their use cases. Having tunnel vision when it comes to developing for the web, in my opinion, is what ruins the desire to learn more and contribute.


  20. Fascinating read, the comments too :) I like the idea of not shaming people for their approach. Building HTML first is great for accessibility and structural simplicity, kind of like how Google’s Accelerated Mobile Pages impose strictness. But some people will come at it from the messy other side. Flash was great for artists, but when ActionScript 3 arrived it was better from a logical programmer point of view, though it made quick artistic iteration more difficult. I still feel like there wasn’t enough of a compromise there and that it contributed somewhat to Flash’s demise. Anyway, your end point is what was most important to me – appreciate the diversity, and that websites are made many different ways with many different goals.


  21. I can see why this is a touchy subject to talk about. Consider that you are building a web application and now you will need external investments. Try to pitch anything to an investor without having a target audience and you will most likely get cut off.

    Point is that, like any other software, you must have a target audience first, so then you can start talking about progressive enhancement; and it may well be that your target audience won’t need an application that works for everyone.

    I have been working in the field for a while and, just thinking about mobile applications, would you have both a responsive application and a WAP application to handle all mobile devices? I don’t know; maybe what we need is to start really talking about it and making progressive enhancements so we can get onto common ground and really have an inclusive web.


  22. ““JavaScript – yes or no?” is less about access and ubiquity, and more about performance and robustness”

    Yes, exactly. PE was considered the best approach because it provided the best accessibility and indirectly helped with performance. Things have changed though; accessibility is not hindered by JS anymore (if people are mindful of it), and so long as performance is cared for, where is the problem?

    So I suppose the conclusion is that we as a community should question best practices more often.

    Great article, well articulated, thanks for writing it.


  23. Posted by Florin Onciu on October 25, 2016 at 3:49 AM

    I think those apps that were not working without JavaScript were some bad examples. The minimum I would expect is a warning message telling me: “Hey, you need JavaScript enabled to use this app!” It should be easy to achieve, and at least the app says something! That makes a huge difference, even though I cannot use the app!

    I also wanted to thank you for your work on PWAs, offline storage, etc.

  24. Posted by Jared A Smith on October 27, 2016 at 7:04 AM

    There’s an elephant in the room that no one’s talking about… namely that HTML and CSS suck. Hard. And JavaScript, despite its flaws, is the most malleable, user-extensible, and frankly best part of building for the web. JavaScript, once free of Microsoft’s stalling, has gotten sooo much better, and CSS… hasn’t. Ditto for HTML. So much so that it’s now totally feasible to write the JavaScript for a large SPA by hand, while trying to do the same without a CSS preprocessor is all but doomed. The hardcore PE crowd may be right, but not acknowledging just how much it sucks to make things work without JS, and not making any moves to ameliorate the pain, seems willfully ignorant.

  25. I’m a very amateur website designer… I use PHP/CSS when I code. I did not like relying on client-side scripting like JavaScript, but it was the only way to get some things done. Having said that, my personal experience viewing heavily loaded JavaScript pages is nauseating. Some can lock up your browser while they script all of their desires… and I personally hate that.

    Oh, and another thing, your quote: “In 2016, it’s okay to build a website that doesn’t work without JavaScript.” I get your drift, but that is the same thing as saying “In 2016, it’s okay to build a website that does work with JavaScript.” The English language can be tricky. Maybe you meant to say “In 2016, it’s okay to build a website that does work without JavaScript.”

  26. Posted by Tim Etler on October 31, 2016 at 10:50 AM

    “We as a community need to realize that the question of ‘JavaScript – yes or no?’ is less about access and ubiquity, and more about performance and robustness”

    Yes! I 100% agree with that, but I disagree with the conclusion, because achieving that is best done by having a base that can run on HTML and CSS only. It adds restrictions that decrease the surface area in which you can make mistakes. Google’s own robustness and performance initiative, AMP, adds even further restrictions on what you can do, to provide even further protection. Of course AMP does use JavaScript, which shows us that the true reason to do HTML and CSS first isn’t accessibility; it’s to restrict ourselves to a subset of functionality that is safer to run, without the risk of pushing a bug that either hangs or kills the site.

    We can’t all create a framework with the robustness that HTML plus CSS has, or that AMP has, because we don’t have the resources to put our sites and builds through anywhere near as much testing as those go through. Despite my best intentions, I have pushed obscure JS bugs to production before; instead of taking down the site, it just meant breaking some interactive functionality, while navigating through href links in the HTML still worked.
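    A minimal sketch of that fail-safe pattern (the IDs and URLs here are made up for illustration): the link navigates on its own, and JavaScript only enhances it:

        <!-- Plain HTML baseline: the link navigates even if no script ever runs -->
        <a id="next" href="/articles/2">Next article</a>
        <div id="content"></div>

        <script>
          document.getElementById('next').addEventListener('click', function (e) {
            e.preventDefault();
            // Enhancement: load the next page in place. If this script fails
            // to download or throws, the href above still works as a link.
            fetch('/articles/2.html')
              .then(function (res) { return res.text(); })
              .then(function (html) {
                document.getElementById('content').innerHTML = html;
                history.pushState(null, '', '/articles/2');
              });
          });
        </script>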

    And that’s just robustness. As far as performance goes, I do not see how a site that isn’t server-rendered can perform as well as one that is. Before you can display anything at all, even with HTTP/2 server push to prevent a round trip, the browser still needs to download the JavaScript, parse the JavaScript (which has to include an HTML renderer on top of everything else), then download and parse the API data, then render the HTML, then push it to the DOM, and only then go through the actual layout and rendering. Meanwhile, the HTML-and-CSS-first site does layout and rendering as soon as it can, and only sets up the JS in the background while the user is reading content. I just don’t see how a JS-first site can compete.

    As for all the other great JS-only stuff you were talking about doing: do that too! There’s nothing preventing you from doing it after you present some content to distract the user, while you do extra work under the hood to make the next page load even faster. When it comes to performance, latency is a bigger concern than bandwidth, and you need to make a data request anyway, so why not throw in some HTML with it while you’re at it? Cached HTML will always be more performant than rendering it on the client, and the server has access to things that the client cannot cache.
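    One low-cost way to do that background work, for instance, is a standard prefetch hint for the likely next page (the URL here is made up for illustration):

        <!-- Hints the browser to fetch the next page while the user reads -->
        <link rel="prefetch" href="/articles/2.html">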

  27. Posted by Gopnik McBlyat on December 14, 2016 at 10:16 PM

    Social Justice Warriors everywhere, what a time to be alive. Sigh.

  28. […] and responses from great names like Jeremy Keith and Christian Heilmann, all summarized in a post by Nolan. I’m starting to think “crazy” is an understatement. 2016 was […]

  29. Great article.

    In the last paragraph, however, you apologize to those who commented in wholly inappropriate ways. You should never, ever apologize for calling people out on their unacceptable behavior.

  30. Posted by Alex Bell on January 19, 2017 at 11:55 AM

    Great points here, Nolan, as usual.

    But there’s one viewpoint that’s always missing from the PE discussion, and that is the question of privacy and security in relation to JavaScript. Building a website that doesn’t work without JavaScript, and then justifying this architectural decision by pointing to low usage numbers from TOR users, is circular reasoning of the lamest sort. Of course TOR users aren’t frequenting your loading-graphic-only site! I am always really surprised to see extremely intelligent people recite such a weak piece of logic. Of course the users aren’t there. Your app excluded them from the ecosystem.

    Unfortunately, the TOR browser is really the only way to use the web today with any fundamental security. It’s true that even TOR won’t protect you from state-level actors who are determined to hack your system. But it’s a high enough fence to keep out most corporate snoops and garden-variety hackers. This is partly the result of a corporate standards process that is heavily influenced by an advertising company determined to siphon and permanently store as much information as possible about every user, in every possible way, in the name of customization and performance. It is also the result of a fundamental asymmetry between the advertising industry and the shoestring budgets of privacy-oriented organizations and garage-band open source projects. Why are we only hearing about the good sides of these dynamics, and none of the bad?

    As a business decision, building stateful JS-driven apps is typically an obvious move. It’s a lot of developers’ bread and butter, myself included. It’s equally clear that most consumers aren’t appropriately paranoid about the authoritarian surveillance state that is forming around them. I’d like to think the email-hacking debacles of the last election cycle may be harbingers of a new mood of caution. There is a clear, urgent need for education on the subject.

    In general, I agree that it would be nice to see more aspirational thinking and pattern design around TOR-friendly applications, and less moralizing on this topic. But allowing the currently infinitesimal fraction of privacy-concerned users to experience minimally functioning web apps (especially news and information-retrieval apps) with JS disabled is arguably really important to any future prospect of a free society. On that level, it is very much a moral, yes moral, decision.

  31. I agree that not every web app should work without JS. There are tons of examples of such web apps, from B2B-style apps to entertainment apps like web-based games and VR.

    Still, in the case of news, search, articles, and the like, a web app should work without JS.

  32. It’s not the web that needs to be dumbed down. It’s the special devices, readers, viewers… that need to be bumped up.

  33. Without JS, interactive websites are almost impossible these days.

  34. […] that doesn’t work without JavaScript.” I already covered that surreal experience in this post, but essentially a photo of me made the rounds on social media, and without even knowing who was […]

  35. […] read this book well after my own breakup with Twitter, but a lot of what I wrote in those three blog posts is echoed in this book. […]

  36. […] worse, progressive enhancement doesn’t seem to be doing very well these days. In retrospect, this blog post was more about Twitter shaming (see above for my thoughts on Twitter), but I think the larger point […]

  37. […] Lawson has written this article to explain the cases in more detail. To […]
