AI tribalism

“Heartbreaking: The Worst Person You Know Just Made a Great Point” – ClickHole

“When the facts change, I change my mind. What do you do, sir?” – John Maynard Keynes, paraphrased

2025 was a weird year for me. If you had asked me exactly a year ago, I would have said I thought LLMs were amusing toys but inappropriate for real software development. I couldn’t fathom why people would want a hyperactive five-year-old to grab their keyboard every few seconds and barf some gobbledygook into their IDE that could barely compile.

Today, I would say that about 90% of my code is authored by Claude Code. The rest of the time, I’m mostly touching up its work or doing routine tasks that it’s slow at, like refactoring or renaming.

By now the battle lines have been drawn, and these arguments are getting pretty tiresome. Every day there’s a new thinkpiece on Hacker News about how either LLMs are the greatest thing ever or they’re going to destroy the world. I don’t write blog posts unless I think I have something new to contribute though, so here goes.

What I’ve noticed about a lot of these debates, especially if you spend a lot of time on Mastodon, Bluesky, or Lobsters, is that they’ve devolved into politics. And since politics long ago devolved into tribalism, that means they’ve become tribalism.

I remember when LLMs first exploded onto the scene a few years ago, and the same crypto bros who were previously hawking monkey JPEGs suddenly started singing the praises of AI. Meanwhile upper management got wind of it, and the message I got (even if they tried to use euphemisms, bless their hearts) was “you are expendable now, learn these tools so I can replace you.” In other words, the people whose opinions on programming I respected least were the ones eagerly jumping from the monkey JPEGs to these newfangled LLMs. So you can forgive me for being a touch cynical and skeptical at the start.

Around the same time, the smartest engineers I knew were maybe dabbling with LLMs, but overall unimpressed with the hallucinations, the bugs, and just the overall lousiness of these tools. I remember looking at the slow, buggy output of an IDE autocomplete and thinking, “I can type faster than this. And make fewer mistakes.”

Something changed in 2025, though. I’m not an expert on this stuff, so I have no idea if it was Opus 4.5 or reinforcement learning or just that Claude Code was so cleverly designed, but some threshold was reached. And I noticed that, more and more, it just didn’t make sense for me to type stuff out by hand (and I’m a very fast typist!) when I could just write a markdown spec, work with Claude in plan mode to refine it, and have it do the busywork.

Of course the bugs are still there. It still makes dumb mistakes. But then I open a PR, and Cursor Bugbot works its magic, and it finds bugs that I never would have thought of (even if I had written the code myself). Then I plug those findings back into Claude, it fixes them, and I start to wonder what the hell my job as a programmer even is anymore.

So that’s why, when I read about Steve Yegge’s Gas Town or Geoffrey Huntley’s Ralph loops (or this great overview by Anil Dash), I no longer brush it off as pure speculation or fantasy. I’ve seen what these tools can do, I’ve seen what happens when you lash together some very stupid barnyard animals and they’ve suddenly built the Pyramids, so I’m not surprised when smart engineers say that the solution to bad AI is to just add more AI. This is already working for me today (in my own little baby systems I’ve built), and I don’t have to imagine some sci-fi future to see what’s coming next.

The models don’t have to get better, the costs don’t have to come down (heck, they could even double and it’d still be worth it), and we don’t need another breakthrough. The breakthrough is already here; it just needs a bit more tinkering and it will become a giant lurching Frankenstein-meets-Akira-meets-the-Death-Star monster, cranking out working code from all 28 of its sub-agent tentacles.

I can already hear the cries of protest from other engineers who (like me) are clutching onto their hard-won knowledge. “What about security?” I’ve had agents find security vulnerabilities. “What about performance?” I’ve had agents write benchmarks, run them, and iterate on solutions. “What about accessibility?” Yeah they’re dumb at that – but if you say the magic word “accessibility,” and give them a browser to check their work, then suddenly they’re doing a better job than the median web dev (which isn’t saying much, but hey, it’s an improvement).

And honestly, even if all that doesn’t work, then you could probably just add more agents with different models to fact-check the other models. Inefficient? Certainly. Harming the planet? Maybe. But if it’s cheaper than a developer’s salary, and if it’s “good enough,” then the last half-century of software development suggests it’s bound to happen, regardless of which pearls you clutch.

I frankly didn’t want to end up in this future, and I’m hardly dancing on the grave of the old world. But I see a lot of my fellow developers burying their heads in the sand, refusing to acknowledge the truth in front of their eyes, and it breaks my heart because a lot of us are scared, confused, or uncertain, and not enough of us are talking honestly about it. Maybe it’s because the initial tribal battle lines have clouded everybody’s judgment, or maybe it’s because we inhabit different worlds where the technology is either better or worse (I still don’t think LLMs are great at UI for example), but there’s just a lot of patently unhelpful discourse out there, and I’m tired of it.

To me, the truth is this: between the hucksters selling you a ready-built solution, the doomsayers crying the end of software development, and the holdouts insisting that the entire house of cards is on the verge of collapsing – nobody knows anything. That’s the hardest truth to acknowledge, and maybe it’s why so many of us are scared or lashing out.

My advice (and I’ve already said I know nothing) would just be to experiment, tinker, and try to remain curious. It certainly feels to me like software development is unrecognizable from where it was 3 years ago, so I have no idea where it will be 3 years from now. It’s gonna be a bumpy ride for everyone, so just try to have some empathy for your fellow passengers in the other tribe.

11 responses to this post.

  1. Tim McCormack

    I think you can be tribalistic without being blind to what’s happening. I’m an AI hater but I acknowledge that some people are able to squeeze utility out of these things (along with a lot of negative externalities, and often harm to themselves).

    I can see it. It’s just that I refuse, on principle, to use this stuff until there’s no other choice.

    Anyway, I don’t see much point in experimenting with and learning how to use LLMs. The landscape is changing so fast that anything I learn about it would be outdated soon. If I ever need to use them, I should be able to pick up what I need to know within a week. (After all, they’re getting smarter and easier to use all the time, right?)

    • Nolan Lawson

      This is a reasonable take, yeah. I actually had a similar thought process a ~year ago, which mostly paid off in that I picked up Claude Code pretty easily despite having essentially never touched LLMs before. Even the stuff you “need to know” eventually gets baked into the tool itself (e.g. plan mode). The whole argument of “you better learn this now or you’re ‘not gonna make it'” strikes me as false – if the tools improve, then their usability should improve, so there’s hardly a rush. But I personally am glad I did not wait until 2026!

  2. James (January 25, 2026 at 4:43 AM)

    I don’t care what anyone says, I love working with an LLM on projects.

    When I started programming it was just me, a book, and the machine. Very isolated and lonely but I was forced to just figure things out.

    Then the internet came around and I was exposed to other programmers. It was the most wonderful thing and also the worst possible thing. You could ask questions about your hardest problems and either get an answer or get called an idiot and told to RTFM. This was liberating but also a terrible experience while learning.

    Slowly the internet developed: Stack Overflow and GitHub took off, and now if you had the time you could lurk in the background and still learn from everyone else.

    Now we have the LLM, and it’s like the best of everything above. You can ask it stupid questions and it will not judge. Now when I am working I don’t feel like I am alone. I have a little AI buddy to help figure out problems, design systems, and write code.

    I do not blindly trust the AI! First of all, it doesn’t know what it doesn’t know, so I have to be very careful of that flaw! Secondly, it does not have the complete context of the real world, the business problems we are trying to solve, or the history of things we have tried in the past.

    I am very excited and feel like I have a superpower. All the little toy projects I thought I would never have time for are now being built. Absolutely in love with tech again!

    • Nolan Lawson

      Thanks for your story, James! I think your take is actually incredibly common. For example there has been a wave of non-coders discovering Claude Code and suddenly feeling like they have superpowers (e.g. this article or this article).

      I’ve had a hard time feeling the rush of exuberance myself mostly because I found a lot of AI boosters to be smug and boorish (this is part of the tribalism thing I was getting at – I’m not immune to it!), and I’m terrified about how it’s changing the field of software development. I still feel that way, but I’m also trying to get over my fears and be open-minded.

  3. Mike K. (January 25, 2026 at 2:16 PM)

    I don’t think you’re _wrong_ about the tribalism, really — that’s definitely a factor — but I do think there’s more to it than that.

    Like, for my part: I’ve been enthusiastic/terrified about the capabilities of LLMs for software dev for years now, both because I could see immediately how they were useful and because I could see the trendlines. This wasn’t a tribal belief (and I really have no truck with the LinkedIn weirdo crowd), it was one based on my actual experiences with the tools.

    And so I tend to think that the people who are skeptical are, at least in large part, skeptical because of their own experiences. I’ve just struggled with how that could be true, especially these days. Where I’ve landed is that I think this has to do with how people approach coding, and whether they’re top-down or bottom-up coders.

    (Is this true? I have no idea. I’d really love to read a take from someone who is LLM-skeptical but trying to grapple honestly with the reality that lots of experienced and knowledgeable developers are not.)

    • Nolan Lawson

      This is interesting! I think you’re onto something with the bottom-up vs top-down framing. I have a colleague who didn’t really “get” Claude Code, but loved in-IDE AI autocomplete (which is the exact opposite of my experience). I wonder if this different coding style was a factor.

      Long-term though, I suspect it won’t matter much, at least for small bugfixes. It’s already at the point where you can go from a well-written bug report to a fix in one shot.

  4. Bruno A

    It’s amazing to me that in all these discussions, no one brings up the fact these tools were built on theft of other people’s work.

    • Nolan Lawson

      It’s built off my work too. 🙂 But most of my code is Apache-licensed, and people have been using it for who-knows-what for a long time, so honestly this never bothered me much.

      • Bruno A

        It’s great you’re OK with people taking most of your work that is adequately licensed (what about the rest?) without permission, but to assume everyone else should be too is a bit disingenuous.

  5. Pingback: […] AI tribalism Nolan Lawson: if you say the magic word “accessibility,” … then suddenly they’re doing a better job than the median web dev. […]

  6. edsu (January 30, 2026 at 10:21 PM)

    I think that being a professional software developer entails thinking clearly and deliberately about the social and political impacts of technical decisions. Dismissing these issues as “tribalism” is a bit dismissive of what are real and important issues that need to be discussed.
