By far the most popular npm package I’ve ever written is blob-util, which is ~10 years old and still gets 5+ million weekly downloads.
It’s a small collection of utilities for working with Blobs in JavaScript. I wrote it because I found that PouchDB users were endlessly confused about how to work with Blobs and how to convert them to strings, ArrayBuffers, etc.
Given that some 80% of developers are now using AI in their regular work, blob-util is almost certainly the kind of thing that most developers would just happily have an LLM generate for them. Sure, you could use blob-util, but then you’d be taking on an extra dependency, with unknown performance, maintenance, and supply-chain risks.
And sure enough, Claude will happily spit out whatever Blob utilities you need when prompted:
> Write me a utility function in TypeScript to convert a Blob to an ArrayBuffer. It should return a Promise.
function blobToArrayBuffer(blob: Blob): Promise<ArrayBuffer> {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => {
      if (reader.result instanceof ArrayBuffer) {
        resolve(reader.result);
      } else {
        reject(new Error('Failed to read Blob as ArrayBuffer'));
      }
    };
    reader.onerror = () => {
      reject(reader.error);
    };
    reader.readAsArrayBuffer(blob);
  });
}
Claude’s version is pretty close to the blob-util version (unsurprising, since it was probably trained on it!). It is much more verbose, though, unnecessarily checking whether readAsArrayBuffer actually produced an ArrayBuffer (although this check does make TypeScript happy). To be fair, it also improves on my implementation by rejecting directly with the reader’s error rather than relying on the more awkward onerror event.
Note: for anyone wondering, yes Claude did suggest the new Blob.arrayBuffer() method, but it also generated the above for “older environments.”
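For reference, the modern platform API collapses the whole utility to a one-liner. A minimal sketch (the wrapper name is mine; Blob.prototype.arrayBuffer() is available in evergreen browsers and in Node.js 18+, where Blob is a global):

```typescript
// Modern replacement for the FileReader-based helper:
// Blob.prototype.arrayBuffer() already returns a Promise<ArrayBuffer>,
// so no FileReader ceremony is needed in current runtimes.
function blobToArrayBufferModern(blob: Blob): Promise<ArrayBuffer> {
  return blob.arrayBuffer();
}

// Usage sketch:
blobToArrayBufferModern(new Blob(['hello'])).then((buf) => {
  console.log(buf.byteLength); // 5 (one byte per ASCII character)
});
```

In other words, the "older environments" caveat is doing all the work: on any current runtime, the utility function is barely worth naming.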
I suppose some people would see this as progress: fewer dependencies, more robust code (even if it’s a bit more verbose), quicker turnaround time than the old “search npm, find a package, read the docs, install it” approach.
I don’t have any excessive pride in this library, and I don’t particularly care if the download numbers go up or down. But I do think something is lost with the AI approach. When I wrote blob-util, I took a teacher’s mentality: the README has a cutesy and whimsical tutorial featuring Kirby, in all his blobby glory. (I had a thing for putting Nintendo characters in all my stuff at the time.)
The goal wasn’t just to give you a utility to solve your problem (although it does that) – the goal was also to teach people how to use JavaScript effectively, so that you’d have an understanding of how to solve other problems in the future.
I don’t know which direction we’re going in with AI (well, ~80% of us; to the remaining holdouts, I salute you and wish you godspeed!), but I do think it’s a future where we prize instant answers over teaching and understanding. There’s less reason to use something like blob-util, which means there’s less reason to write it in the first place, and therefore less reason to educate people about the problem space.
Even now there’s a movement toward putting documentation in an llms.txt file, so you can just point an agent at it and save your brain cells the effort of deciphering English prose. (Is this even documentation anymore? What is documentation?)
Conclusion
I still believe in open source, and I’m still doing it (in fits and starts). But one thing has become clear to me: the era of small, low-value libraries like blob-util is over. They were already on their way out thanks to Node.js and the browser taking on more and more of their functionality (see the glob support in node:fs, structuredClone, etc.), but LLMs are the final nail in the coffin.
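Deep cloning is a good example of this absorption: what once justified a small dependency is now one built-in call. A minimal sketch (structuredClone is a global in modern browsers and in Node.js 17+):

```typescript
// structuredClone deep-copies plain data, Dates, Maps, Sets, and more,
// replacing both the old clone libraries and the lossy
// JSON.parse(JSON.stringify(...)) trick.
const original = { date: new Date(0), nested: { list: [1, 2, 3] } };
const copy = structuredClone(original);

console.log(copy.nested === original.nested); // false: a genuinely deep copy
console.log(copy.date instanceof Date);       // true: unlike a JSON round-trip
```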
This does mean that there’s less opportunity to use these libraries as a springboard for user education (Underscore.js also had this philosophy), but maybe that’s okay. If there’s no need to find a library to, say, group the items in an array, then maybe learning about the mechanics of such libraries is unnecessary. Many software developers will argue that asking a candidate to reverse a binary tree is pointless, since it never comes up in the day-to-day job, so maybe the same can be said for utility libraries.
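To make the array-grouping example concrete, here is roughly what such a utility amounts to. A minimal sketch (the helper name is mine; newer runtimes even ship Object.groupBy natively, making even this much code unnecessary):

```typescript
// A tiny groupBy: the kind of utility that used to justify an
// Underscore/Lodash import, now a few lines of plain TypeScript.
function groupBy<T, K extends string>(
  items: T[],
  keyFn: (item: T) => K
): Record<K, T[]> {
  const groups = {} as Record<K, T[]>;
  for (const item of items) {
    const key = keyFn(item);
    if (!groups[key]) {
      groups[key] = [];
    }
    groups[key].push(item);
  }
  return groups;
}

// Usage sketch:
const byParity = groupBy([1, 2, 3, 4], (n) => (n % 2 === 0 ? 'even' : 'odd'));
console.log(byParity); // { odd: [ 1, 3 ], even: [ 2, 4 ] }
```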
I’m still trying to figure out what kinds of open source are worth writing in this new era (hint: ones that an LLM can’t just spit out on command), and where education is the most lacking. My current thinking is that the most value is in bigger projects, more inventive projects, or in more niche topics not covered in an LLM’s training data. For example, I look back on my work on fuite and various memory-leak-hunting blog posts, and I’m pretty satisfied that an LLM couldn’t reproduce this, because it requires novel research and creative techniques. (Although who knows: maybe someday an agent will be able to just bang its head against Chrome heap snapshots until it finds the leak. I’ll believe it when I see it.)
There’s been a lot of hand-wringing lately about where open source fits into a world of LLMs, but I still see people pushing the boundaries. For example, a lot of naysayers think there’s no point in writing a new JavaScript framework, since LLMs are so heavily trained on React, but then there goes the indefatigable Dominic Gannaway writing Ripple.js, yet another JavaScript framework (and with some new ideas, to boot!). This is the kind of thing I like to see: humans laughing in the face of the machine, going on with their human thing.
So if there’s a conclusion to this meandering blog post (excuse my squishy human brain; I didn’t use an LLM to write this), it’s just that: yes, LLMs have made some kinds of open source obsolete, but there’s still plenty of open source left to write. I’m excited to see what kinds of novel and unexpected things you all come up with.

Posted by Ralph Haygood on November 16, 2025 at 1:21 PM
“but then you’d be taking on an extra dependency, with unknown performance, maintenance, and supply-chain risks”: So instead of a small JavaScript library that’s been used on millions of websites for around ten years, you’ll take on a gigantic, continually changing statistical model that has a known vulnerability of wide scope (prompt injection)? That’s preposterous. I doubt many people are klarna koding because they’re worried about performance, supply-chain risks, or, least of all, maintenance. I suspect they’re doing it because they’re afflicted with Shiny-Object Syndrome (to which programmers as a group are quite prone), they’re appallingly lazy*, or they have bosses threatening to fire them if they don’t.
I call it klarna koding rather than vibe coding because like Klarna, it’s buy now, pay later, in that if you’re doing a lot of it, technical debt is probably piling up in your codebase. You (Lawson) have provided a minor case in point: as you note, “Claude’s version is … much more verbose”, which makes it slightly harder, not easier, to understand. Unless you’re working at a fly-by-night start-up that plans to get bought or go bust before anyone has to worry about the godawful mess that is your hacked-together codebase, you should care about how easy it is for humans to understand your code, because sooner or later, somebody – like maybe you a year from now – may well need to do so, no matter what the marketing fodder from Anthropic or Anysphere may claim.
Oh well. All this means plenty of lucrative work for people willing to clean up the messes klarna koding makes:
https://www.404media.co/the-software-engineers-paid-to-fix-vibe-coded-messes/
See also:
https://pivot-to-ai.com/2025/09/09/if-ai-coding-is-so-good-where-are-the-little-apps/
*I’m unapologetically lazy myself, but not so lazy that I’m willing to sign my name to shitty work.
Posted by Tim McCormack on November 16, 2025 at 7:34 PM
Holdout here. :-) Note that before LLMs, there was already the same choice: Write a utility function, or pull it in from a library. The utility functions often aren’t *that* hard to write, but just off the top of my head there were a bunch of reasons to avoid writing them that didn’t involve the effort required:
There are cons as well, especially for the _really_ small or trivial libraries, but you see my point — there was value in utility libraries then, and for that reason there’s value in them now.
Posted by Jim Shortz on November 17, 2025 at 9:05 AM
First off, thank you for contributing a valuable piece of open source to the community. However, I have to respectfully disagree on the teaching angle.
I have reluctantly begun to use AI LLMs, mostly for hobby projects involving tech stacks I don’t use much (such as Node.js). When I ask one to write something for me, I don’t just accept what it gives me. I read through it and make sure I understand what every line is doing. If there is a language construct or library function I’m not familiar with, I can ask follow-up questions to learn what it is.
Even though it’s just a machine (and sometimes gives me wrong answers), I find the “conversational” style to be a great asset in helping me learn the new thing.
In the end, I usually don’t use what it generated, or I use it as a starting point and modify the heck out of it.
In contrast, I have taken dependencies on hundreds of open source libraries and never look inside of them unless I’m having a problem.
Posted by Manuel Jasso on November 17, 2025 at 12:28 PM
Nolan, first of all: I miss you.
I’ve been writing code for about 40 years now, and even though this AI wave is impressive, I’ve seen enough impressive tech waves to say that in the end, only time will tell where this one will land. Everything we say today is just speculation.
I have a concern with this AI wave that you bring up: learning and understanding vs producing code.
My concern is that young programmers will grow up depending too much on something that is inherently not trustworthy, because I think trust is a human-to-human phenomenon. And yes, this is my perspective, it is not right or wrong, it is what I believe. Nobody can prove me right or wrong, only time will tell.
Posted by Nolan Lawson on November 18, 2025 at 1:13 PM
Miss you too, Manuel! And yes I have exactly this concern. I’m trying to be a bit optimistic about it, though – maybe we’re just adding on to the list of “low-level details you don’t need to know.”
Posted by Drop #732 (2025-11-17): Reliable Sources – hrbrmstr's Daily Drop on November 17, 2025 at 12:32 PM
[…] The referenced article questions the future of tiny npm packages like blob‑util in an AI‑driven development world, referencing Nolan Lawson’s post on small‑open‑source projects (https://nolanlawson.com/2025/11/16/the-fate-of-small-open-source/) […]
Posted by Tim on November 17, 2025 at 3:47 PM
As this is the internet, I’m going to take one incidental remark you said and run off on a wild tangent.
“80% of developers are now using AI in their regular work”
80% of StackOverflow users. I would say that’s not terribly surprising, since people go there to find answers to small, self-contained questions, which is exactly what LLMs excel at. It’s also not at all representative of all developers.
Every non-web development field I’ve worked in (aerospace, embedded systems, databases outside of the major open-source relational and document-based, gaming, industrial, etc) is represented poorly or not at all on StackOverflow. For example, the “playstation4” tag has only 4 questions, and there is still no “playstation5” tag. Surely nobody believes that the entire global community of PlayStation developers hasn’t had a single question in 5 years!
There are also many languages/libraries which already had a great online community and never felt the need to move there. For example, the average Lisp question on StackOverflow is quite basic (there are a ton of questions about “hello world” and setting up an editor), while the serious Lisp programmers still meet elsewhere. Clearly, people still trying to figure out basic syntax in a new language for fun are going to massively over-represent LLM usage.
I think the more interesting aspect of your observation is: are the people writing the infotainment system in your car (for example) going to use an LLM to inline all their libraries, rather than take on actual dependencies? If so, how are they going to debug it later, when they don’t understand the code, and can’t upgrade it with a dependency manager? What if they write the code for your ECU this way?
We already thought it was bad that companies were mooching off volunteer maintainers (XKCD: 2347). It’s only going to get worse, because now they’re going to mooch off our source code without feeling the need to obey license terms, or even file a bug report when they discover a problem.
Posted by Open Source Now for Rich Peeps : Stephen E. Arnold @ Beyond Search on December 3, 2025 at 2:07 AM
[…] a time, open source was the realm of startups in a niche market. Nolan Lawson wrote about “The Fate Of ‘Small’ Open Source” on his blog Read The Tea Leaves. He explains that more developers are using AI in their work […]