Archive for the ‘Open source’ Category

What it feels like to be an open-source maintainer

Outside your door stands a line of a few hundred people. They are patiently waiting for you to answer their questions, complaints, pull requests, and feature requests.

You want to help all of them, but for now you’re putting it off. Maybe you had a hard day at work, or you’re tired, or you’re just trying to enjoy a weekend with your family and friends.

But if you go to github.com/notifications, there’s a constant reminder of how many people are waiting:

screenshot showing 403 unread GitHub notifications

When you manage to find some spare time, you open the door to the first person. They’re well-meaning enough; they tried to use your project but ran into some confusion over the API. They’ve pasted their code into a GitHub comment, but they forgot or didn’t know how to format it, so their code is a big unreadable mess.

Helpfully, you edit their comment to add a code block, so that it’s nicely formatted. But it’s still a lot of code to read.
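
(For reference, all it takes in GitHub-flavored Markdown is a pair of triple-backtick fences, plus an optional language tag for syntax highlighting:)

```js
var buf = new Buffer('hello world')
```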

Also, their description of the problem is a bit hard to understand. Maybe this person doesn’t speak English as a first language, or maybe they have a disability that makes it difficult for them to communicate via writing. You’re not sure. Either way, you struggle to understand the paragraphs of text they’ve posted.

Wearily, you glance at the hundreds of other folks waiting in line behind them. You could spend a half-hour trying to understand this person’s code, or you could just skim through it and offer some links to tutorials and documentation, on the off-chance that it will help solve their problem. You also cheerfully suggest that they try Stack Overflow or the Slack channel instead.

The next person in line has a frown on their face. They spew out complaints about how your project wasted 2 hours of their life because a certain API didn’t work as advertised. Their vitriol gives you a bad feeling in the pit of your stomach.

You don’t waste a lot of time on this person. You simply say, “This is an open-source project, and it’s maintained by volunteers. If there’s a bug in the code, please submit a reproducible test case or a PR.”

The next person has run into a very common error, with an easy workaround. You know you’ve seen this error a few times before, but can’t quite recall where the solution was posted. Stack Overflow? The wiki? The mailing list? After a few minutes of Googling, you paste a link and close the issue.

The next person is a regular contributor. You recognize their name from various community forums and sibling projects. They’ve run into a very esoteric issue and have proposed a pull request to fix it. Unfortunately the issue is complicated, and so their PR contains many paragraphs of prose explaining it.

Again, your eye darts to the hundreds of people still waiting in line. You know that this person put a lot of work into their solution, and it’s probably a reasonable one. The Travis tests passed, and so you’re tempted to just say "LGTM" and merge the pull request.

However, you’ve been burned by that before. In the past, you’ve merged a PR without fully evaluating it, and in the end it led to new headaches because of problems you failed to foresee. Maybe the tests passed, but the performance degraded by a factor of ten. Or maybe it introduced a memory leak. Or maybe the PR made the project too confusing for new users, because it excessively complicated the API surface.

If you merge this PR now, you might wind up with even more issues tomorrow, because you broke someone else’s workflow by solving this one person’s (very edge-casey) problem. So you put it on the back burner. You’ll get to it later when you have more time.

The next person in line has found a new bug, but you know that it’s actually a bug in a sibling project. They’re saying that this is blocking them from shipping their app. You know it’s a big problem, but it’s one of many, and so you don’t have time to fix it right now.

You respond that this looks like a genuine issue, but it’s more appropriate to open in another repo. So you close their issue and copy it into the other repo, then add a comment suggesting where they might look in the code to start fixing it. You doubt they’ll actually do so, though. Very few do.

The next person just says “What’s the status on this?” You’re not sure what they’re talking about, so you look at the context. They’ve commented on a lengthy GitHub thread about a long-standing bug in the project. Many people disagreed on the proper solution to the problem, so it generated a lot of discussion.

There are more than 20 comments on this particular issue, and it would take you a long time to read through them all to jog your memory. So you merely respond, “Sorry, this issue has been open for a while, but nobody has tackled it yet. We’re still trying to understand the scope of the problem; a pull request could be a good start!”

The next person is just a Greenkeeper bot. These are easy. Except that this particular repo has fairly flaky tests, and they failed for what looks like a spurious reason. So you restart the tests, and try to remind yourself to look into the flakiness later, after Travis has had a chance to run.

The next person has opened a pull request, but it’s on a repo that’s fairly active, and so another maintainer is already providing feedback. You glance through the thread; you trust the other maintainer to handle this one. So you mark it as read and move on.

The next person has run into what appears to be a bug, and it’s not one you’ve ever seen before. But unfortunately they’ve provided scant details on how the problem actually occurred. What browser was it? What version of Node? What version of the project? What code did they use to reproduce it? You ask them for clarification and close the tab.

The constant stream

After a while, you’ve gone through ten or twenty people like this. There are still more than a hundred waiting in line. But by now you’re feeling exhausted; each person has brought either a complaint, a question, or a request for enhancement.

In a sense, these GitHub notifications are a constant stream of negativity about your projects. Nobody opens an issue or a pull request when they’re satisfied with your work. They only do so when they’ve found something lacking. Even if you only spend a little bit of time reading through these notifications, it can be mentally and emotionally exhausting.

Your partner has observed that you’re always grumpy after going through this ritual. Maybe you found yourself snapping at her for no reason, just because you were put in a sour mood. “If doing open source makes you so angry, why do you even do it?” she asks. You don’t have a good answer.

You could take a break; in fact you’ve probably earned it by now. In the past, you’ve even taken vacations of a week or two from GitHub, just for your own mental health. But you know that that’s exactly how you ended up in this situation, with hundreds of people patiently waiting.

If you had just kept on top of your GitHub notifications, you’d probably have a more manageable 20-30 to deal with per day. Instead you let them pile up, so now there are hundreds. You feel guilty.

In the past, for one reason or another, you’ve really let issues pile up. You might have seen an issue that was left unanswered for months. Usually, when you go back to address such an issue, the person who opened it never responds. Or they respond by saying, “I fixed my problem by abandoning your project and using another one instead.” That makes you feel bad, but you understand their frustration.

You’ve learned from experience that the most pragmatic response to these stale issues is often just to say, “I’m closing old issues. Please reopen if this is still a problem for you or if you can provide more details.” Usually there is no response. Sometimes there is, but it’s just an angry comment about how they were made to wait for so long.

So nowadays you try to be more diligent about staying on top of your notifications. Hundreds of people waiting in line are far too many. You long for that line to get down to a hundred, or a dozen, or even the mythical inbox zero. So you press on.

Attracting new contributors

After triaging enough issues like this, even if you eventually reach inbox zero, you might still end up with a large backlog of open bugs and pull requests. Labeling can help – for instance, you might label issues as “needs reproducing” or “has test case” or “good first patch.” The “good first patch” ones can be especially helpful, since they often attract new contributors.

However, you’ve noticed that often the only issues that attract new contributors are the very easy ones, the ones where the effort to document the issue and explain how to fix it outweighs the effort to just fix it yourself. You create some of these issues, because you know it’s a worthy goal to get new people involved in open source, and you feel good when the pull request author tells you, “This was my first contribution to an open-source project.”

But you know it’s very unlikely that they’ll ever come back; usually these folks don’t become regular contributors or maintainers. You wonder if you did something wrong, if there’s something more you could have done to onboard new maintainers and help lighten your load.

One of your projects is nearly self-sustaining. You haven’t touched it in years, but there’s a group of maintainers who respond to every issue and PR, so you don’t have to. You’re enormously grateful to these maintainers. But you have no idea what you did to get so many contributors to this project, whereas other projects wind up as your responsibility and yours alone.

Looking ahead

You’re reluctant to create new projects, because you know it will just increase your maintenance burden. In fact, there’s a perverse effect where, the more successful you are, the more you get “punished” with GitHub notifications.

You can still recall the thrill of creation, the joy of writing a new project from scratch and solving a previously-unsolved problem. But now you weigh that joy against the knowledge that any new project will necessarily steal time from old projects. You wonder if it’s time to formally deprecate one of your old repos, or to mark it as unmaintained.

You wonder how much longer this can go on before you just burn out. You’ve considered doing open source as your day job, but from talking with folks who actually do open source for a living, you know that this usually means permission to work on a specific open-source project as your day job. That doesn’t help you much, because you have dozens of projects across various domains, which are all vying for your time.

What you want most of all is to have more projects that maintain themselves. You try to follow all the best practices: you have a CONTRIBUTING.md and a code of conduct, you enthusiastically hand out owner privileges to anyone who submits a high-quality PR. It’s exhausting to do this for every project, though, so you’re not as diligent as you wish you could be.

You feel guilty about that too, since you know open source is frequently regarded as an exclusive club for privileged white males, like yourself. So you worry that you’re not doing enough to help fix that problem.

More than anything, you feel the guilt: the guilt of knowing that you could have helped someone solve their problem, but instead you let their issue rot for months before closing it. Or the guilt of knowing that someone opened their first pull request ever on your repo, but you didn’t have time to respond to it, and because of that, you may have permanently discouraged them from open source. You feel guilty for the work that you do, for the work that you didn’t do, and for not recruiting more people to share in your unhappy guilt-ridden experience.

Putting it all together

Everything I’ve said above is based on my own experiences. I can’t claim to speak for all people who do open-source software, but this is what it feels like to me.

I’ve been doing open source for a long time (roughly seven years), and I’ve been reluctant to complain about any of this, because I worried it could be perceived as melodramatic whining from someone who ought to know better. After all, isn’t this situation one of my own making? I could walk away from GitHub whenever I want; I have no obligations to anyone.

Also, shouldn’t I be grateful? My work on open source has helped give me my standing in the community. I get invitations to speak at conferences. I have thousands of Twitter followers who listen to what I have to say and hold my opinion in high esteem. Arguably, I got my job at Microsoft because of my experience in open source. Who am I to complain?

And yet, I’ve known many others in positions similar to mine who have burned out. Folks who enthusiastically merged pull requests, fixed issues, and wrote blog posts about their projects, before vanishing without a trace. For some of these people, I don’t even bother opening issues on their repos, because I know they won’t respond. I don’t hold it against them, but I worry that I’ll share their fate.

I’ve already taken plenty of self-care measures. I don’t use the GitHub notification interface anymore – I use email filters, so that I can categorize my notifications based on project (unmaintained ones get ignored) or type of notification (at-mentions and threads I’ve commented on usually deserve higher priority). Since it’s email, this also helps me work offline and manage everything in one place.
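
To give a rough idea, the filters look something like this (Gmail-style; the cc addresses are the ones GitHub attaches to its notification emails, and the repo name here is made up):

from:(notifications@github.com) cc:(mention@noreply.github.com) → label “GitHub/mentions”, read first
from:(notifications@github.com) cc:(author@noreply.github.com) → label “GitHub/my-threads”
from:(notifications@github.com) subject:([nolanlawson/some-old-project]) → skip the inbox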

Often I will get emails out of the blue asking for support on a project that I’ve long stopped maintaining (I still get at least one per month about this one, for instance), and usually I just don’t even respond to those. I also tend to ignore comments on my blog posts, responses to Stack Overflow answers, and mailing list questions. I aggressively un-watch repos that I feel someone else is doing a good enough job of maintaining.

One reason this situation is so frustrating is that, increasingly, I find that issue triage takes time away from the actual maintenance of a project. In other words, I often only have enough time to read through an issue and say, “Sorry, I don’t have time to look at this right now.” Just the mere act of responding can take up a majority of the time I’ve set aside for open source.

Issue templates, Greenkeeper, Travis, travis_retry, Coveralls, Sauce Labs… there are so many technical solutions to the problems of open-source maintenance, and I’m grateful to all of them. There’s no way I’d be able to keep my head on straight if I didn’t have these automated tools. But at some point you run up against issues that are more social problems than technical problems. One human being just doesn’t scale. I’m not even in the top 100 npm maintainers and I’m already feeling the squeeze; I can’t imagine how those hundred people must feel.

I’ve already told my partner that, if and when we decide to start having kids, I will probably quit open source for good. I just can’t see how I’ll be able to make the time for both raising a family and doing open source. I anticipate that ultimately this will be the solution to my problem: the nuclear option. I just hope it comes in a positive form, like starting a new chapter of my life, and not in a negative form, like unceremoniously burning out.

Closing thoughts

If you’ve read this far and are interested in the problems plaguing open-source communities and potential solutions, you may want to look into “Roads and Bridges” by Nadia Eghbal. It’s probably the clearest-eyed and most thorough analysis of the problem.

I’m also open to suggestions, although keep in mind that I’m very reluctant to mix money and labor in my open-source projects (for perhaps silly idealistic reasons). But I have seen it work well for other projects.

Note that despite all the negativity expressed above, I still feel that open source has been a valuable addition to my life, and I don’t regret any of it. But I hope this is a useful window into how it can feel to be a victim of your own success, and to feel weighed down by all the work left undone.

If there’s one thing I’ve learned in open source, it’s this: the more work you do, the more work gets asked of you. There is no solution to that problem that I’m aware of.

Please feel free to comment on Twitter and to read responses by Mikeal Rogers and Jan Lehnardt.

How to fix a bug in an open-source project

So, you’ve found a bug in an open-source project. First off: don’t panic! This is perfectly normal. Software is written by humans, and humans make mistakes.

You might also be thinking to yourself, “Gee, I’d love to fix this bug.” I mean, who wouldn’t want to be the hero who swoops in and fixes a project used by thousands, if not millions, of people? You’d feel the warm glow of knowing you gave back to the open-source community, and plus it’s a nice notch in the belt of your Github résumé. [1]

Bug in the "buffer" module

A typical open-source bug.

For new coders, however, the idea of contributing to an open-source project can be intimidating. One of my friends, who is currently learning JavaScript from online tutorials, told me she found Github’s UI “bewildering.”

There’s also the social aspect: “Will the project maintainer accept my pull request?” “What if they criticize or dismiss me?” These are legitimate concerns for newer coders, who may be self-conscious about the perceived gap between their own knowledge and that of the experts.

There’s no need to be timid, though – open-source folks love to get pull requests from newcomers! Recent efforts like First Timers Only and Your First PR have shown that, given enough of a helping hand, anyone can contribute to an open-source project.

The purpose of this blog post is a bit different. Rather than give detailed instructions for a particular bug in a particular project, I’d like to explain how I go about fixing any bug in any project. To illustrate my problem-solving process, I’ll use the example of a recent bug in the “buffer” module, which I solved in about 1 hour, despite having almost no prior experience with the project.

If I can fix a bug in an unfamiliar project, so can you!

Setting the stage

“buffer” is a JavaScript implementation of the Node.js Buffer API for the browser. It allows you to use Node.js modules that depend on the Buffer object even in the browser, where that API doesn’t exist. (Instead, browsers have other APIs for working with binary data, like Uint8Array and ArrayBuffer.)
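
For example, here’s a minimal sketch of the kind of Node-flavored code that “buffer” makes possible in the browser (assuming a Browserify or Webpack build, which aliases require('buffer') to this module):

// Node-style code, bundled for the browser:
var Buffer = require('buffer').Buffer

var buf = new Buffer('hello', 'utf8')
console.log(buf.toString('hex')) // '68656c6c6f', same as in Node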

It might seem like a weird esoteric library, but in fact “buffer” is downloaded nearly 2 million times per month, since it is a core dependency of both Browserify and Webpack. But despite being such a high-profile project, I noticed an issue that had been languishing, unresolved, for over three months.

Open bug in the "buffer" project

This bug was opened in September, but was still open three months later.

This issue is no minor glitch, either – it’s a showstopper that causes “buffer” to fail entirely on certain browsers (notably Chrome 43+) and in certain build setups (notably with Babel). Many people hopped onto the thread to confirm the issue (“+1”, “same here,” etc.), and a few offered workarounds using project-specific Webpack configuration. But nobody fixed it.

I decided to take a crack at this bug, because at first glance it seemed easier than everybody was making it out to be. Also, I thought it would be instructive to fix a bug in an unfamiliar codebase and document how I went about solving it.

Note: to be fair, I have contributed to “buffer” before, but these were very minor pull requests, and I still consider myself pretty inexperienced with the project. In taking up this bug, I basically had to start from scratch, to jog my memory about how the project works. [2]

Step 1: download the code

Before I could investigate the bug, I needed to be sure I could build the code myself and run the tests. This is an important step, because it confirms that the project’s tests work on my machine, as written, with my current setup.

For instance: am I using the correct version of Node for this project? The correct version of npm? Is there a global dependency (such as a linter or test runner) that I need to install? Does it work on Mac, or only on Linux or Windows? If you try to fix a bug before establishing that the code works on your machine as-is, you can end up going down a rabbit hole before you even start.
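
For a typical npm project, a quick environment sanity check might look like this (the "engines" field, if the project declares one, lists the Node versions it claims to support):

node --version    # compare against the "engines" field in package.json, if any
npm --version
cat .travis.yml   # the CI config often shows which Node versions get tested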

Note: the following steps are JavaScript-specific, but they can also be applied to other languages. It helps to know the conventions for your particular language, such as the typical package manager, linter, and test runner.

First, I cloned the code. You can usually find the Git URL at the top of the project, and HTTPS is recommended unless you have committer rights:

Where to find the Git URL in the Github UI

Once I had the URL, I went to my terminal (iTerm 2 in my case), and I typed:

git clone https://github.com/feross/buffer.git
cd buffer

After this, I had the code on my machine, representing the master branch of the remote Git repository.

Step 2: run the tests

Once I had the code, I needed to figure out how to run the tests. Usually this information is provided in the README.md, but in this case, I searched for the word “test” and didn’t find anything. I also checked for a CONTRIBUTING.md (a document that gives instructions for contributors), but this project didn’t have one.

This is one of those cases where knowledge of the language and ecosystem can be helpful. I happen to know that most JavaScript projects are distributed with npm and can be installed and tested using:

npm install
npm test

Unfortunately, in this case, the above steps completed with an error:

Test output showing Saucelabs failure

However, I noticed the most important part of the error message, which was:

Error: Zuul tried to run tests in saucelabs, 
  however no saucelabs credentials were provided.

From reading this, I understood that this project was using Zuul and Saucelabs to run automated browser tests. Saucelabs is a remote browser-testing service, and I didn’t have my Saucelabs username or password declared as environment variables, so the tests didn’t run.
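
(If you do have Saucelabs credentials, the usual convention is to expose them as environment variables, something like:)

export SAUCE_USERNAME=your-username
export SAUCE_ACCESS_KEY=your-access-key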

Furthermore, I didn’t really want to use Saucelabs in this case. I just wanted to test on my own machine, in my own browser. So I needed to figure out how to do that.

Luckily, in most JavaScript projects, you can just snoop around the package.json file and see what other commands are in the "scripts" section. In this case, I looked in the package.json and saw:

...
"scripts": {
  "test": "standard && node ./bin/test.js",
  "test-browser": "zuul -- test/*.js test/node/*.js",
  "test-browser-local": "zuul --local -- test/*.js test/node/*.js",
...

Aha! test-browser-local. That sounds promising!

So I ran:

npm run test-browser-local

And this time, I got the output:

open the following url in a browser:
http://localhost:62466/__zuul

Opening this URL in Chrome, I saw a nice UI where all the tests passed:

tests passing in Chrome

Yay! Success! At this point, I was confident that I could build and test the code on my own machine.

Note: if you’re thinking to yourself, “Wow, you shouldn’t have to do all that detective work just to test a project,” then you’re right! So I also took the time to open a pull request to document the testing procedure. This is one of the most valuable things you can do as a new contributor to a project, because seasoned contributors may be so accustomed to their workflow that they forget to include basic instructions for newbies.

Step 3: find a failing test

Next, we want to find a failing test, to confirm that we have reproduced the issue. (This is a core part of test-driven development, which is vital for many open-source projects.)

In this case, I had to read through the Github thread to try to figure out the source of the problem. Based on the discussion, it seemed that some combinations of tools (notably Babel and Webpack) might force a JavaScript module to run in strict mode. However, “buffer” is apparently not written in strict mode, so it fails in Chrome 43+ due to that browser’s interpretation of strict mode.
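
If the strict-mode failure mode is unfamiliar, here’s a hypothetical snippet (not from the “buffer” codebase) that shows the difference:

'use strict'

var arr = new Uint8Array(4)

// .length on a typed array is a getter-only accessor inherited from the
// prototype. In sloppy mode, this assignment is silently ignored; in
// strict mode, it throws a TypeError.
arr.length = 10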

Based on this information, I figured I could reproduce the issue by simply adding 'use strict' to the top of the index.js file. (I knew index.js was the source file by checking the "main" field in package.json. But it’s also kinda obvious, because it’s the only top-level JavaScript file in this project.)

So I added 'use strict' to the top of index.js:

adding "use strict" to index.js

And lo and behold, when I refreshed the Zuul test page, I immediately saw the bug that everyone was talking about:

test failure

(Notice that the tests didn’t even manage to run. The page is yellow rather than green, and it says “0 failing, 0 passing.”)

At this point, I had successfully reproduced the bug, using the project’s own test suite. This is an invaluable step, for a few reasons:

  1. Even if you can’t solve the bug, you can open a pull request with just the failing test. This helps bypass a lot of lengthy discussion about how to reproduce the bug.
  2. And if you can solve the bug, then this gives you a way to prove that you’ve fixed it.

Here I just got lucky, because the existing tests were enough to suss out the problem. Sometimes, though, you may need to modify the tests yourself to reproduce the issue. In those cases, my workflow is usually:

  1. Try to break an existing test, e.g. by changing an assertTrue() to assertFalse(), then confirm that you see the test failing, as in the sketch below. (Believe me, this is a good sanity check!)
  2. Next, copy-paste a test that looks similar to the one you want to write. Then modify that new test until it fails.
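
To make step 1 concrete, here’s what deliberately breaking a test might look like in a tape-style suite (a sketch; the test name and module path are invented):

var test = require('tape')
var Buffer = require('../').Buffer

test('instantiates from a string', function (t) {
  var buf = new Buffer('abc')
  t.equal(buf.length, 3) // temporarily change 3 to 4, and confirm the test fails
  t.end()
})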

For this particular bug, though, I already had a failing test. So I could move on to the next step.

Step 4: fix the bug

Unfortunately, the stacktrace didn’t provide a lot of guidance. And even the line numbers were messed up, because Zuul seems to mangle the code when it transpiles it. So that didn’t help.

unhelpful stacktrace

At this point, I was a bit puzzled. But I tried to think logically: it says we’re setting a property called length on an object that only supports a getter, not a setter. So is there any place where we do something like foo.length = bar? I tried searching the code for instances of .length =:

searching the code for instances of "length"
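
(If you prefer the terminal to your editor’s search box, something like this does the same job:)

grep -n '\.length =' index.js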

I found three places where .length is set. The most interesting is the first one, because it’s wrapped in a conditional if/else on Buffer.TYPED_ARRAY_SUPPORT. I had no idea what TYPED_ARRAY_SUPPORT was, but it immediately set off alarm bells that the .length assignment was guarded in one case, but not in the other two.

Trying to figure out what this TYPED_ARRAY_SUPPORT thing is, I came across this line of code:

if (Buffer.TYPED_ARRAY_SUPPORT) {
  Buffer.prototype.__proto__ = Uint8Array.prototype
  Buffer.__proto__ = Uint8Array
}

Aha, so when we have TYPED_ARRAY_SUPPORT (whatever that is), we make the Buffer object (which is what index.js exports) inherit from the built-in Uint8Array. Hmm, and then in some cases, we’re assigning .length on Buffer objects ourselves, even though they would now inherit a getter-only length from Uint8Array. Could it be that Chrome, in strict mode, is blocking us from assigning to that inherited property? A theory for the bug was starting to materialize.

So I did a very simple fix: I took the two cases where length was being set, and I wrapped them both in checks for if (!Buffer.TYPED_ARRAY_SUPPORT):

wrapping the offending code in an if () check

(I also wrapped the .parent assignment, even though I wasn’t sure what it was doing, because it seemed related to the .length assignment.)
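
Paraphrasing from memory rather than quoting the patch verbatim, the shape of the fix was roughly:

if (!Buffer.TYPED_ARRAY_SUPPORT) {
  // Only assign .length (and .parent) manually when Buffer is a plain
  // object. When Buffer inherits from Uint8Array, length comes from a
  // getter-only accessor, and assigning to it throws in strict mode.
  buf.length = length
  buf.parent = undefined
}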

Then I refreshed the browser tests, and suddenly all 84 tests were passing! So apparently, that did the trick.

Step 5: open a pull request

At this point, it’s tempting to pat yourself on the back and declare victory. However, with most browser libraries, you can only be sure that your fix works if you test it against a wide variety of browsers. In this case, the Buffer.TYPED_ARRAY_SUPPORT fix seemed to be working for Chrome, but what about other browsers?

Rather than run the tests myself against every browser installed on my laptop (which would exclude browsers like IE and Android, because I’m on a Mac), a simpler trick is to just open a pull request on the project. Most well-run open-source projects have automated tests that run on every commit, including pull requests. This is an invaluable part of the pull request process, because it gives maintainers the peace of mind to know that the patch doesn’t break any tests.

I could tell that “buffer” was indeed using automated tests against many browsers, because I saw the Saucelabs badge in the project README:

The Saucelabs badge in the "buffer" README.

The Saucelabs badge shows the current state of the tests across various browsers.

Before opening my pull request, though, I also needed to check that my code conformed to the project style, which in this case is the somewhat presumptuously-named Standard. Personally I don’t much like Standard (semicolons for life!), but this isn’t my project, so I follow the age-old advice of “when in Rome, do as the Romans do.”

To check if your code conforms to Standard style, you can simply run:

npm install -g standard && standard

This passed, so I knew I was following the style guide of the project.

Next, to commit the fix, I created a separate Git branch. I called it 79, because that’s the issue number, and it’s a habit of mine to just name branches after the issue:

git checkout -b 79
git add index.js
git commit -m 'Proper strict mode support. Fixes #79'

Then, I forked the project and submitted a pull request using hub, which is a convenient git wrapper with some Github-specific tools:

hub fork
git push nolanlawson 79
hub pull-request

At this point, hub will create the pull request and print out a URL so you can view it in a browser. After waiting a bit for the tests to complete, I saw that I had a green checkmark – the tests passed in all browsers!

Github UI showing the test results

If the tests failed and you’re unsure why, you can always click the “Show all checks” link, then click “Details” for any failed test:

Github UI showing all checks

In this case, I could see that the tests were run by Travis CI, and I could also see the full log output for the tests:

Travis CI UI

Reassuringly, it was also clear that the tests were run in multiple browsers:

Travis CI output

(If your tests are not passing, you can simply keep pushing your commits to that branch, and Travis will re-run the tests for each commit.)

Final step: wait for PR approval or feedback

At this point, I felt confident that my pull request was a good candidate for merging. Even though it made a behavioral change to the code (using strict mode instead of non-strict mode), I predicted it would be uncontroversial, because most JavaScript projects prefer strict mode anyway. Also, strict code will run in non-strict environments, but the reverse is not true. So there is no practical reason to keep the code non-strict.

You might wonder: couldn’t I just remove the 'use strict' now, since we have the fix anyway? Yes, I could, but then it’s always possible that the project will regress in the future, because if anybody adds code that breaks under strict mode, the automated tests won’t catch it. It’s important to guard against future regressions, because as Dale Harvey put it (paraphrasing):

Any untested code will eventually break.
– Harvey’s Law of Software Entropy

This might sound a bit hyperbolic or paranoid, but I’ve seen this truism play out time and time again in my career as an open-source maintainer. If you don’t test something, then eventually someone will commit code (maybe in a seemingly unrelated part of the codebase), and the untested code will start silently failing.

In any case, the maintainer seemed to agree with my choice, because he merged the code and published a new version within a few days. And this fix actually managed to get the “buffer” project down to 0 open PRs and 0 open issues!

PR was merged!

Yay, the PR got merged!

Conclusion

I hope that this blog post demonstrates that it’s neither impossible, nor even particularly difficult, to fix a bug in an open-source project. Open-source projects tend to play by different rules than other code (they’re more heavily tested, they discuss bugs out in the open, etc.), but if you’re comfortable committing code to personal or closed-source projects, then there’s no practical reason you couldn’t apply those same skills to the open-source world.

In the case of “buffer,” I found this issue to be sadly emblematic of a lot of open-source bugs. The module itself is heavily relied upon, and the bug is a showstopper, but it remained unresolved for months despite many people running into it. Lots of folks were offering temporary workarounds, but nobody made an attempt to fix the underlying problem.

I suppose the expectation was that the maintainer, Feross Aboukhadijeh, would fix the issue himself. But as you can see from his Github page, he maintains a lot of projects. Personally, I’m a fairly small-time open-source author, but it’s not uncommon for me to get 100 new Github notifications in a day. Feross undoubtedly gets even more than that.

So if you think Feross is going to drop everything to work on one particular bug, you should consider that he probably has dozens of other high-priority tasks on his to-do list. Perhaps he doesn’t even use Webpack or Babel (he seems to prefer Browserify), which means he himself might not get a lot of value out of this bugfix. It’s also arguable that this is actually a bug in Webpack or Babel, since it’s incorrectly trying to run non-strict code in a strict environment.

My takeaway is this: if a bug is impacting you personally, and you’re the one who ran into it, then you are in the best position to fix it. Asking a project maintainer, who has limited time and possibly limited interest in your issue, to both reproduce the bug based on your description and then fix it, is probably the least efficient way to get the issue resolved.

So the next time you run into a bug in an open-source project and are tempted to open a new issue (or to say “+1” or “me too”), please consider trying to fix it instead. Even if you’re only able to reproduce the issue, it’s an enormous help to the project maintainers, who are constantly working to triage new issues and decipher longwinded bug reports.

Open-source software is not manna from heaven. Nor is it a self-renewing resource that magically appears in your codebase, with zero responsibility on your part. Nope: open-source software is the tireless product of human labor and ingenuity, and it needs help from the community to survive.

Projects with an asymmetry of contribution – i.e., many more people benefiting than contributing – will eventually sputter out and even die due to maintainer burnout. You can help prevent this situation, while also giving yourself the sense of personal satisfaction from helping your fellow coders, by simply opening up a pull request.

a job well done

Comments like this make my day.

So if there’s an open-source project you benefit from, consider giving the maintainers the gift of a pull request. Go check out their Github page, open up the list of unresolved issues, and see if anything strikes your fancy. Even if you don’t succeed in fixing a bug, maybe you’ll find ways to improve the documentation, or to make the testing process easier.

There are lots of ways to contribute to open-source software – both big and small. But you won’t know just how easy it can be until you take that first step, and try.

Footnotes

1. For an alternate take on the “Github is your résumé” article, see Why GitHub is not your CV. Although it’s debatable whether your Github profile should influence hiring decisions (or if that only favors folks with the luxury of spare time and energy), I think it’s undeniable that your Github presence does influence your job-hunting prospects. So I still find “beef up your Github profile” to be valuable advice for new programmers. (Anyway, other open-source folks will look at your profile to learn more about you!)

2. Eagle-eyed readers may notice that yes, I am actually a collaborator on the “buffer” project. However, Feross gave me collaborator rights after only two pull requests (because he’s awesome), and I still felt that I was very unfamiliar with the codebase when I tried to tackle this bug. (For instance, I couldn’t remember how to run the tests, or maybe they had changed since I last submitted code.) So I still think this is a good example of fixing a bug in an unfamiliar repo.