Archive for the ‘Mastodon’ Category

How to deal with “discourse”

It was chaotic human weather. There’d be a nice morning and then suddenly a storm would roll in.

– Jaron Lanier, describing computer message boards in the 1970s (source, p. 42)

Are you tired of the “discourse” and drama in Mastodon and the fediverse? When it happens, do you wish it would just go away?

Here’s one simple trick to stop discourse dead in its tracks:

Don’t talk about it.

Now, this may sound too glib and oversimplified, so to put it in other words:

When discourse is happening, just don’t talk about it.

That’s it. That’s the way you solve discourse. It’s really as easy as that.

Discourse is a reflection of the innate human desire to not only look at a car crash, but to slow down and gawk at it, causing traffic to grind to a halt so that everyone else says, “Well, I may as well see what the fuss is about.” The more you talk about it, the more you feed it.

So just don’t. Don’t write hot takes on it, don’t make jokes about it, don’t comment on how you’re tired of it, don’t try to calm everybody down, don’t write a big thread about how discourse is ruining the fediverse and won’t it please stop. Just don’t. Pretend like it’s not even there.

There’s a scene in a Simpsons Halloween episode where a bunch of billboard ads have come to life and are running amok, destroying Springfield. Eventually, though, Lisa realizes that the only power ads have is the power we give them, and if you “just don’t look” then they’ll keel over and die.

Simpsons animation of billboard ads wrecking buildings with subtitle "Just don't look"

The “discourse” is exactly the same. Every time you talk about it, even just to mention it offhand or make a joke about it, it encourages more people to say to themselves, “Ooh, a fight! I gotta check this out.” Then they scroll back in their timeline to try to figure out the context, and the cycle begins anew. It’s like a disease that spreads by people complaining about it.

This is why whenever discourse is happening, I just talk about something else. I might also block or mute anyone who is talking about it, because I find the endless drama boring.

Like a car crash, it’s never really interesting. Finding out about it is never going to change your life. It’s always the same petty squabbling you’ve seen a hundred times online.

Once the storm has passed, though, it’s safe to talk about it. You may even write a long-winded blog post about it. But while it’s happening, remember: “just don’t look, just don’t look.”

Mastodon and the challenges of abuse in a federated system

This post will probably only make sense to those deeply involved in Mastodon and the fediverse. So if that’s not your thing, or you’re not interested in issues of social media and safety online, you may want to skip this post.

I keep thinking of the Wil Wheaton fiasco as a kind of security incident that the Mastodon community needs to analyze and understand to prevent it from happening again.

Similar to this thread by CJ Silverio, I’m not thinking about this in terms of whether Wil Wheaton or his detractors were right or wrong. Rather, I’m thinking about how this incident demonstrates that a large-scale harassment attack by motivated actors is not only possible in the fediverse, but is arguably easier than in a centralized system like Twitter or Facebook, where automated tools can help moderators to catch dogpiling as it happens.

As someone who both administrates and moderates Mastodon instances, and who believes in Mastodon’s mission to make social media a more pleasant and human-centric place, this post is my attempt to define the attack vector and propose strategies to prevent it in the future.

Harassment as an attack vector

First off, it’s worth pointing out that there is probably a higher moderator-to-user ratio in the fediverse than on centralized social media like Facebook or Twitter.

According to a Motherboard report, Facebook has about 7,500 moderators for 2 billion users. This works out to roughly 1 moderator per 260,000 users.

Compared to that, a small Mastodon instance like toot.cafe has about 450 active users and one moderator, which is better than Facebook’s ratio by a factor of 500. Similarly, a large instance like mastodon.cloud (where Wil Wheaton had his account) apparently has one moderator and about 5,000 active users, which is still better than Facebook by a factor of 50. But it wasn’t enough to protect Wil Wheaton from mobbing.

The attack vector looks like this: a group of motivated harassers chooses a target somewhere in the fediverse. Every time that person posts, they immediately respond, maybe with something clever like “fuck you” or “log off.” So from the target’s point of view, every time they post something, even something innocuous like a painting or a photo of their dog, they immediately get a dozen comments saying “fuck you” or “go away” or “you’re not welcome here” or whatever. This makes it essentially impossible for them to use the social media platform.

The second part of the attack is that, when the target posts something, harassers from across the fediverse click the “report” button and send a report to their local moderators as well as the moderator of the target’s instance. This overwhelms both the local moderators and (especially) the remote moderator. In mastodon.cloud’s case, it appears the moderator got 60 reports overnight, which was so much trouble that they decided to evict Wil Wheaton from the instance rather than deal with the deluge.

Screenshot of a list of reports in the Mastodon moderation UI

For anyone who has actually done Mastodon moderation, this is totally understandable. The interface is good, but for something like 60 reports, even if your goal is to dismiss them on sight, it’s a lot of tedious pointing-and-clicking. There are currently no batch-operation tools in the moderator UI, and the API is incomplete, so it’s not yet possible to write third-party tools on top of it.

Comparisons to spam

These moderation difficulties also apply to spam, which, as Sarah Jeong points out, is a close cousin to harassment if not the same thing.

During a recent spambot episode in the fediverse, I personally spent hours reporting hundreds of accounts and then suspending them. Many admins like myself closed registrations as a temporary measure to prevent new spambot accounts, until email domain blocking was added to Mastodon and we were able to block the spambots in one fell swoop. (The spambots used various domains in their email addresses, but they all shared the same email MX domain.)

This was a good solution, but obviously it’s not ideal. If another spambot wave arrives, admins will have to coordinate yet again to block the email domain, and there’s no guarantee that the next attacker will be unsophisticated enough to use the same email domain for each account.
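To make the MX check concrete, here’s a minimal sketch of the kind of helper an admin might use to spot a shared mail host across suspicious signup domains. (Node.js is my own choice here for illustration – Mastodon itself is written in Ruby.)

```js
// Minimal sketch: check whether suspicious signup email domains
// all resolve to the same MX (mail exchange) host.
const { resolveMx } = require('dns').promises;

async function sharedMxHosts(emailDomains) {
  const counts = new Map();
  for (const domain of emailDomains) {
    const records = await resolveMx(domain);
    // Count each distinct MX hostname once per signup domain.
    for (const mx of new Set(records.map(record => record.exchange))) {
      counts.set(mx, (counts.get(mx) || 0) + 1);
    }
  }
  // Sort so the most widely shared mail host floats to the top.
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

// Usage: pass in the email domains from a batch of suspicious signups.
sharedMxHosts(['spam1.example', 'spam2.example'])
  .then(hosts => console.log(hosts));
```

If one MX host dominates the list, that’s the domain to block.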

The moderator’s view

Back to harassment campaigns: the point is that moderators are often in the disadvantageous position of being a small number of humans, with all the standard human frailties, trying to use a moderation UI that leaves a lot to be desired.

As a moderator, I might get an email notifying me of a new report while I’m on vacation, on my phone, using a 3G connection somewhere in the countryside, and I might try to resolve the report using a tiny screen with my fumbly human fingers. Or I might get the report when I’m asleep, so I can’t even resolve it for another 8 hours.

Even in the best of conditions, resolving a report is hard. There may be a lot of context behind the report. For instance, if the harassing comment is “lol you got bofa’d in the ligma” then suddenly there’s a lot of context that the moderator has to unpack. (And in case you’re wondering, Urban Dictionary is almost useless for this kind of stuff, because the whole point of the slang and in-jokes is to ensure that the uninitiated aren’t in on the joke, so the top-voted Urban Dictionary definitions usually contain a lot of garbage.)

Screenshot of the Mastodon moderation UI

So now, as a moderator, I might be looking through a thread history and trying to figure out whether something actually constitutes harassment or not, who the reported account is, who reported it, which instance they’re on, etc.

If I choose to suspend, I have to be careful, because a remote suspension is not the same thing as a local suspension: a remote suspension merely hides the remote content, whereas a local suspension permanently deletes the account and all of their toots. So account moderation can feel like a high-wire act, where if you click the wrong thing, you can completely ruin someone’s Mastodon experience with no recourse. (Note, though, that in Mastodon 2.5.0 a confirmation dialog was added for local suspensions, which makes it less scary.)

As a moderator working on a volunteer basis, it can also be hard to muster the willpower to respond to a report in a timely manner. Whenever I see a new report for my instance, I groan and think to myself, “Oh great, what horrible thing do I have to look at now.” Hate speech, vulgar images, potentially illegal content – this is all stuff I’d rather not deal with, especially if I’m on my phone, away from my computer, trying to enjoy my free time. If I’m at work, I may even have to switch away from my work computer and use a separate device and Internet connection, since otherwise I could get flagged by my work’s IT admin for downloading illegal or pornographic content.

In short: moderation is a stressful and thankless job, and those who do it deserve our respect and admiration.

Now take all these factors into account, and imagine that a coordinated group of harassers has dumped 60 (or more!) reports into the moderator’s lap all at once. This is such a stressful and laborious task that it’s not surprising that the admin might decide to suspend the target’s account rather than deal with the coordinated attack. Even if the moderator does decide to deal with it, a sustained harassment campaign could mean that managing the onslaught becomes their full-time job.

A harassment campaign is also something like a human DDoS attack: it can flare up and reach its peak in a matter of hours or minutes, depending on how exactly the mob gets whipped up. This means that a moderator who doesn’t handle it on-the-spot may miss the entire thing. So again: a moderator going to sleep, turning off notifications, or just living their life is a liability, at least from the point of view of the harassment target.

Potential solutions

Now let’s start talking about solutions. First off, let’s see what the best-in-class defense is, given how Mastodon currently works.

Someone who wants to avoid a harassment campaign has a few different options:

  1. Use a private (locked) account
  2. Run their own single-user instance
  3. Move to an instance that uses instance whitelisting rather than blacklisting

Let’s go through each of these in turn.

Using a private (locked) account

Using a locked account makes your toots “followers-only” by default and requires approval before someone can follow you or view those toots. This prevents a large-scale harassment attack, since nobody but your approved followers can interact with you. However, it’s sort of a nuclear option, and from the perspective of a celebrity like Wil Wheaton trying to reach his audience, it may not be considered feasible.

Account locking can also be turned on and off at any time. Unlike Twitter, though, this doesn’t affect the visibility of past posts, so an attacker could still send harassing replies to any of your previous toots, even if your account is currently locked. This means that if you’re under siege from a sudden harassment campaign that flares up and dies down over the course of a few hours, keeping your account locked during that time is not an effective strategy.

Running your own single-user instance

A harassment target could move to an instance where they are the admin, the moderator, and the only user. This gives them wide latitude to apply instance blocks across the entire instance, but those same instance blocks are already available at an individual level, so it doesn’t change much. On the other hand, it allows them to deal with reports about themselves by simply ignoring them, so it does solve the “report deluge” problem.

However, it doesn’t solve the problem of getting an onslaught of harassing replies from different accounts across the fediverse. Each harassing account will still require a block or an instance block, which are tools that are already available even if you don’t own your own instance.

Running your own instance also requires a certain level of technical savvy and a dedication to learning the ins and outs of Mastodon (or another fediverse technology like Pleroma), which the harassment target may consider too much effort for too little payoff.

Moving to a whitelisting instance

By default, a Mastodon instance federates with all other instances unless the admin explicitly applies a “domain block.” Some Mastodon instances, though, such as awoo.space, have forked the Mastodon codebase to allow for whitelisting rather than blacklisting.

This means that awoo.space doesn’t federate with other instances by default. Instead, awoo.space admins have to explicitly choose the instances that they federate with. This can limit the attack surface, since awoo.space isn’t exposed to every non-blacklisted instance in the fediverse; instead, it’s exposed only to a subset of instances that have already been vetted and considered safe to federate with.
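To illustrate the difference – this is just a sketch of the two policies, not Mastodon’s or awoo.space’s actual code – blacklisting is default-allow, while whitelisting is default-deny:

```js
// Illustrative only: the two federation policies side by side.
const blockedDomains = new Set(['malicious.example']); // blacklist
const allowedDomains = new Set(['friendly.example']);  // whitelist

// Default Mastodon behavior: federate unless explicitly blocked.
function canFederateBlacklist(domain) {
  return !blockedDomains.has(domain);
}

// awoo.space-style behavior: federate only if explicitly allowed.
function canFederateWhitelist(domain) {
  return allowedDomains.has(domain);
}
```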

In the face of a sudden coordinated attack, though, even a cautious instance like awoo.space probably federates with enough instances that a group of motivated actors could set up new accounts on the whitelisted instances and attack the target, potentially overwhelming the target instance’s moderators as well as the moderators of the connected instances. So whitelisting reduces the surface area but doesn’t prevent the attack.

Now, the target could both run their own single-user instance and enable whitelisting. If they were very cautious about which instances to federate with, this could prevent the bulk of the attack, but would require a lot of time investment and have similar problems to a locked account in terms of limiting the reach to their audience.

Conclusion

I don’t have any good answers yet as to how to prevent another dogpiling incident like the one that targeted Wil Wheaton. But I do have some ideas.

First off, the Mastodon project needs better tools for moderation. The current moderation UI is good but a bit clunky, and the API needs to be opened up so that third-party tools can be written on top of it. For instance, a tool could automatically find the email domains for reported spambots and block them. Or, another tool could read the contents of a reported toot, check for certain blacklisted curse words, and immediately delete the toot or silence/suspend the account.
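To sketch what such a tool might look like – keeping in mind that the admin endpoint and report shape below are invented for illustration, since the current API doesn’t expose reports – imagine something like:

```js
// Hypothetical sketch of a third-party report-triage tool.
// The /api/v1/admin/reports endpoint does not actually exist yet;
// it stands in for whatever an opened-up moderation API would offer.
const BLACKLISTED_WORDS = ['someslur', 'anotherslur']; // placeholder list

async function triageReports(instance, adminToken) {
  const res = await fetch(`https://${instance}/api/v1/admin/reports`, {
    headers: { Authorization: `Bearer ${adminToken}` }
  });
  const reports = await res.json();
  for (const report of reports) {
    const text = report.statuses
      .map(status => status.content)
      .join(' ')
      .toLowerCase();
    if (BLACKLISTED_WORDS.some(word => text.includes(word))) {
      // A real tool might auto-silence the account here, or at least
      // bump the report to the top of the moderator's queue.
      console.log(`Report ${report.id}: auto-flagged for action`);
    }
  }
}
```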

Second off, Mastodon admins need to take the problem of moderation more seriously. Maybe having a team of moderators living in multiple time zones should just be considered the “cost of doing business” when running an instance. Like security features, it’s not a cost that pays visible dividends every single day, but in the event of a sudden coordinated attack it could make the difference between a good experience and a horrible experience.

Perhaps more instances should consider having paid moderators. mastodon.social already pays its moderation team via the main Mastodon Patreon page. Another possible model is for an independent moderator to be employed by multiple instances and get paid through their own Patreon page.

However, I think the Mastodon community also needs to acknowledge the weaknesses of the federated system in handling spam and harassment compared to a centralized system. As Sarah Jamie Lewis says in “On Emergent Centralization”:

Email is a perfect example of an ostensibly decentralized, distributed system that, in defending itself from abuse and spam, became a highly centralized system revolving around a few major oligarchical organizations. The majority of email sent […] today is likely to find itself being processed by the servers of one of these organizations.

Mastodon could eventually move in a similar direction, if the problems aren’t anticipated and headed off at the pass. The fediverse is still relatively peaceful, but right now that’s mostly a function of its size. The fediverse is just not as interesting a target for attackers, because there aren’t that many people using it.

However, if the fediverse gets much bigger, it could become inundated by dedicated harassment, disinformation, or spambot campaigns (as Twitter and Facebook already are), and it could shift towards centralization as a defense mechanism. For instance, a centralized service might be set up to check toots for illegal content, or to verify accounts, or something similar.

To prevent this, Mastodon needs to recognize its inherent structural weaknesses and find solutions to mitigate them. If it doesn’t, then enough people might be harassed or spammed off of the platform that Mastodon will lose its credibility as a kinder, gentler social network. At that point, it would be abandoned by its responsible users, leaving only the spammers and harassers behind.

Thanks to Eugen Rochko for feedback on a draft of this blog post.

Introducing Pinafore for Mastodon

Today I’m happy to announce a project I’ve been quietly working on for some time: Pinafore. Pinafore is an alternative web client for Mastodon, which looks like this:

Screenshot of Pinafore home page

Here are some of its features:

  • Speed. Pinafore is built on Svelte, meaning it’s faster and lighter-weight[1] than most web apps.
  • Simplicity. Single-column layout, easy-to-read text, and large images.
  • Multi-account support. Log in to multiple instances and set a custom theme for each one.
  • Works offline. Recently-viewed timelines are fully browsable offline.
  • PWA. Pinafore is a Progressive Web App, so you can add it to your phone’s home screen and it will work like a native app.
  • Private. All communication is private between your browser and your instance. No ads or third-party trackers.

Pinafore is still beta quality, but I’m releasing it now to get early feedback. Of course it’s also open-source, so feel free to browse the source code.

In the rest of this post, I want to share a bit about the motivation behind Pinafore, as well as some technical details about how it works.

If you don’t care about technical details, you can skip to pinafore.social or read the user guide.

The need for speed

I love the Mastodon web app, and I’ve even contributed code to it. It’s a PWA, it’s responsive, and it works well across multiple devices. But eventually, I felt like I could make something interesting by rewriting the frontend from scratch. I had a few main goals.

First off, I wanted the UI to be fast even on low-end laptops or phones. For Pinafore, my litmus test was whether it could work well on a Nexus 5 (released in 2013).

Having set the bar that high, I made some hard choices to squeeze out better performance:

  • For the framework, I chose Sapper and Svelte because they offer state-of-the-art performance, essentially compiling to vanilla JavaScript.
  • To be resilient on slow connections, or Lie-Fi, or when offline altogether, Pinafore stores data locally in IndexedDB.
  • To use less memory, Pinafore keeps only a fraction of its UI state in memory (most is stored in IndexedDB), and it uses a fully virtual list to reduce the number of DOM nodes. (A sketch of the caching idea follows this list.)
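Here’s a rough sketch of the caching idea, using the raw IndexedDB API. (Pinafore’s actual schema is more involved than this; the store name here is made up.)

```js
// Rough sketch: cache timeline statuses in IndexedDB so that
// recently-viewed timelines can be rendered while offline.
function openDatabase() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('timeline-cache', 1);
    req.onupgradeneeded = () => {
      req.result.createObjectStore('statuses', { keyPath: 'id' });
    };
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function cacheStatuses(statuses) {
  const db = await openDatabase();
  const tx = db.transaction('statuses', 'readwrite');
  const store = tx.objectStore('statuses');
  statuses.forEach(status => store.put(status));
  return new Promise((resolve, reject) => {
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}
```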

Another design decision that impacted performance: I chose to support only modern browsers – the latest versions of Chrome, Edge, Firefox, and Safari. Because of this, Pinafore can directly ship modern JavaScript instead of using something like Babel to compile down to a more bloated ES5 format. It also loads a minimum of polyfills, and only for those browsers that need them.
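As a sketch of the polyfill strategy, loading can be gated on feature detection, so browsers that already support a feature never download the extra code. (The module paths here are hypothetical.)

```js
// Sketch: only fetch polyfills in browsers that actually need them.
async function loadPolyfills() {
  const pending = [];
  if (!('IntersectionObserver' in window)) {
    pending.push(import('./polyfills/intersection-observer.js'));
  }
  if (!('requestIdleCallback' in window)) {
    pending.push(import('./polyfills/request-idle-callback.js'));
  }
  await Promise.all(pending);
}

loadPolyfills().then(() => {
  // Safe to boot the app now.
});
```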

Privacy is key

Thanks to the magic of CORS, Pinafore is an almost purely client-side web app[2]. When you use the Pinafore website, your browser communicates directly with your instance’s public API, just like a native app would. The only job of the pinafore.social server is to serve up HTML, CSS, and JavaScript.

This not only makes the implementation simpler: it also guarantees privacy. You don’t have to trust Pinafore to keep your data safe, because it never handles it in the first place! All user data is stored in your browser, and logging out of your instance simply deletes it.
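To make that concrete, here’s roughly what fetching the home timeline looks like. GET /api/v1/timelines/home is a real Mastodon endpoint; the access token comes from the standard OAuth flow and is stored only in your browser.

```js
// The browser talks directly to the user's own instance over CORS;
// pinafore.social never sees the request or the token.
async function fetchHomeTimeline(instanceName, accessToken) {
  const res = await fetch(`https://${instanceName}/api/v1/timelines/home`, {
    headers: { Authorization: `Bearer ${accessToken}` }
  });
  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}`);
  }
  return res.json(); // an array of status objects
}
```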

And even if you don’t trust the Pinafore server, it’s an open-source project, so you can always run it locally. Like Mastodon itself, Pinafore is licensed under the AGPL, so you can host it yourself as long as you make the modified source code available.

Q & A

What’s with the name?

Pinafore is named after my favorite Gilbert and Sullivan play. If you’re unfamiliar, this bit from The Simpsons is a great intro.

Does it work without JavaScript?

Pinafore’s landing page works without JavaScript for SEO reasons, but the app itself requires JavaScript. Although Server-Side Rendering (SSR) is possible, it would require storing user data on Pinafore’s servers, and so I chose to avoid it.

Why are you trying to compete with Mastodon?

Pinafore doesn’t compete with Mastodon; it complements it. Mastodon has a very open API, which is what allows for the flourishing of mobile apps, command-line tools, and even web clients like halcyon.social or Pinafore itself.

One of my goals with Pinafore is to take a bit of the pressure off the Mastodon project to try to be all things to all people. There are lots of demands on Mastodon to make small UI tweaks to suit various users’ preferences, which is a major source of contention, and also partly the motivation for forks like glitch-soc.

But ideally, the way a user engages with their Mastodon instance should be up to that user. As a user, if I want a different background color or for images to be rendered differently, then why should I wait for the Mastodon maintainers or my admin to make that change? I use whatever mobile app I like, so why should the web UI be any different?

As Eugen has said, the web UI is just one application out of many. And I don’t even intend for Pinafore to replace the web UI. There are various features in that UI that I have no plans to implement, such as administration tools, moderation tools, and complex multi-column scenarios. Plus, the web UI is the landing page for each instance, and an opportunity for those instances to be creative and express themselves.

Why didn’t you implement <x feature>?

As with any project, I prioritized some features at the expense of others. Some of these decisions were based on design goals, whereas others were just to get a working beta out the door. I have a list of goals and non-goals in the project README, as well as a roadmap for basic feature parity with the Mastodon web UI.

Why didn’t you use the ActivityPub client-to-server API?

ActivityPub defines both a server-to-server and a client-to-server API, but Mastodon only supports the former; clients talk to Mastodon through its own REST API instead. That REST API is also implemented by other projects like Pleroma, so for now, it’s the most practical choice.

What’s your business model?

None. I wrote Pinafore for fun, out of love for the Mastodon project.

How can I chip in?

I’m a privileged white dude living in a developed country who works for a large tech company. I don’t need any donations. Please donate to Eugen instead so he can continue making Mastodon better!

Thanks!

If you’ve read this far, give Pinafore a try and tell me what you think.

Footnotes

1. Measuring the size of the JavaScript payload after gzip when loading the home feed on a desktop browser, I recorded 333KB for Pinafore, 1.01MB for mastodon.social, and 2.25MB for twitter.com.

2. For the purpose of readable URLs, some minor routing logic is done on the server-side. For instance, account IDs, status IDs, instance names, and hashtags may be sent to the server as part of the URL. But on modern browsers, this will only occur if you explicitly navigate to a page with that URL and the Service Worker hasn’t already been activated, or you hard-refresh. In the vast majority of cases, the Service Worker should handle these URLs, and thus even this light metadata is not sent to the server.
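A sketch of how a Service Worker keeps those navigations local – this is the standard app-shell pattern, not Pinafore’s actual code:

```js
// Inside the service worker: answer navigations with the cached app shell,
// so URLs like /accounts/123 never hit the server once the SW is active.
self.addEventListener('fetch', event => {
  if (event.request.mode === 'navigate') {
    event.respondWith(
      caches.match('/index.html').then(cached => cached || fetch(event.request))
    );
  }
});
```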

Decentralized identity and decentralized social networks

I’d like to tell you a story about Bob. Bob is a fairly ordinary, upstanding citizen. Bob also has a lot of hobbies.

Bob is a good father, so one of his hobbies is to coach his son’s little-league team. Bob is careful to enforce a certain set of rules during the games and at the after-game pizza parties. If one of the kids uses a curse word or bullies another kid, Bob is expected to intervene and apply the appropriate discipline.

Bob is also a Christian. When he’s in church on Sundays, there’s another set of rules, both implicit and explicit, that Bob is expected to abide by. For instance, it’s okay for Bob to confide in other churchgoers about his momentary lapses of faith, or about his struggles to understand certain Bible passages. But if Bob started quoting Richard Dawkins or loudly preaching atheism, he’d probably create a very awkward scene, and may even get kicked out of church.

Bob also happens to be a vegetarian, and he attends a monthly vegetarian meetup. Within this group, there’s an entirely different set of rules and norms at play. Bob is careful not to talk too much about his favorite recipes involving cheese, eggs, or honey, because he knows that there’s a sizable minority of vegans in the group who may be offended. It would also be completely unacceptable to talk about a juicy steak dinner, even though this topic of conversation may be perfectly acceptable within the church or the little-league team.

Bob also has a set of old college buddies that he occasionally meets up with at the local bar. Here, once again, an entirely different set of norms apply. Raunchy jokes and curse words are not only allowed – they’re encouraged. Open debate about religion is tolerated. Bob may even be able to let his vegetarian guard down and talk about a delicious steak dinner he ate in a moment of weakness.

One Bob out of many

Bob intuitively understands these different norms and contexts, and he effortlessly glides from one to the other. It’s as if there’s a switch in his brain that activates as soon as he walks through the church doors or into the bar.

Furthermore, nobody accuses Bob of being dishonest or duplicitous for acting this way. In fact, everything described above is such a fundamental, everyday part of the human experience that it’s downright boring.

Now Bob goes online. Suddenly, every social network is telling him that he should have exactly one identity, speak in one voice, and abide by one set of rules. Mark Zuckerberg says, “Having two identities for yourself is an example of a lack of integrity.” OKCupid says, “We want you […] to go by who you are, and not be hidden beneath another layer of mystique.” Eric Schmidt says, “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”

Bob is apparently expected to use his real name, and use one account, and present a single public face for every possible context and social situation.

Intuitively, Bob knows this is bullshit. We all know it’s bullshit. But in the online world, we’ve just learned to put up with it.

Identity on social networks

Now let’s step away from Bob for a moment and talk about how the rest of us deal with this problem.

One of the common coping mechanisms is to fracture ourselves into different silos. We use Slack to talk to our coworkers, Discord to chat with our gamer buddies, LinkedIn for professional talk, Twitter for talking about the news, Facebook for talking with our family, and so on. Folks who follow this strategy may use a completely different voice in each platform and may not even reveal their real names.

With decentralized social networks, though, the situation gets more interesting. On platforms like Mastodon, Pleroma, GNU Social, postActiv, and others, you sign up for a particular “instance” representing a self-contained community. This community may have its own rules, its own culture, and even its own emojis and theme color.

However, each instance isn’t walled off from the outside world. Instead, it has tendrils that can reach into any other community it may be connected to. It’s less like the isolated silos of Facebook, Twitter, and Slack, and more like a cluster of small towns with some well-traveled roads between them.

Some folks look at Mastodon as a mere replacement for Twitter. Maybe for them it’s Twitter for lefties, or Twitter for anime artists, or Twitter for a nationalist agenda. It’s really none of these things, though, and, importantly, I think it has the potential to address one of the biggest problems with online identity: context collapse.

Context collapse

“Context collapse” was coined by danah boyd, and a good description of it can be found in a blog post by Michael Wesch.

The basic idea is this: when human beings converse in person, we use a wide variety of unwritten rules to govern the acceptable boundaries of the conversation. We pick up on subtle nonverbal cues such as someone’s posture, their hand gestures, and their degree of eye contact to choose the right “register” for the conversation. We might even switch between different dialects of the same language, depending on who we’re talking to (linguists call this “code-switching”).

All of this happens automatically and intuitively, and it’s a valuable tool for avoiding ambiguities and misunderstandings. There are entire subfields of linguistics that study how humans communicate this way, such as pragmatics and sociolinguistics.

Online, we have none of this context. Staring into the webcam or into the textbox on a social media site, we are simultaneously addressing everyone and no one, for now and for all time. Factor in character limits, upload limits, and just the limits of human attention, and this is a ripe environment for misunderstandings. Sarcasm, facetiousness, irony, playful ribbing between friends – all of it can be lost on your audience if they don’t have the proper context to guide them.

Here are some of the symptoms of context collapse that you may have experienced:

  • jokes are misunderstood and taken at face value
  • in-jokes between friends are perceived as harassment by outsiders
  • you accidentally offend someone when no offense was intended
  • something you say is taken out of context and used for dogpiling
  • you feel like you have to censor yourself online
  • you agonize over every character and punctuation mark to avoid misinterpretation

All of these situations can be frustrating or even harmful to your mental health. Consider Justine Sacco’s poorly-received joke that cost her her job and brought her a lot of mental anguish.

Fracturing ourselves into siloed social networks, in its own ham-handed way, offers a solution to this problem. Instead of just hoping that our readers will pick up on the context, we rely on the context granted by the social network itself. Discord is for gamers, LinkedIn is for professionals, Slack is for coworkers, etc.

But on decentralized social networks, we may have a more elegant solution to this problem, and one that doesn’t require locking up our identities into various proprietary silos.

Solving context collapse on decentralized social media

The notion that you should use a single identity online is, I believe, a holdover from the centralized social media sites. Their goal is to get you to reveal as much information about yourself as possible (to sell it to advertisers), so of course they would discourage you from having multiple accounts or from concealing your real name. But that doesn’t mean decentralized social networks need to be the same.

Instead of treating your identity “on Mastodon” or “on the fediverse” as a single entity, what if you had multiple identities on multiple instances, and you treated them as distinct? What might that look like?

I figured this out myself over the past year or so, as I largely split my online identity on Mastodon into two accounts: @nolan@mastodon.social and @nolan@toot.cafe.

The tone of the two accounts is completely different. @nolan@mastodon.social makes silly jokes and mostly writes in all lowercase. @nolan@toot.cafe talks about software, programming, and his day job, and tends to use proper punctuation. (It’s more like the voice of this blog.)

The reason for this split is partly historical. @nolan@mastodon.social tends to speak only in jokes because it was my first Mastodon account, and when I signed up I didn’t reveal my full name or tie it back to my real-world identity. Instead, I just tried to have fun, making as many silly jokes as I could and seeing what landed and what didn’t. I’d say I did fairly well, since at one point I had the most-favorited Mastodon post of all time and got quoted twice in this article on Mastodon.

Screenshot of Mastodon post saying "mom: hey son I joined this new Mastodon thing me: oh shit mom, I coulda helped you find a server, which one did you choose? mom: well I liked the privacy policy on satanic.bikerladi.es but then communist.blaze.party had the shortest ping latency so"

When I set up my own instance, though, things started to get complicated. Now I had an admin account, @nolan@toot.cafe, and I needed to talk about serious admin stuff: when was the server going down for maintenance, what was our privacy policy, what was our moderation policy, etc. So for that account, I switched to my professional voice so that folks could understand that I wasn’t joking or being sarcastic.

But I still had @nolan@mastodon.social for the silly stuff. And the more I used it, the more I found that I liked the split. People who followed me on @nolan@mastodon.social didn’t necessarily care about Mastodon admin topics (memory usage, systemd, Ruby, etc.) – maybe they only followed me for the fun stuff. Likewise, people who followed me on @nolan@toot.cafe maybe just wanted to keep up-to-date on Mastodon admin and development (especially as I started contributing to the Mastodon codebase itself), and didn’t care for the jokes.

Keeping my identities separate thus served a few purposes:

  • I could have fun with people who didn’t know or care about tech topics (e.g. my wife, who loves my jokes but finds tech boring).
  • Nobody had to wonder when I was being sarcastic and when I was being serious.
  • People who followed me for tech stuff didn’t have to put up with my jokes if they didn’t like my sense of humor.
  • People who liked both my jokes and my tech talk could still follow me on both accounts.

This process wasn’t without its hurdles, though. At one point I was using fairly similar avatars for each account:

Screenshot of two Mastodon avatars, one with a subtle coffee icon and one without

My original Mastodon account avatars. Could you tell the difference between the two?

Eventually, though, I got tired of people not picking up on the sarcasm in my joke account. So I switched to an avatar that could only be interpreted as something silly:

Picture of a Mastodon avatar that looks like Ogmo from the Jumper game but handrawn, a silly face with bug eyes

My new, unambiguously “zany” Mastodon avatar.

Taking this avatar seriously would be like arguing with the “We Rate Dogs” account on Twitter. The new avatar makes the intent of the account much clearer.

Am I being duplicitous? No, I link between the two accounts on my profile pages, so everyone can figure out that both accounts are me.

Is it hard to juggle two different accounts? No, I use separate browser tabs, and since toot.cafe has its own theme color, it’s easy for me to remember which site I’m on.

Screenshot of Mastodon toot saying "tfw I have to switch from my joke account to my serious account to boost a toot, to make it clear I am not boosting ironically"

Is it hard for others to know which Nolan to talk to? Well, sort of. When my wife wants to post something like “@nolan said to me today…” she tends to use my @nolan@mastodon.social account because she has more fun interacting with that account than with my serious one. But other than that, I haven’t really run into any problems with this system.

Previously, I was also running a French-language account at @nolan@mastodon.xyz, but I found this a bit hard to manage. I didn’t have a network of French-speaking friends I was regularly talking to, and most of the francophones on Mastodon can speak English anyway. Also, managing three social media accounts was just a bit too much of a time investment for me.

Instance policies and identity

Right now I have a simple two-account system, and my choice of instance for each account is fairly arbitrary. (Although toot.cafe is somewhat tech- and programming-themed, so talking about computery stuff there feels very natural.) However, you could imagine tailoring an account to its home instance, based on the instance’s theme or moderation policies.

Going back to the Bob example, let’s imagine that Bob sets up four instances:

  1. An instance for his son’s little-league team, which is closed off from most of the fediverse via whitelisting and has very strict moderation policies to ensure it stays kid-friendly. All of the parents have moderator accounts.
  2. An instance for his church group. The moderation policies reflect Christian values, and although you can talk openly about your faith, it’s advised to use Content Warnings for controversial topics.
  3. An instance for his vegetarian meetup. You’re encouraged to take pictures of your food, but pictures of meat dishes are strictly off-limits. Vegetarian (non-vegan) food photos are okay, but should be hidden behind a “sensitive” tag.
  4. A free-for-all instance for his drinking buddies. Anything goes, say whatever racy or off-color joke pops into your head, but be aware of the consequences – such as the fact that other instances might not want to federate with you. You might also want to use a pseudonym here instead of your real name.

Bob could create a separate account on all four instances, and he might speak in a very different voice on each of them. If he’s an admin or a moderator, he may even enforce very different policies, and he might choose different instances to block or silence. In fact, one of his instances might even block another! Via whitelisting, the kid-friendly instance certainly blocks the drinking-buddies one.

Bob’s not doing anything wrong here. He’s not a hypocrite. He’s not being deceitful. He’s just taking exactly the same logic that we use in the real world, and applying it to the online world.

Conclusion

It’s unreasonable to expect people to speak in the same voice in every social setting offline, so it’s equally unreasonable to ask them to do it online.

In the world of centralized social networks, users have responded to “real name policies” and “please use one account” by fracturing themselves into different proprietary silos. On decentralized social networks, we can continue fracturing ourselves based on instances, but these disparate identities are allowed to commingle a bit, thanks to the magic of federation.

I don’t expect everyone to use the same techniques I use, such as having a joke account and a serious account. For some people, that’s just too much of an investment in social media, and it’s too hard to juggle more than one account. But I think it’s a partial solution to the problem of context collapse, and although it’s a bit of extra effort, it can pay dividends in the form of fewer misunderstandings, fewer ambiguities, and less confusion for your readers.