The joy and challenge of developing for KaiOS

Photo of my hand holding a Nokia 8110 4G running Pinafore on KaiOS

Recently I spent some time getting Pinafore to run on a KaiOS device. Overall, I found it to be challenging but enjoyable. In this post, I’ll talk about some of the tricks that helped me to work with this curious little feature phone.

Why KaiOS? Well, I guess I’ve always been a bit of a gadget geek. I’ve been developing for Android since the early days of the HTC Dream, I managed to get my Nexus 5 to triple-boot into Android, Firefox OS, and Ubuntu Touch, and I’ve also played around with the Amazon Fire Phone, Windows Phone, iPod Touch… You get the idea.

Recently though, I’ve found the mobile landscape a bit boring. Android and iOS are the two big players, and it’s been that way for nearly a decade. Firefox OS, Blackberry OS, Windows Phone, and other would-be contenders have long since bitten the dust.

That’s why I’m excited about KaiOS. It’s a platform that’s growing surprisingly fast and is actually based on Firefox OS (may it rest in peace). It’s especially popular in India, where it’s already the second-most popular OS after Android.

KaiOS can run regular websites, but it can also run “packaged” apps, à la Firefox OS. So how can an aspiring KaiOS developer get started?

Step one: buying a development phone

I like to test on real devices, because I want to experience my app with real-world hardware and performance constraints, as an actual user would. I decided to get the Nokia 8110 4G because it was available on Amazon for $70.

Note that this phone only supports AT&T in the U.S., so if you plan on using it as your actual personal phone, you may be out of luck.

When you first unbox the phone, you’ll have about 5 minutes’ worth of setup screens, after which you’re ready to go. I’d also recommend going into the settings and upgrading the OS, since mine needed an upgrade right away.

Next steps: development environment

The best resources I’ve found for KaiOS development are the official developer guide and Paul Kinlan’s quick start guide. Paul’s guide is the best place to start, since it’s short and will get you up and running quickly.

Paul says that he could only get WebIDE to work in Firefox 48, but I found that Firefox 59 also worked (per the KaiOS developer guide). So let’s use that.

First you’ll want to download Firefox 59 for your OS (in my case, Ubuntu) via Mozilla’s download site. Sadly I could not get Firefox 59 to run alongside the latest Firefox (which is my go-to browser), and I also found that Firefox 59 tried to aggressively update itself, so every time I reopened it, it would be running the latest version. To work around that, you can run this script:

# remove the previously extracted Firefox and its throwaway profile
rm -fr firefox profile
mkdir profile
# re-extract Firefox 59 and launch it with the fresh profile
tar -xjf firefox-59.0.3.tar.bz2
./firefox/firefox-bin -profile profile

This will delete Firefox’s stored data and restart it with a fresh profile every time. It’s handy for quickly restarting Firefox!

After you input the “secret” code that Paul mentions (*#*#33284#*#*), you should see a little bug icon in the corner:

Screenshot of KaiOS home screen showing a little bug icon in the corner

At this point you should be able to run:

adb devices

…and see your device in the list of devices:

List of devices attached
4939400 device

If adb isn’t running, you can run adb kill-server && adb start-server to restart it.

This didn’t work for me out of the box – I got an error saying it couldn’t connect to my device due to faulty udev rules. Luckily the KaiOS developer guide has a “Setting USB access” script that will fix that.

After running that script, I used Paul’s trick of setting up adb forwarding:

adb forward tcp:6000 localfilesystem:/data/local/debugger-socket

After this, you should be able to connect in Firefox WebIDE by clicking “Remote Runtime.”

Screenshot of Firefox WebIDE with Remote Runtime selected and localhost:6000 entered

One thing I noticed is that occasionally my device would get disconnected from WebIDE. In that case, you can just forward to another port:

adb forward tcp:6001 localfilesystem:/data/local/debugger-socket

…and then reconnect WebIDE to the new port, and it should work.

Next steps: actually writing code

As far as I can tell, KaiOS is based on Firefox 48. You can tell by the user agent string:

Mozilla/5.0 (Mobile; Nokia_8110_4G; rv:48.0) Gecko/48.0 Firefox/48.0 KAIOS/2.5.1

This means that you won’t necessarily have all the latest-and-greatest JavaScript or Web API features. Here’s a brief list of what I found didn’t work:

  • async functions
  • ServiceWorker
  • Intl

As a quick gut-check of whether your app will run on KaiOS, you can try downloading Firefox 48, loading your website, and seeing if it works. (I found this easier than trying to debug on KaiOS right out of the gate.) You can use the same script above to run Firefox 48 with a fresh profile.
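When it comes to transpiling, one hypothetical Babel setup is to pin Firefox 48 as the minimum target (the exact config below is illustrative, not the one the app actually shipped with):

```json
{
  "presets": [
    ["@babel/preset-env", {
      "targets": { "firefox": "48" }
    }]
  ]
}
```

Note that Babel only handles syntax (e.g. async functions), so missing runtime APIs like Intl still need separate polyfills.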

Once your code is sufficiently backwards-compatible (I found that @babel/preset-env with the default settings plus an Intl polyfill was good enough for me as a starting point), you have two options for testing your app:

  • as a packaged app
  • as a hosted app

(If you’ve done Firefox OS development before, you’ll find these options familiar.)

I tried both techniques, but found that the hosted app was easier for quick testing. If your development machine and phone are on the same WiFi network, then you can create a manifest.webapp file per KaiOS’s documentation and then load a URL like so:

http://192.168.1.2:4002/manifest.webapp

Screenshot of WebIDE showing a local manifest URL entered into the Hosted App popup

(If you’re not sure what IP address to use, run ifconfig -a | grep inet.)
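For reference, a minimal manifest.webapp might look something like this (the field values here are illustrative; check KaiOS’s documentation for the exact required fields and icon sizes):

```json
{
  "name": "Pinafore",
  "description": "An alternative web client for Mastodon",
  "launch_path": "/",
  "icons": {
    "56": "/icon-56.png",
    "112": "/icon-112.png"
  },
  "developer": {
    "name": "Example Developer",
    "url": "https://example.com"
  }
}
```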

Then you should see your app listed, and you can run it on the device by clicking the “play” button:

Screenshot of WebIDE showing Pinafore as a hosted app

You can also try the packaged-app route, but I found that it didn’t play well with the routing library in my framework of choice (Sapper) due to the extra /index.html. Plus, if I ever actually submit this app to the KaiStore, I’ll probably go with the hosted route since it’s less maintenance work.

Optimizing for KaiOS

In terms of web development, there are a few good things to be aware of when developing for KaiOS, and in particular for the Nokia 8110 4G.

First off, the available screen size is 240×294 pixels, which you can check by using window.innerWidth in JavaScript, @media (max-width: 240px) in CSS, etc. If you set "fullscreen": true to hide the status bar, you can get slightly more breathing room: 240×320.

I had already optimized my app for the iPhone 4’s screen size (which is 320×480), but I found I had to do extra work to make things show up correctly on an even tinier screen.

Screenshot comparing Pinafore on an iPhone 4 versus a smaller Nokia 8110 4G

The iPhone 4 is already pretty small, but it’s a giant compared to the Nokia 8110 4G.

Next, the input methods are quite limited. If you’ve actually taken the time to make your webapp keyboard-accessible, then you should be most of the way there. But there are still some extra optimizations you may have to make for KaiOS.

By default, the up and down arrows will typically scroll the page up or down. I decided to use the left and right arrows to change the focus – i.e. to act as proxies for the Tab and Shift + Tab keys. I wrote some fairly simple logic to navigate through the focusable elements on the page, taking special care to allow the arrow keys to work properly inside of text inputs and modal dialogs.
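As a rough sketch (the selector list and helper names here are illustrative, not Pinafore’s actual code), the core of that focus-cycling logic is just computing the next index in the list of focusable elements:

```javascript
// Compute which focusable element should receive focus next.
// focusableCount: number of focusable elements on the page
// currentIndex: index of the currently focused element (-1 if none)
// forward: true for the "next" arrow key, false for "previous"
function nextFocusIndex (focusableCount, currentIndex, forward) {
  const delta = forward ? 1 : -1
  // Wrap around at either end of the list
  return (currentIndex + delta + focusableCount) % focusableCount
}

// In a keydown handler, you might wire it up something like this:
// document.addEventListener('keydown', e => {
//   if (e.key !== 'ArrowRight' && e.key !== 'ArrowLeft') return
//   const focusable = [...document.querySelectorAll('a, button, input, [tabindex]')]
//   const idx = focusable.indexOf(document.activeElement)
//   focusable[nextFocusIndex(focusable.length, idx, e.key === 'ArrowRight')].focus()
//   e.preventDefault()
// })
```

The real logic also needs special cases (text inputs, modals), but the wrap-around index calculation is the heart of it.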

I also found that the focus outlines were a bit hard to see on the small screen, so I scaled them up for KaiOS and added some extra styling to make the focus more obvious.

Some apps may opt to display an onscreen cursor (this seems to be the default behavior of the web browser), but I found that to be a bit of a clumsy experience. So I think managing the focus is better.

 

Other KaiOS web developer hurdles

The next most difficult thing about KaiOS is the lack of support for modern web standards. For instance, I couldn’t get SVG <use> tags to work properly, so I fell back to inlining each SVG as a workaround.

CSS Grid is (mercifully) supported, so I didn’t have to backport my Grid logic. Since Grid is not supported in Firefox 48, this hints to me that KaiOS must have tweaked the Firefox OS code beyond what Firefox 48 can support. But I haven’t done the full rundown of all the supported features. (And I still find it helpful to use Firefox 48 as a minimum bar for a quick test target.)

As with all web development, this is the area where you’ll probably have to roll up your sleeves and test for what’s supported and what’s not. But overall I find KaiOS to be much easier to develop for than, say, IE11. It’s much further along on the standards track than a truly legacy browser.

Once you’ve gotten past those hurdles, though, the result can be surprisingly good! I was impressed at how well this tiny feature phone could run Pinafore:

 
(Much of the credit, I’m sure, is due to the excellent SvelteJS framework.)

Conclusion

Building for KaiOS is fun! The device is quirky and lightweight, and it makes me excited about mobile development in a way that I haven’t felt in a while. In terms of development, it feels like a nice compromise between a feature phone with limited hardware (small screen, no touch, limited key input) and a smartphone OS with cutting-edge web technologies (it has CSS Grid! And String.prototype.padStart()! It’s not so bad!).

However, I’m not a huge fan of app stores – I don’t like having to agree to the ToS, going through a review process, and ultimately subjecting myself to the whims of a private corporation. But KaiOS is neat, it’s cute, and it’s growing in developing markets, so I think it’s a worthy venue for development.

And unlike fully proprietary development platforms, you’re not writing throwaway code if KaiOS ever goes away. Since it’s ultimately just a webapp, improvements I’ve made to Pinafore for keyboard accessibility and small screens will accrue to any other device that can browse the web. Other than the manifest.webapp file, there’s nothing truly specific to KaiOS that I had to do to support it.

Overall, I find it fun and refreshing to build for something like KaiOS. It helps me see the web platform from a new perspective, and if nothing else, I’ve got a new gadget to play with.

Browsers, input events, and frame throttling

If there’s one thing I’ve learned about web performance, it’s that you have to approach it with a sense of open-mindedness and humility. Otherwise, prepare to be humbled.

Just as soon as you think you’ve got it all figured out, poof! Browsers change their implementation. Or poof! The spec changes. Or poof! You just flat-out turn out to be wrong. So you have to constantly test and revalidate your assumptions.

In a recent post, I suggested that pointermove events fire more frequently than requestAnimationFrame, and so it’s a good idea to throttle them to rAF. I also rattled off some other events that may fire faster than rAF, such as scroll, wheel, touchmove, and mousemove.

Do these events actually fire faster than rAF, though? It’s an important detail! If browsers already align/throttle these events to rAF, then there’s little point in recreating that same behavior in userland. (Thankfully an extra rAF won’t add an extra frame delay, though, assuming browsers fire the rAF-aligned events right before rAF. Thanks Jake Archibald for this tip!)

TL;DR: it varies across browsers and events. I’d still recommend the rAF-throttling technique described in my previous post.

Step one: check the spec

The first question to ask is: what does the spec say?

After reading the specs for pointermove, mousemove, touchmove, scroll, and wheel, I found that the only mention of animation frame timing was in pointermove and scroll. The spec for pointermove says:

A user agent MUST fire a pointer event named pointermove when a pointer changes coordinates. […] These events may be coalesced or aligned to animation frame callbacks based on UA decision.

(Emphasis mine.) So browsers are not required to coalesce or align pointermove events to animation frames, but they may do so. (Presumably, this is the point of getCoalescedEvents().)
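As an aside, here is roughly how a drawing app might use getCoalescedEvents() to recover the intermediate points when events are coalesced (the helper and parameter names are mine, purely for illustration):

```javascript
// Drain every coalesced point from a pointermove event, falling back
// to the event itself in browsers without getCoalescedEvents().
function handlePointerMove (e, drawPoint) {
  const events = typeof e.getCoalescedEvents === 'function'
    ? e.getCoalescedEvents()
    : [e]
  for (const point of events) {
    drawPoint(point.clientX, point.clientY)
  }
}

// e.g. canvas.addEventListener('pointermove', e => handlePointerMove(e, drawLineTo))
```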

As for scroll, it’s mentioned in the event loop spec, where it says “for each fully active Document […], run the scroll steps for that Document” as part of the steps before running rAF callbacks. So on the main document at least, scroll is definitely supposed to fire before rAF.

For contrast, here’s touchmove:

A user agent must dispatch this event type to indicate when the user moves a touch point along the touch surface. […] Note that the rate at which the user agent sends touchmove events is implementation-defined, and may depend on hardware capabilities and other implementation details.

(Emphasis mine.) So this time, nothing about animation frames, and also some language about “implementation-defined.” Similarly, here’s mousemove:

The frequency rate of events while the pointing device is moved is implementation-, device-, and platform-specific, but multiple consecutive mousemove events SHOULD be fired for sustained pointer-device movement, rather than a single event for each instance of mouse movement.

(Emphasis mine.) So we’re starting to get a pretty clear picture (or a hazy one, depending on your perspective). It seems that, aside from scroll, the specs don’t have much to say about whether events should be coalesced with rAF or not.

Step two: test it

However, this doesn’t mean browsers don’t do it! After all, it’s clearly in browsers’ interests to coalesce these events to animation frames. Assuming that most web developers do the simplest possible thing and handle the events directly, then any browser that aligns with rAF will avoid some unintentional jank from noisy input events.

Do browsers actually do this, though? Thankfully Jake has written a nice demo which makes it easy to test this. I’ve also extended his demo to test scroll events. And because I apparently have way too much free time on my hands (or I just hate uncertainty when it comes to browser stuff), I went ahead and compiled the data for various browsers and OSes:

Browser (OS)                                    pointermove  mousemove  touchmove  wheel  scroll
Chrome 76 (Windows 10)                          Y*           Y*         N/A        Y*     Y
Firefox 68 (Windows 10)                         Y            Y          N/A        N      Y
Edge 18 (Windows 10)                            N            N          N/A        N      Y
Chrome 76 (macOS 10.14.6)                       Y*           Y*         N/A        Y*     Y
Firefox 68 (macOS 10.14.6)                      Y            Y          N/A        N      Y
Safari 12.1.2 (macOS 10.14.6)                   N/A          N          N/A        N      N
Safari Technology Preview 13.1 (macOS 10.14.6)  N            N          N/A        N      N
Chrome 76 (Ubuntu 16.04)                        Y*           Y*         N/A        Y*     Y
Firefox 68 (Ubuntu 16.04)                       Y            Y          N/A        N      Y
GNOME Web 3.28.5 (Ubuntu 16.04)                 N/A          N          N/A        N      N
Chrome 76 (Android 6)                           Y            N/A        Y          N/A    Y
Firefox 68 (Android 6)                          N/A          N/A        Y          N/A    Y
Safari (iOS 12.4)                               N/A          N/A        Y          N/A    N

Abbreviations:

  • Y: Yes, events are coalesced and aligned to rAF.
  • N: No, events fire independently of and faster than rAF.
  • N/A: Event doesn’t apply to this device/browser.
  • *: Except when Dev Tools are opened, apparently.

Conclusion

As you can see from the data, there is a lot of variance in terms of which events and browsers align to rAF, although for the most part it seems consistent within browser engines (e.g. GNOME Web is a WebKit-based browser, and it patterns with macOS Safari). Note though that I only tested a regular mouse or trackpad, not exotic input devices such as a Wacom stylus, Surface Pen, etc.

Given this data, I would take the cautious approach and still do the manual rAF-throttling as described in my previous blog post. It has the upside of being guaranteed to work roughly the same across all browsers, at the cost of some extra bookkeeping. [1]

Depending on your supported browser matrix, though, and depending on when you’re reading this (maybe at a point in the future when all browser input events are rAF-aligned!), then you may just handle the input directly and trust the browser to align it to rAF. [2]

Thanks to Ben Kelly and Jake Archibald for feedback on a draft of this blog post. Thanks also to Jake for clueing me in to this rAF-throttling business in the first place.

Footnotes

1. Interestingly, in the case of pointermove at least, the browser behavior can be feature-detected by checking getCoalescedEvents (i.e. Firefox and Chrome have it, Edge and Safari Technology Preview don’t). So you can use PointerEvent.prototype.getCoalescedEvents as a feature check. But there’s little point in feature-detecting, since manual rAF-throttling doesn’t add an extra frame delay in browsers that already rAF-align.

2. Jake also pointed me to an interesting detail: “Although these events are synced to rendering, they’ll flush if another non-synced event happens.” So for instance, keyboard events will interfere with pointermove and cause them to no longer sync to rAF, which you can reproduce in Jake’s demo by typing on the keyboard and moving the mouse at the same time. Another good reason to just rAF-throttle and be sure!

High-performance input handling on the web

Update: In a follow-up post, I explore some of the subtleties across browsers in how they fire input events.

There is a class of UI performance problems that arise from the following situation: An input event is firing faster than the browser can paint frames.

Several events can fit this description:

  • scroll
  • wheel
  • mousemove
  • touchmove
  • pointermove
  • etc.

Intuitively, it makes sense why this would happen. A user can jiggle their mouse and deliver precise x/y updates faster than the browser can paint frames, especially if the UI thread is busy and thus the framerate is being throttled (also known as “jank”).

Screenshot of Chrome Dev Tools showing that a long frame of 546ms can contain as many as four pointermove events

In the above screenshot, pointermove events are firing faster than the framerate can keep up.[1] This can also happen for scroll events, touch events, etc.

Update: In Chrome, pointermove is actually supposed to align/throttle to requestAnimationFrame automatically, but there is a bug where it behaves differently with Dev Tools open.

The performance problem occurs when the developer naïvely chooses to handle the input directly:

element.addEventListener('pointermove', () => {
  doExpensiveOperation()
})

In a previous post, I discussed Lodash’s debounce and throttle functions, which I find very useful for these kinds of situations. Recently however, I found a pattern I like even better, so I want to discuss that here.

Understanding the event loop

Let’s take a step back. What exactly are we trying to achieve here? Well, we want the browser to do only the work necessary to paint the frames that it’s able to paint. For instance, in the case of a pointermove event, we may want to update the x/y coordinates of an element rendered to the DOM.

The problem with Lodash’s throttle()/debounce() is that we would have to choose an arbitrary delay (e.g. 20 milliseconds or 50 milliseconds), which may end up being faster or slower than the browser is actually able to paint, depending on the device and browser. So really, we want to throttle to requestAnimationFrame():

element.addEventListener('pointermove', () => {
  requestAnimationFrame(doExpensiveOperation)
})

With the above code, we are at least aligning our work with the browser’s event loop, i.e. firing right before style and layout are calculated.

However, even this is not really ideal. Imagine that a pointermove event fires three times for every frame. In that case, we will essentially do three times the necessary work on every frame:

Chrome Dev Tools screenshot showing an 82 millisecond frame where there are three pointermove events queued by requestAnimationFrame inside of the frame

This may be harmless if the code is fast enough, or if it’s only writing to the DOM. However, if it’s both writing to and reading from the DOM, then we will end up with the classic layout thrashing scenario,[2] and our rAF-based solution is actually no better than handling the input directly, because we recalculate the style and layout for every pointermove event.

Chrome Dev Tools screenshot of layout thrashing, showing two pointermove events with large Layout blocks and the text "Forced reflow is a likely performance bottleneck"

Note the style and layout recalculations in the purple blocks, which Chrome marks with a red triangle and a warning about “forced reflow.”

Throttling based on framerate

Again, let’s take a step back and figure out what we’re trying to do. If the user is dragging their finger across the screen, and pointermove fires 3 times for every frame, then we actually don’t care about the first and second events. We only care about the third one, because that’s the one we need to paint.

So let’s only run the final callback before each requestAnimationFrame. This pattern will work nicely:

function throttleRAF () {
  let queuedCallback
  return callback => {
    if (!queuedCallback) {
      requestAnimationFrame(() => {
        const cb = queuedCallback
        queuedCallback = null
        cb()
      })
    }
    queuedCallback = callback
  }
}

We could also use cancelAnimationFrame for this, but I prefer the above solution because it’s calling fewer DOM APIs. (It only calls requestAnimationFrame() once per frame.)
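For comparison, here is what the cancelAnimationFrame version might look like. (This is my own sketch of the alternative, not code from the post above.)

```javascript
// Alternative sketch using cancelAnimationFrame: cancel any pending
// frame, then schedule a fresh one with the latest callback. Note that
// this calls two DOM APIs per event, rather than one rAF per frame.
function throttleRAFWithCancel () {
  let rafId = 0
  return callback => {
    cancelAnimationFrame(rafId) // harmless no-op if rafId is stale or 0
    rafId = requestAnimationFrame(callback)
  }
}
```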

This is nice, but at this point we can still optimize it further. Recall that we want to avoid layout thrashing, which means we want to batch all of our reads and writes to avoid unnecessary recalculations.

In “Accurately measuring layout on the web”, I explore some patterns for queuing a timer to fire after style and layout are calculated. Since writing that post, a new web standard called requestPostAnimationFrame has been proposed, and it fits the bill nicely. There is also a good polyfill called afterframe.
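As a rough illustration (this is my own simplified sketch, inspired by afterframe rather than its actual source), such a polyfill can queue a rAF callback and then use a MessageChannel to schedule a task that runs right after the frame is rendered:

```javascript
// Simplified requestPostAnimationFrame sketch: the rAF callback runs
// just before style/layout/paint, and the MessageChannel task queued
// inside it runs just after, once the frame is "clean".
function requestPostAnimationFrame (callback) {
  requestAnimationFrame(() => {
    const channel = new MessageChannel()
    channel.port1.onmessage = () => {
      channel.port1.close() // clean up the one-shot channel
      callback()
    }
    channel.port2.postMessage(undefined)
  })
}
```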

To best align our DOM updates with the browser’s event loop, we want to follow these simple rules:

  1. DOM writes go in requestAnimationFrame().
  2. DOM reads go in requestPostAnimationFrame().

The reason this works is because we write to the DOM right before the browser will need to calculate style and layout (in rAF), and then we read from the DOM once the calculations have been made and the DOM is “clean” (in rPAF).

If we do this correctly, then we shouldn’t see any warnings in the Chrome Dev Tools about “forced reflow” (i.e. a forced style/layout outside of the browser’s normal event loop). Instead, all layout calculations should happen during the regular event loop cycle.

Chrome Dev Tools screenshot showing one pointermove per frame and large layout blocks with no "forced reflow" warning

In the Chrome Dev Tools, you can tell the difference between a forced layout (or “reflow”) and a normal one because of the red triangle (and warning) on the purple style/layout blocks. Note that above, there are no warnings.

To accomplish this, let’s make our throttler more generic, and create one that can handle requestPostAnimationFrame as well:

function throttle (timer) {
  let queuedCallback
  return callback => {
    if (!queuedCallback) {
      timer(() => {
        const cb = queuedCallback
        queuedCallback = null
        cb()
      })
    }
    queuedCallback = callback
  }
}

Then we can create multiple throttlers based on whether we’re doing DOM reads or writes:[3]

const throttledWrite = throttle(requestAnimationFrame)
const throttledRead = throttle(requestPostAnimationFrame)

element.addEventListener('pointermove', e => {
  throttledWrite(() => {
    doWrite(e)
  })
  throttledRead(() => {
    doRead(e)
  })
})

Effectively, we have implemented something like fastdom, but using only requestAnimationFrame and requestPostAnimationFrame!

Pointer event pitfalls

The last piece of the puzzle (at least for me, while implementing a UI like this), was to avoid the pointer events polyfill. I found that, even after implementing all the above performance improvements, my UI was still janky in Firefox for Android.

After some digging with WebIDE, I found that Firefox for Android currently does not support Pointer Events, and instead only supports Touch Events. (This is similar to the current version of iOS Safari.) After profiling, I found that the polyfill itself was taking up a lot of my frame budget.

Screenshot of Firefox WebIDE showing a lot of time spent in pointer-events polyfill

So instead, I switched to handling pointer/mouse/touch events myself. Hopefully in the near future this won’t be necessary, and all browsers will support Pointer Events! We’re already close.

Here is the before-and-after of my UI, using Firefox on a Nexus 5:

 

When handling very performance-sensitive scenarios, like a UI that should respond to every pointermove event, it’s important to reduce the amount of work done on each frame. I’m sure that this polyfill is useful in other situations, but in my case, it was just adding too much overhead.

One other optimization I made was to delay updates to the store (which trigger some extra JavaScript computations) until the user’s drag had completed, instead of on every drag event. The end result is that, even on a resource-constrained device like the Nexus 5, the UI can actually keep up with the user’s finger!

Conclusion

I hope this blog post was helpful for anyone handling scroll, touchmove, pointermove, or similar input events. Thinking in terms of how I’d like to align my work with the browser’s event loop (using requestAnimationFrame and requestPostAnimationFrame) was useful for me.

Note that I’m not saying to never use Lodash’s throttle or debounce. I use them all the time! Sometimes it makes sense to just let a timer fire every n milliseconds – e.g. when debouncing window resize events. In other cases, I like using requestIdleCallback – for instance, when updating a non-critical part of the UI based on user input, like a “number of characters remaining” counter when typing into a text box.

In general, though, I hope that once requestPostAnimationFrame makes its way into browsers, web developers will start to think more purposefully about how they do UI updates, leading to fewer instances of layout thrashing. fastdom was written in 2013, and yet its lessons still apply today. Hopefully when rPAF lands, it will be much easier to use this pattern and reduce the impact of layout thrashing on web performance.

Footnotes

1. In the Pointer Events Level 2 spec, it says that pointermove events “may be coalesced or aligned to animation frame callbacks based on UA decision.” So hypothetically, a browser could throttle pointermove to fire only once per rAF (and if you need precise x/y events, e.g. for a drawing app, you can use getCoalescedEvents()). It’s not clear to me, though, that any browser actually does this. Update: see comments below, some browsers do! In any case, throttling the events to rAF in JavaScript accomplishes the same thing, regardless of UA behavior.

2. Technically, the only DOM reads that matter in the case of layout thrashing are DOM APIs that force style/layout, e.g. getBoundingClientRect() and offsetLeft. If you’re just calling getAttribute() or classList.contains(), then you’re not going to trigger style/layout recalculations.

3. Note that if you have different parts of the code that are doing separate reads/writes, then each one will need its own throttler function. Otherwise one throttler could cancel the other one out. This can be a bit tricky to get right, although to be fair the same footgun exists with Lodash’s debounce/throttle.

Advice for new bicycle commuters

Photo of my bike and yellow backpack

I started biking to work last summer, around 8-10 miles per day. Overall it’s been a big boon to my health – I’ve lost weight, I feel happier, and I have a new hobby to geek out over. But there are some lessons I wish I had learned earlier, so in this post I’m going to offer some advice to new cyclists.

Note: I’m not a doctor, a physical therapist, or even a big expert on cycling. So take my advice with a grain of salt.

Gear

You don’t need a lot of expensive gear to be a bike commuter. I used a 20-year-old hand-me-down mountain bike for my first few months, before I decided I was serious enough to want an upgrade.

The most important thing is safety: flashing lights, bright colors. If you’re going to be biking alongside traffic, you want to be visible in all conditions: daytime, nighttime, dusk, etc. If you don’t have a brightly-colored backpack, get a backpack cover that’s bright yellow with reflective strips. As a bonus, it will keep your backpack dry.

The second most important thing is comfort. I really like the dorky bike shorts with padding on bottom, because they reduce saddle sores, but you can also just use regular gym shorts. You don’t need special bike shoes, but you may want to use a different pair than you wear throughout the day, because they will get sweaty and stinky. In the winter, make sure to have gloves, a scarf, and a hat that will fit under your helmet.

Safety

I’ll say it again: flashing lights, bright colors. Some cars will treat you with respect, but a lot of drivers are just distracted or lazy and won’t see you coming. Deck yourself out like the yellow Power Ranger, and even then don’t assume that cars will see you.

Traditionally there are hand signals for left turns and right turns. I don’t use the “bent elbow” left-hand signal for right turns, because I assume that no driver has seen that signal since the 1930s. You’re better off just pointing with your right hand.

If you’re on a road without a bike lane, or if there’s barely any shoulder, then try to make it clear that you don’t want to be passed. Otherwise, if you keep too far to the right, drivers will take it as an invitation to pass you, even if you’re just inches away from their side mirror. Ride in the middle of the lane if you have to. Better to slow down someone’s commute than to be roadkill.

Look behind you when you merge left. Practice it if it’s hard to ride a straight line while doing so. I prefer this to using a rearview mirror, because it makes it really obvious to drivers that I’m craning my neck and so they should watch my movement.

If you can alter your route to include more dedicated bike paths and bike trails, then do it. Adding a few extra minutes to your commute is worth the peace of mind. Plus, you’re trying to get some exercise, right?

A bike path in the woods

Health

Biking has a lot of health hazards, but I’m just going to talk about the ones that affected me.

If you get drop bars, make sure you’re using them correctly. I made the mistake of over-using the lower position (“I’m going so fast! I’m a real cyclist now!”), and it really did a number on my wrists. I ended up with a lot of chronic wrist pain until I learned to do it right. It’s better now, but I can’t do push-ups anymore.

Basic advice: use the middle “Atari joystick” position for 90% of your ride, use the upper position on uphills (to lean back and get bigger lungfuls of air), and only use the lower position on downhills, when you want more control over your brakes and a bit more speed. Try to work your core muscles so you’re not putting weight on your wrists, and switch up your position occasionally.

Get a bike fitting. Yes, it can be expensive (mine ran $150), but it’s way cheaper than physical therapy. They’ll adjust your saddle height, your handlebar position, and everything else for maximum comfort. This can prevent all sorts of back and wrist pain. I wish I had done it earlier.

Have fun

Keep at it. Drink lots of water. Don’t feel bad when you get passed by 60-year-old dudes with gray beards – instead, think about how someday you could be that silver fox!

The other cyclists you see on the road are, statistically speaking, likely to be more hardcore than you. They probably spend a lot more time cycling – hence why you see them. So don’t feel bad about getting passed.

If there’s a particularly nasty uphill on your route, just think to yourself, “It gets a little easier every day.” Because the truth is, it does! Pretty soon you’ll be shaving time off your commute, and you may even find that it’s faster than other modes of transportation. (For me, cycling actually beats the bus, especially in bad traffic.)

So that’s it for my newbie cycling advice. Cycling is fun, it’s good for your health, and it’s good for the environment. Plus, the more you normalize it, the more you encourage other people to brave the car traffic and try cycling.

Mid-2019 book review

Photo of books on a desk

The news from this year’s book review is that I have belatedly decided I’m a fantasy fan. Even though I had read The Lord of the Rings as a teenager and the entire Song of Ice and Fire series (including the “Dunk and Egg” prequels) in my 20s, I still somehow thought of myself as “above” the glossy paperbacks with their scowling wizards and soaring pegasi. Well, the veil of self-delusion has lifted. Bring on the pegasi.

The other news is that I’m breaking 2019’s book review into two posts. There are just too many books to cover. (Famous last words! My reading velocity is going down as the summer starts to heat up.)

Fiction

The Last Unicorn by Peter S. Beagle

A haunting, beautiful book. I’d never considered myself a big fantasy fan, but somehow this one really stuck with me. I decided to read it because of this Atlantic article, and I’m glad I did.

I think what sets this one apart is that, while books like Harry Potter or the Narnia series are about childhood and its relationship with fantasy, The Last Unicorn is about growing up and growing away from fantasy. In the book, people have forgotten about unicorns or can’t see their horns. Some of them look upon the unicorn and start crying even if they don’t know why.

The book is ultimately about loss – loss of childhood, loss of innocence, loss of childish fantasies – as well as regret. It’s a very profound and moving book. Oh, and the author has a real gift for language; the book is filled with beautiful poetry to paint its fantasy world. Anyway, read it.

The Magicians Trilogy by Lev Grossman

I decided after reading The Last Unicorn that I should start taking fantasy books a bit more seriously. So I picked up The Magicians, and quickly devoured all three books in the trilogy. The whole series is great, although for slightly different reasons than Unicorn.

At first glance, Magicians comes across as a mash-up between Harry Potter and Narnia, but with some decidedly adult elements thrown in. At times I burst out laughing at the incongruity between the magical situations that the characters found themselves in and their wry commentary on it. These are fantasy novels for people who think fantasy novels are a bit silly.

In the end, though, I think Magicians is actually closest in theme and tone to The Dark Tower by Stephen King. It has the same sense of taking old fables and tropes and turning them into something gritty and believable. It’s a fantasy world seen through a dark, ironic lens. But it’s also a great piece of storytelling. Well worth the read.

On the Beach by Nevil Shute

I needed one last piece of post-apocalyptic fiction for the road, before switching over to the wizards and unicorns.

This one tells a good story, although ultimately I don’t find its depiction of a world waiting to die very believable. I just find it hard to imagine that, in the face of a nuclear dust-cloud descending inexorably towards Australia, an entire continent would decide to go the “stiff upper lip” route and carry on as usual, pretending Armageddon wasn’t on its way.

Societal breakdown and anarchy seem more likely to me, although I guess that might be hindsight talking. This book was written in 1957, well before post-apocalyptic fiction had really settled into its groove and the Mad Max-style mohawked warlords had become staples of the genre. So it gets points for trying – I’m sure this book spooked a lot of people back in the days when fallout shelters and “duck and cover” drills were still a thing.

Radicalized by Cory Doctorow

I loved this book. I’ve been reading Cory Doctorow’s blog posts for years, and I’m surprised at what an effective storyteller he is. Think Black Mirror, but funnier and less bleak.

The two short stories that stood out the most to me were the first – about a toaster oven that refuses to toast “unlicensed” bread – and the third – about cancer survivors radicalized by an online forum.

The first story in particular feels plausible in a disturbing way, and it cuts to the core of some of the concerns I’ve expressed about the ways that technology can be used to take more power away from those who are already powerless. For instance, consider this (true) story about renters in Brooklyn who are unable to stop their landlord from installing face-recognizing cameras. This story shows that Doctorow’s DRM toaster isn’t so much a vision of the future as it is an extrapolation of present trends. Which is what good science fiction is all about.

Nonfiction

Eating Animals by Jonathan Safran Foer

A central fact about eating meat is that it’s much easier if you forget where it comes from. With this book I forced myself to take a hard look at where it comes from, and I found the results to be disturbing and appalling.

I don’t think it’s unnatural for humans to eat meat (my incisors are proof of that), but I do think that the modern factory farming system is immoral. It’s a form of industrial cruelty, systematized and magnified on a monstrous scale. Anyone who has owned a pet wouldn’t want it to experience even one minute of what these animals have to suffer every day of their lives.

If humans still did animal husbandry the old-fashioned way, on small-scale farms where the animals could live more-or-less decent lives, then I wouldn’t have a problem with eating animal products. What bothers me isn’t the way they die – it’s the way they live. Reading about the lives of egg-laying chickens and pigs in factory farms activates all my moral instincts and says in no uncertain terms, This is wrong. In fact, I think most meat-eaters would consider it wrong too, which is why they try to push it out of their minds.

I’d love to say that, after reading this book, I went fully vegan and never looked back. The truth is that I gave it a shot for a few weeks, found it too difficult, and then settled into a quasi-vegetarian/pescetarian thing, which is what I’ve been doing for the past decade or so anyway.

The main difference is that I have a better sense now of what kinds of foods actually reduce animal suffering. For instance: less dairy, more wild-caught fish. (I know; it’s surprising. Read the book.)

My relationship with food is still complicated, but at least this book has brought some facts and numbers to inform my decisions.

The Uninhabitable Earth: Life After Warming by David Wallace-Wells

I’ll admit: as recently as 2010, I probably would have described myself as a “climate change skeptic.” Not because I doubted the science (the consensus was already clear by then) but because I questioned whether the economic cost of combating climate change would outweigh the benefits of preventing it. India and China were rapidly developing – who was I to say that poor people in Kerala should live without air conditioning?

Like most everybody else, though, I’ve come around to the massive challenge posed by climate change. Living through two summers of forest-fire smoke in Seattle, where people wore protective masks and the sky looked like a hazy Martian sunset, certainly helped change my mind. As did this book.

Before The Uninhabitable Earth I had also started reading Carbon Ideologies by William T. Vollmann. They’re good books, but honestly they’re so long and dense and meandering that it’s hard to recommend them to anyone but the convinced climate activist. If you’re really interested in the physics, the numbers, and the nitty-gritty, then these books are for you.

Wallace-Wells’s book is different. It’s short, it’s punchy, and it encourages you to actually envision a world after global warming, and to let it hit you at a gut level. I imagine a book like this will inspire some great science fiction (cli-fi?), which might do more to get people to care about climate change than all the facts and figures in the world. So for that, it’s a book I strongly recommend.

The Sixth Extinction by Elizabeth Kolbert

Kolbert’s book is another eye-opening look at humanity’s relationship with nature and where we fit into the grand arc of geologic history. I’ve had a longstanding interest in paleontology, and I find Kolbert’s defense of the Anthropocene (which is what this book amounts to) very compelling.

One thing that always puzzled me about climate change was why one or two degrees of average temperature, or a few hundred extra parts per million of carbon dioxide in the atmosphere, would really be such a catastrophic event for humanity. Wasn’t the carbon dioxide level several times higher during the Mesozoic? Why would such comparatively small numbers be our death knell?

What this book makes clear is that it’s not so much about the absolute numbers as the rate of change. Earth has recovered from rapid changes before (such as an unlucky rendezvous with an asteroid), but the recovery always takes a long time. Like, “longer than the human species has been around” long. Are we willing to trade 300 years of indulgence in fossil fuels for hundreds of thousands of years of getting the Earth’s ecosystem back on track?

If Wallace-Wells’s book hasn’t already bummed you out too much, then you should definitely pick this one up. It certainly helps put things in (geologic) perspective.

How to de-Google your Android phone

First, download a ROM from this Russian message board. It’s okay! You can totally verify the GPG signature. Allow yourself 30 minutes to remember how GPG works, then verify that forum poster LeetAndrej420 has indeed signed the file.

Next, root your Android phone. You will need to hold the volume-up and power buttons for ten seconds, then unplug from USB, then reboot a few times after you mess it up, then give up and download the Android dev tools.

After you figure out the adb and fastboot commands, you should see a friendly UI with green Courier text on a black background. Press the button that says, “I void my warranty and completely exonerate the OEM in the likely event that I am actually pwning myself by installing random software from the internet onto a tracking device I carry in my pocket every day.” But it’s okay. You trust Andrej, right?

Next you will need to install the “recovery” tool. Despite the name, this is actually the best way to brick your device. Luckily it is incredibly feature-rich, boasting 12 buttons on the home screen, including an “Advanced” button containing more buttons. These buttons will invite you to do things like “clear the Dalvik/ART cache,” which you totally know what that means.

When you download the recovery tool, make sure you get the right version for your phone! Of course, it’s not named after your phone’s brand name, but rather a cheeky internal name chosen by the OEM, like “bacon”, “cheeseburger”, or “mahimahi”. The professionalism on display from all parties should fill you with confidence.

You will download the recovery tool from a site called SickWarez.biz. Use GPG to ensure that it’s signed by Andrej.

Once downloaded, go into recovery mode and install the ROM, being careful to press the one correct button out of 12, like a game of Minesweeper that will brick your phone if you lose. This will also factory-reset your device, which is fine because all your photos and contacts are backed up to your Google account… ah, right. You’ll want to do something about that.

Assuming you have successfully installed the ROM without turning your phone into a $700 doorstop, you can now install apps. Thankfully there is F-Droid, which hosts all your favorite open-source apps. Wait, your favorite apps aren’t open-source? Well, at least it has Signal. Wait, it doesn’t have Signal?

Once you’ve installed the Yalp Store, which sideloads apps from Google Play in a way that may or may not be totally illegal and will get blocked by Google once they read this blog post and realize that it exists, you can now download some actually useful apps.

Thankfully, though, your personal data will be safe and secure from third-party developers, because these apps will not work. Be prepared for error messages like, “Please install Google Maps,” “Google Play Services required,” or “What kind of sicko has a Google phone without Google? What is wrong with you?”

After all this ceremony, you can now relax and enjoy your Google-free Android device. Note, though, that weather widgets, GPS, push notifications, and the majority of Android apps you rely on will not work. That said, there are some great note-taking apps! Plus SMS will still work. Good old SMS.

So now that you’ve successfully turned your $700 Android device into a glorified $30 Nokia flip phone, which may or may not be siphoning your passwords to a Ukrainian teenager, you can finally have a Google-free smartphone experience. Or you could just buy an iPhone.

One year of Pinafore

Screenshot of Pinafore showing a compose input

Pinafore is a standalone web client for Mastodon, which recently hit version 1.9.0 with a number of notable new features.

It’s been about a year since I first launched Pinafore. So I’d like to reflect on where the project came from, and where I hope to take it.

Background

In 2017, I was in a funk. I had stopped contributing to the PouchDB project largely due to burnout, and for various reasons I eventually left my job at Microsoft. In the meantime, I had become enamored of Mastodon and even contributed to it, but I was feeling restless and looking for a new project.

The Mastodon codebase is extremely well-written. I’m convinced that Eugen Rochko is some kind of savant. However, I never took much of a liking to React, and I found it difficult to fix some fundamental problems in the Mastodon UI, such as offline support or the occasionally jerky scrolling. I also really missed the single-column layout of Twitter (I was never a Tweetdeck fan).

So the idea came to me to create my own Mastodon web client. I had been working on websites for years, but aside from some small prototypes, I had never built an entire web app by myself. This was an opportunity to test some of my ideas about how a web app “should” be, leveraging my experience in web performance and standards. Also, I wanted to teach myself about accessibility, which I had never really studied before.

I knew I wanted to use Svelte, because I agreed with Rich Harris and Tom Dale that JavaScript frameworks should focus less on being runtime APIs and more on being compilers. Incidentally, I was at the same talk by Jed Schmitt that Rich mentions in this video, and it blew my mind as much as it blew his. (The difference between Rich and me is that he actually went off and built a whole framework based on it!)

I started working on Pinafore at the end of December 2017, and released it in April 2018. So after 18 months of development, I’d like to consider where Pinafore has done well and where it can improve.

Success metrics

Pinafore doesn’t have any trackers on it, so I don’t know how many people are using it. Sure, I could use a privacy-respecting tracker like Fathom, but the Mastodon community is pretty allergic to any kind of tracking, so I’ve been hesitant to add it. In any case, I don’t really care, because I would work on Pinafore regardless of how many people are using it.

However, I do get a trickle of questions and bug reports about Pinafore, and the #Pinafore hashtag is pretty active. I’ve also heard from several folks that it’s their preferred Mastodon interface. The reasons they give are usually one of the following:

  • Accessibility: I’ve focused a lot on making Pinafore work well with keyboard navigation and screen readers. (Marco Zehe‘s guidance really helped!)
  • Design: the single-column layout of Pinafore is a key differentiator from the Mastodon frontend (although not for long).
  • Instance-switching: people who juggle multiple accounts on different instances don’t necessarily want one browser tab for each.

My favorite user testimonial, though, is from my wife. She told me, “I like Pinafore because it never loses my place in the timeline.” (Much of my motivation for working on Pinafore can be credited to “wife-driven development” – I like making her happy!)

So this confirms that I’ve achieved at least some of the goals from the Pinafore introductory blog post. Notably, though, offline support is rarely mentioned – I’ll get to that later.

Collaboration

Pinafore has also benefited from a lot of community contributions. To everyone who has contributed: thank you so much!

There are some challenges with building a dev community around Pinafore. The app is implemented using Svelte v2 and Sapper, which unfortunately creates two onboarding hurdles: 1) Svelte isn’t a very well-known framework, and 2) Svelte v2 is incompatible with Svelte v3, with no upgrade path currently.

I’ll have to continue grappling with these challenges, but for now I’m very satisfied with Svelte v2. It’s fast, lightweight, and does everything I need it to. So I’m not in a big hurry to upgrade.

And oh yeah: Svelte really is lightweight. Pinafore only loads 32KB of compressed JavaScript for the landing page, and 137KB for the Home timeline. The total size of all JS assets is under 300KB compressed (<1MB raw). It gets a perfect 100 score from Lighthouse.

Screenshot of Lighthouse showing perfect 100 score in all categories, including Performance, Accessibility, Best Practices, and SEO

If you didn’t think I was going to brag about web perf vanity metrics, then you don’t know me very well.

Future plans

My first goal with Pinafore is completeness. Even though I’ve been working on it for over a year, there are still plenty of missing features compared to the Mastodon frontend. And although the gap has been narrowing, Mastodon itself hasn’t stopped innovating, so there’s always new stuff to add. (Polls! Blurhash! Keybase! Does Eugen ever sleep?)

Beyond that, I’d like to start focusing on features that make Pinafore a more pleasant social media experience. One of the virtues of decentralized social media is that we can experiment with features that give people control over their social media experience, even if it hampers addictiveness or growth. To that end, I’ve added a set of wellness features, inspired by Tristan Harris’s Center for Humane Technology. I’ll probably tweak and expand these features as feedback rolls in.

I’d also like to improve offline support. Even though Pinafore does have an offline mode, and even though it uses a Service Worker to cache static assets, it’s not very offline-first. Instead, it uses offline storage more as a fallback for when the network fails, rather than as the primary source of truth.

Given my background working on offline-first technology and advocating for it, I find this a bit disappointing. But it turns out that it’s really difficult to implement an offline-first social media UI. How do you deal with offline writes? How do you handle the gap between fresh content and stale content within the same timeline? These are not easy questions, and for the most part I’ve punted on them. But Pinafore can do better.
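To make the distinction concrete, here’s a rough sketch of the two read strategies in plain JavaScript. The `fetchRemote` and `cache` helpers are hypothetical stand-ins, not Pinafore’s actual internals – this is just to illustrate the shape of the problem:

```javascript
// Network-first: offline storage is only a fallback (roughly what Pinafore does today).
async function networkFirst(key, fetchRemote, cache) {
  try {
    const fresh = await fetchRemote(key);
    await cache.set(key, fresh); // keep the fallback copy warm
    return fresh;
  } catch (err) {
    const stale = await cache.get(key); // network failed: fall back to storage
    if (stale !== undefined) return stale;
    throw err;
  }
}

// Cache-first (offline-first): storage is the primary source of truth,
// and the network only revalidates it in the background.
async function cacheFirst(key, fetchRemote, cache) {
  const stale = await cache.get(key);
  const refresh = fetchRemote(key)
    .then((fresh) => cache.set(key, fresh))
    .catch(() => {}); // offline: silently keep showing stale data
  if (stale !== undefined) return stale; // render cached copy immediately
  await refresh; // first run: nothing cached yet, so wait for the network
  return cache.get(key);
}
```

The subtle part is the cache-first branch: once stored data becomes the primary source of truth, you have to decide how and when to reconcile it with fresh content – which is exactly the stale-versus-fresh timeline question above.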

Conclusion

Pinafore is a passion project for me. It gives me something interesting to do on weekends and evenings, and it teaches me a lot about how the web platform works.

I also see Pinafore as an opportunity to provide more options to the Mastodon community, and to prove that you don’t have to treat Eugen as a gatekeeper for every minor UI tweak you’d like to see in Mastodon. Mastodon is decentralized; let’s decentralize the interface!

I have every intention to keep working on Pinafore, and I’m curious to know where you think it should go next.