In my mind, Michael Crichton's Jurassic Park marks the last time object-oriented programming was cool. Dennis Nedry, the titular park's sole computer engineer, adds a backdoor to the system disguised as "a block of code that could be moved around and used, the way you might move a chair in a room." Running whte_rabt.obj as a shell script turns off the security systems and electric fences, kicking off the major crisis that drives the novel forward. Per usual for Crichton, this is not strictly accurate, but it is entertaining.
(Crichton produced reactionary hack work — see: Rising Sun, Disclosure, and State of Fear — roughly as often as he did classic high-tech potboilers, but my favorite petty grudge is in The Lost World, the cash-grab sequel to Jurassic Park, which takes a clear potshot at the "This is a UNIX system, I know this!" scene from Spielberg's film: under siege by dinosaurs, a young woman frantically tries to reboot the security system before suddenly realizing that the 3D graphics onscreen would require a high-bandwidth connection, implying — for some reason — a person-sized maintenance tunnel she can use as an escape route. I love that they can clone dinosaurs, but Jurassic Park engineers do not seem to have heard of electrical conduits.)
In the current front-end culture, class-based objects are not cool. React is (ostensibly) functional wherever possible, and Svelte and Vue treat the module as the primary organizational boundary. In contrast, web components are very much built on the browser platform, and browsers are object-oriented programs. You just can't write vanilla JavaScript without using new, and I've always wondered if this, as much as anything else, is the reason a lot of framework authors seem to view custom elements with such disdain.
Last week, I wrote about slots and shadow DOM as a way to build abstract domain-specific languages and expressive web components. In this post, I want to talk about how base classes and inheritance can smooth out their rough edges, and help organize and arrange the shape of your application. Call me a dinosaur (ha!), but I think they're pretty neat.
Criticisms of custom elements often center around the amount of code that it takes to write something fairly simple: comparing the 20-line boilerplate of a completely fresh web component against, say, a function with some JSX in it. For some reason, these comparisons never discuss how that JSX is transpiled and consumed by thousands of lines of framework dependencies — that's just taken for granted — or that some equivalent could also exist for custom elements.
That equivalent is your base class. Rather than inheriting directly from HTMLElement, you inherit from a middleware class that extends it, and fills in the gaps that the browser doesn't directly provide. Almost every project I work on either starts with a base element, or eventually acquires one. Typically, you'll want to include: shadow DOM and template setup, attribute/property reflection, and method binding for event handlers.
If you don't feel capable of providing these things, or you're worried about the maintenance burden, you can always use someone else's. Web component libraries like Lit or Stencil basically provide a starter class for you to extend, already packed with things like reactive state and templating. Especially if you're working on a really big project, that might make sense.
But writing your own base class is educational at the very least, and often easier than you might think, especially if you're not working at big corporate scale. In most of my projects, it's about 50 lines (which I often copy verbatim from the last project), and you can see an example in my guidebook. The templating is the largest part, and the part where just importing a library makes the most sense, especially if you're doing any kind of iteration. That said, if you're mostly manipulating individual, discrete elements, a pattern I particularly like is:
class TemplatedElement extends HTMLElement {
  elements = {};

  constructor() {
    super();
    // get the shadow root
    // in other methods, we can use this.shadowRoot
    var root = this.attachShadow({ mode: "open" });
    // get the template from a static class property
    var { template } = new.target;
    if (template) {
      root.innerHTML = template;
      // store references to marked template elements
      for (var element of root.querySelectorAll("[as]")) {
        var name = element.getAttribute("as");
        this.elements[name] = element;
      }
    }
  }
}
From here, a class extending TemplatedElement can set a string as the static template property, which will then be used to set up the shadow DOM on instantiation. Any tag in that template with an "as" attribute will be stored on the elements lookup object, where we can then add event listeners or change its content:
class CounterElement extends TemplatedElement {
  static template = `
    <div as="counter">0</div>
    <button as="increment">Click me!</button>
  `;

  #count = 0;

  constructor() {
    // run the base class constructor
    super();
    // get our cached shadow elements
    var { increment, counter } = this.elements;
    increment.addEventListener("click", () => {
      counter.innerHTML = ++this.#count;
    });
  }
}
It's simple, but it works pretty well, especially for the kinds of less-intrusive use cases that we're seeing in the new wave of HTML components.
For the other base class responsibilities, a good tip is to try to follow the same API patterns that are used in the platform, and more specifically in JavaScript in general (a valuable reference here is the Web Platform Design Principles). For example, when providing method binding and property reflection, I will often build the interface for these as arrays assigned to static properties, because that's the pattern already being used for observedAttributes:
class CustomElement extends BaseClass {
  static observedAttributes = ["src", "controls"];
  static boundMethods = ["handleClick", "handleUpdate"];
  static reflectedAttributes = ["src"];
}
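To give a sense of how a base class might consume those static arrays, here's a minimal sketch. The property names match the snippet above, but the implementation is just one reasonable approach, not a prescription:
class BaseClass extends HTMLElement {
  constructor() {
    super();
    // static properties are available through new.target, as in TemplatedElement above
    var { boundMethods = [], reflectedAttributes = [] } = new.target;
    // bind listed methods so they can be passed directly as event listeners
    for (let method of boundMethods) {
      this[method] = this[method].bind(this);
    }
    // mirror listed attributes as getter/setter properties
    for (let attribute of reflectedAttributes) {
      Object.defineProperty(this, attribute, {
        get: () => this.getAttribute(attribute),
        set: (value) => this.setAttribute(attribute, value)
      });
    }
  }
}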
I suspect that once decorators are standardized, they'll be a more pleasant way to handle some of this boilerplate, especially since a lot of the web component frameworks are already doing so via TypeScript. But if you're using custom elements, there's a reasonable chance that you're interested in no-build (or minimal build) systems, and thus may want to avoid features that currently require a transpiler.
But if you're designing larger applications, then your components will need to interact with each other. And in that case, a class is not just a way of isolating some DOM code, it's also a contract between application modules for how they manage state and communication. This is not new or novel — it's the foundation of model-view-controller UI dating back to Smalltalk — but if you've learned web development in the era since Backbone fell out of popularity, you may have never really had to think about state and interaction between components, as opposed to UI functions that all access slices of a common state store (or worse, call out to hooks and magically get served state from the aether).
Here's an example of what I mean: the base class for drawing instructions in Tarot, Chalkbeat's social media image generator, does the normal templating/binding dance in its constructor. It also has some utility methods that most canvas operations will need, such as converting between normalized coordinates and pixels or turning variable-length CSS padding strings into a four-item array. Finally, it defines a number of "stub" methods that subclasses are expected to override.
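A rough sketch of that shape, assuming a base like the TemplatedElement above (the utility names here are hypothetical; only draw() and getLayout() come from the app as described below):
class Brush extends TemplatedElement {
  // convert normalized 0-1 coordinates into canvas pixels
  denormalize(x, y, canvas) {
    return [x * canvas.width, y * canvas.height];
  }

  // expand a CSS-style padding string into a [top, right, bottom, left] array
  parsePadding(value) {
    var [top, right = top, bottom = top, left = right] = value.trim().split(/\s+/).map(Number);
    return [top, right, bottom, left];
  }

  // stubs that subclasses are expected to override
  getLayout(context) {
    return { width: 0, height: 0 };
  }

  draw(context) {
    // leaf brushes paint to the canvas here; layout brushes recurse into their children
  }
}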
When Tarot needs to re-render the canvas, it starts at the top level of the input form, loops through each direct child, and calls draw(). Some instructions, like images or rectangle fills, render immediately and exit. The layout brushes, <vertical-spacer> and <vertical-stack>, first call getLayout() on each of their children, and use those measurements to apply a transform to the canvas context before they ask each child to draw. Putting these methods onto the base class in Tarot makes the process of adding a new drawing type clear and explicit, in a way that (for me) the "grab bag of props" interface in React does not.
Two brushes actually take this a little further. The <series-logo> and <logo-brush> elements don't inherit directly from the Brush base class, but from a specialized subclass of it with properties and methods for storing and tinting bitmaps. As a result, they can take a single-color input PNG and alter its pixels to match any of the selected theme colors while preserving alpha, which means we can add new brand colors to the app without having to generate all new logo art.
Planning the class as an API contract means that when they're slotted or placed, we can use duck-typing in our higher-level code to determine whether elements should participate in a given operation, by checking whether they have a method name that matches our condition. We can also use instanceof to check if they have the required base class in their prototype chain, which is more strict.
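In code, that's about as simple as it sounds. A hedged sketch, reusing the hypothetical Brush class and draw() method from above:
// duck-typing: anything with a draw() method participates in rendering
function renderChildren(container, context) {
  for (var child of container.children) {
    if (typeof child.draw == "function") {
      child.draw(context);
    }
  }
}

// or, more strictly, require the Brush base class in the prototype chain
function getBrushes(container) {
  return [...container.children].filter(child => child instanceof Brush);
}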
It's worth noting that this approach has its detractors, and has for a (relatively) long time. In 2015, the React team published a blog post claiming that traditional object-oriented code inherently creates tight coupling, and the code required grows "as the square of the number of possible states of the component." Personally I find this disingenuous, especially when you step back and think about the scale of the infrastructure that goes into the "easier" rendering method it describes. With a few small changes, it'd be indistinguishable from the posts that have been written discounting custom elements themselves, so I guess at least they're consistent.
As someone who cut their teeth working in ActionScript 3, it has never been obvious to me that stateful objects are a bad foundation for creating rich interfaces, especially when we look at the long history of animation libraries for React — eventually, every pure functional GUI seems to acquire a bunch of pesky escape hatches in order to do anything useful. Weird how that happens! My hot take is that humans are messy, and so code that interacts directly with humans tends to also be a little messy, and trying to shove it into an abstract conceptual model is likely to fail in frustrating ways. Objects are often untidy, but they give us more slack, and they're easier to map to a mental model of DOM and state relationships.
That said, you can certainly create bad class code, as the jokes about AbstractFactoryFactoryAdapter show. I don't claim to be an expert on designing inheritance — I've never even drawn a UML diagram (one person in the audience chuckles, glances around, immediately quiets). But there are a few basic guidelines that I've found useful so far.
Remember that state is inspectable. If you select a tag in the dev tools and then type $0.something in the console, you can examine a JS value on that element. You can also use console.dir($0) to browse through the entire thing, although this list tends to be overwhelming. In Chrome, the dev tools can even examine private fields. This is great for debugging: I personally love being able to see the values in my application via its UI tree, instead of needing to set breakpoints or log statements in pure rendering functions.
Class instances are great places for related platform objects. When you're building custom elements, a big part of the appeal is that they give you automatic lifecycle hooks for the section of the page tree that they wrap. So this might be obvious, but use your class to cache references to things like Mutation Observers or drawing contexts that are related to the DOM subtree, even if they aren't technically its state, and use the lifecycle to set them up and tear them down.
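For example, here's a sketch (not from any particular project) of an element that owns a MutationObserver and ties it to the connection lifecycle:
class ObservedElement extends HTMLElement {
  // the observer lives with the element, not in module-level state
  observer = new MutationObserver(() => this.render());

  connectedCallback() {
    this.observer.observe(this, { childList: true, subtree: true });
    this.render();
  }

  disconnectedCallback() {
    this.observer.disconnect();
  }

  render() {
    // respond to changes in the element's DOM subtree here
  }
}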
Use classes to store local state, not application state. In a future post, I want to write about how to create vanilla code that can fill the roles of stores, hooks, and other framework utilities. The general idea, however, is that you shouldn't be using web components for your top-level application architecture. You probably don't need <application-container> or <database-connection>. That's why you...
Don't just write classes for your elements. In my podcast client, a lot of the UI is driven by shared state that I keep in IndexedDB, which is notoriously frustrating to use. Rather than try to access this through a custom element, there's a Table class that wraps the database and provides subscription and manipulation/iteration methods. The components in the page use instances of Table to get access to shared storage, and receive notification events when something else has updated it: for example, when the user adds a feed from the application menu, the listing component sees that the database has changed and re-renders to add that podcast to the list.
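The actual Table class is more involved, but the general contract looks something like this sketch (the method names are hypothetical, and the IndexedDB plumbing is elided):
// a shared wrapper that components subscribe to, instead of touching IndexedDB directly
export class Table extends EventTarget {
  constructor(name) {
    super();
    this.name = name;
  }

  async set(key, value) {
    // ...IndexedDB write goes here...
    this.dispatchEvent(new CustomEvent("update", { detail: { key, value } }));
  }
}

// in a component: re-render whenever shared storage changes
// table.addEventListener("update", () => this.render());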
Be careful with property/method masking. This is far more relevant when working with other people than if you're writing software for yourself, but remember that properties or methods that you create in your class definitions will supplant any existing fields that exist on HTMLElement. For example, on one project, I stored the default slot for a component on this.slot, not realizing that Element.slot already exists. Since no code on the page was checking that property, it didn't cause any problems. But if you're working with other people or libraries that expect to see the standard DOM value, you may not be so lucky.
Consider Symbols over private properties to avoid masking. One way to keep from accidentally overwriting a built-in field name is by using private properties, which are prefixed with a hash. However, these have some downsides: you can't see them in the inspector in Firefox, and you can't access them from subclasses or through Proxies (I've written a deeper dive on that here). If you want to store something on an element safely, it may be better to use a Symbol instead, and export that with your base class so that subclasses can access it.
export const CANVAS = Symbol("#canvas");
export const CONTEXT = Symbol("#context");

export class BitmapElement extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: "open" });
    this[CANVAS] = document.createElement("canvas");
    this[CONTEXT] = this[CANVAS].getContext("2d");
  }
}
The syntax itself looks a little clunkier, but it offers encapsulation closer to the protected keyword in other languages (where subclasses can access the properties but external code can't), and I personally think it's a nice middle ground between actual private properties and "private by convention" naming practices like this._privateButNotReally.
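For example, a subclass that imports those symbols can reach the shared canvas, while unrelated page code (which doesn't have the symbol) is unlikely to collide with it. A quick sketch, with a hypothetical module path:
import { BitmapElement, CANVAS, CONTEXT } from "./bitmap-element.js";

class TintedLogo extends BitmapElement {
  resize(width, height) {
    this[CANVAS].width = width;
    this[CANVAS].height = height;
    this[CONTEXT].clearRect(0, 0, width, height);
  }
}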
Inherit broadly, not deeply. Here, once again, it's instructive to look at the browser itself: although there are some elements that have extremely lengthy prototype chains (such as the SVG elements, for historical reasons), most HTML classes inherit from a relatively shallow list. For most applications, you can probably get away with just one "framework" class that everything inherits from, sometimes with a second derived class for families of specific functionality (such as embedded DSLs).
There's a part of me that feels like jumping into a wave of interest in web components with a tribute to classical inheritance has real "how do you do, fellow kids?" energy. I get that this isn't the sexiest thing you can write about an API, and it's very JavaScript-heavy for people who are excited about the HTML component trend.
But it also seems clear to me, reading the last few years of commentary, that a lot of front-end folks just aren't familiar with this paradigm — possibly because frameworks (and React in particular) have worked so hard to isolate them from the browser itself. If you try to turn web components into React, you're going to have a bad time. Embrace the platform, learn its design patterns on their own terms, and while it still won't make object orientation cool, you'll find it's a much more pleasant (and stable) environment than it's been made out to be.
Over the last few weeks, there's been a remarkable shift in the way that the front-end community talks about web components. Led by a number of old-school bloggers, this conversation has centered around so-called "HTML components," which primarily use custom elements as a markup hook to progressively enhance existing light DOM (e.g., creating tabs, tooltips, or sortable tables). Zach Leatherman's taxonomy includes links to most of the influential blog posts where the discussions are taking place.
(Side note: it's so nice to see blogging start to happen again! Although it's uncomfortable as we all try to figure out what our new position in the social media landscape is, I can't help but feel optimistic about these developments.)
Overall, this new infusion of interest is a definite improvement from the previous state of affairs, which was mostly framework developers insisting that anything less than a 1:1 recreation of React or Svelte in the web platform was a failure. But the whiplash from "this API is useless because it doesn't bundle enough complexity" to "this API can be used in the simplest possible way" leaves a huge middle ground unexplored, including its most intriguing possibilities.
So in the interest of keeping the blog train rolling, I've been thinking about writing some posts about how I build more complex web components, including single-page apps that are traditionally framework territory, while still aiming for technical accessibility. Let's start by talking about slots, composition, and structure.
I wrote a little about shadow DOM in 2021, right before NPR published the Science of Joy, which used shadow DOM pretty extensively. Since that time, I've rewritten my podcast client and RSS reader, thrown together an offline media player, developed (for no apparent reason) a Eurorack-esque synthesizer, and written a social card image generator just in time for Twitter to fall apart. Between them, plus the web component book I wrote while wrapping up at NPR, I've had a chance to explore the shadow DOM in much more detail.
I largely stand by what I said in 2021: shadow DOM is a little confusing, not quite as bad as people make it out to be, and best used in moderation. Page content wants to be in the light DOM as much as possible, so that it's easier to style, inspect, and access for scripting. Shadow DOM is analogous to private properties or Symbol keys in JS: it's where you put stuff that only that element (and its user) needs to access, but the wider page doesn't know about. But with the addition of slots, shadow DOM is also the way that we can define the relationships of an element to its contents in a way that follows the grain of HTML itself.
To see why, let's imagine a component with what seems like a pointless shadow DOM:
class EmptyElement extends HTMLElement {
  constructor() {
    super();
    var root = this.attachShadow({ mode: "open" });
    root.innerHTML = "<slot></slot>";
  }
}
This class defines an element with a shadow root, but no private content. Instead, it just has a slot that immediately reparents its children. Why write a no-op shadow root like this?
One (minor) benefit is that it lets you provide automatic fallback content for your element, which is hard to do in the light DOM (think about a list that shows a "no items" message when there's nothing in it). But the more relevant reason is that it gives us access to the slotchange event, as well as methods to get the assigned elements for each slot. slotchange is basically connectedCallback, but for direct children instead of the custom element itself: you get notified whenever the elements in a slot are added or removed.
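Here's a minimal sketch of what that looks like in practice (the element name is made up):
class SlotWatcher extends HTMLElement {
  constructor() {
    super();
    var root = this.attachShadow({ mode: "open" });
    root.innerHTML = "<slot></slot>";
    var slot = root.querySelector("slot");
    slot.addEventListener("slotchange", () => {
      // fires whenever direct children are added or removed
      console.log("currently slotted:", slot.assignedElements());
    });
  }
}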
Simple slotting is a great pattern if you are building wrapper elements to enhance existing HTML (similar to the "HTML components" approach noted above). For example, in my offline media player app, the visualizer that creates a Joy Division-like graph from the audio is just a component that wraps an audio tag, like so:
<audio-visuals>
  <audio src="file.mp3"></audio>
</audio-visuals>
When it sees an audio element slotted into its shadow DOM, it hooks it into the analyzer node, and there you go: instant WinAmp visualizer panel. I could, of course, query for the audio child element in connectedCallback, but then my component is no longer reactive, and I've created a tight coupling between the custom element and its expected contents that may not age well (say, a clickable HTML component that expects a link tag, but gets a button for semantic reasons instead).
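A hedged sketch of that hookup (the real component does more, and these names are mine rather than the app's):
class AudioVisuals extends HTMLElement {
  constructor() {
    super();
    var root = this.attachShadow({ mode: "open" });
    root.innerHTML = "<canvas></canvas><slot></slot>";
    var slot = root.querySelector("slot");
    this.context = new AudioContext();
    this.analyzer = this.context.createAnalyser();
    slot.addEventListener("slotchange", () => {
      var audio = slot.assignedElements().find(el => el.matches("audio"));
      if (audio) {
        // pipe the slotted audio element through the analyzer and on to the speakers
        var source = this.context.createMediaElementSource(audio);
        source.connect(this.analyzer);
        this.analyzer.connect(this.context.destination);
      }
    });
  }
}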
Child elements that influence or change the operation of their parent are a pattern we see regularly in built-ins: think of <option> tags inside a <select>, <source> and <track> tags inside a <video>, or <li> items inside a list.
Tarot, Chalkbeat's social card generator, takes this approach a little further. I talk about this a little in the team blog post, but essentially each card design is defined as an HTML template file containing a series of custom elements, each of which represents a preset drawing instruction (text labels, colored rectangles, images, logos, that kind of thing). For example, a very simple template might be something like:
<vertical-spacer padding="20 0">
  <series-logo color="accent" x=".7" scale=".4"></series-logo>
  <vertical-stack dx="40" anchor="top" x=".4">
    <text-brush
      size="60"
      width=".5"
      padding="0 0 20"
      value="Insert quote text here."
    >Quotation</text-brush>
    <image-brush
      recolor="accent"
      src="./assets/Chalkline-teal-dark.png"
      align="left"
    ></image-brush>
  </vertical-stack>
  <logo-brush x=".70" color="text" align="top"></logo-brush>
</vertical-spacer>
<photo-brush width=".4"></photo-brush>
Each of the "brush" elements has its customization UI in its shadow DOM, plus a slot that lets its children show through. The app puts the template HTML into a form so the user can tweak it, and then it asks each of the top-level elements to render. Some of them, like the photo brush, are leaf nodes: they draw their image to the canvas and exit. But the wrapper elements, like the spacer and stack brushes, alter the drawing context and then ask each of their slotted elements to render with the updated configuration for the desired layout.
The result is a nice little domain-specific language for drawing to a canvas in a particular way. It's easy to write new layouts, or tweak the ones we already have. My editor already knows how to highlight the template, because it's just HTML. I can adjust coordinate values or brush nesting in the dev tools, and the app will automatically re-render. You could do this without slots and shadow DOM, but it would be a lot messier. Instead, the separation is clean: user-facing UI (i.e., private configuration state) is in shadow, drawing instructions are in the light.
I really started to see the wider potential of custom element DSLs when I was working on my synthesizer, which represents the WebAudio signal path using the DOM. Child elements feed their audio signal into their parents, on up the tree until they reach an output node. So the following code creates a muted sine wave, piping the oscillator tone through a low-pass filter:
<audio-out>
  <fx-filter type="lowpass">
    <source-osc frequency=440></source-osc>
  </fx-filter>
</audio-out>
The whole point of a rack synthesizer is that you can rearrange it by running patch cords between various inputs and outputs. By using slots, these components effectively work the same way: if you drag the oscillator out of the filter in the inspector, the old and new parents are notified via slotchange and they update the audio graph accordingly so that the sine wave no longer runs through the lowpass. The dev tools are basically the patchbay for the synth, which was a cool way to give it a UI without actually writing any visual code.
Okay, you say, but in a Eurorack synthesizer, signals aren't just used for audible sound: the same outputs can be used as control voltage, say to trigger an envelope or sweep a frequency. WebAudio basically replicates this with parameter inputs that accept the same connections as regular audio nodes. All I needed to do to expose this to the document was provide named slots in components:
<fx-filter frequency=200>
  <fx-gain gain=50 slot=frequency>
    <source-osc frequency=1></source-osc>
  </fx-gain>
  <source-osc frequency=440></source-osc>
</fx-filter>
Here we have a similar setup as before, where a 440Hz tone is fed into a filter, but there's an additional input: the <fx-gain> is feeding a control signal with a range of -50 to 50 into the filter's frequency parameter once per second. The building blocks are the same no matter where we're routing a signal, and the code for handling parameter inputs ends up being surprisingly concise since it's able to lean on the primitives that slots provide for us.
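A hedged sketch of that routing, using made-up internals alongside the markup above: children in the default slot feed the filter's audio input, while children assigned to the "frequency" slot feed the matching AudioParam. Disconnecting removed children is left out for brevity.
const context = new AudioContext();

class FxFilter extends HTMLElement {
  node = context.createBiquadFilter();

  constructor() {
    super();
    var root = this.attachShadow({ mode: "open" });
    root.innerHTML = `<slot></slot><slot name="frequency"></slot>`;
    for (var slot of root.querySelectorAll("slot")) {
      slot.addEventListener("slotchange", (e) => {
        // named slots map to AudioParams, the default slot maps to the node itself
        var destination = e.target.name ? this.node[e.target.name] : this.node;
        for (var child of e.target.assignedElements()) {
          if (child.node) child.node.connect(destination);
        }
      });
    }
  }
}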
In photography and cinema, the term "chiaroscuro" refers to the interplay and contrast between light and dark — Mario Bava's Black Sunday is one of my favorite examples, with its inky black hallways and innovative color masking effects. I think of the shadow DOM the same way: it's not a replacement for the light DOM, but a complement that can be used to give it structure.
As someone who loves to inject metaphor into code, this kind of thing is really satisfying. By combining slots, shadow DOM, and markup patterns, we can embed a language in HTML that produces either abstract data structures, user interface, or both. Without adding any browser plugins, we're able to manipulate this tree just using the dev tools, so we can easily experiment with our application, and it's compatible with our existing editor tooling too.
Part of the advantage of custom elements is that they have a lower usage floor: they do really well at replacing the kinds of widgets that jQueryUI and Bootstrap used to provide, which don't by themselves justify a full single-page app architecture. This makes them more accessible to the kinds of people that React has spent years alienating with JS-first solutions — and by that, I mean designers, or people who primarily use the kinds of HTML/CSS skills that have been gendered as feminine and categorized as "lesser" parts of the web stack.
So I understand why, for that audience, the current focus is on custom elements that primarily use the light DOM: after all, I started using custom elements in 2014, and it took six more years before I was comfortable with adding shadow DOM. But it's worth digging a little deeper. Shadow DOM and slots are some of my favorite parts of the web component API now, because of the way that they open up HTML as not just a presentational toolkit, but also as an abstraction for expressing myself and structuring my code in a language that's accessible to a much broader range of people.
If I had to guess, I'd say the last time there was genuine grassroots mania for "apps" as a general concept was probably around 2014, a last-gasp burst of energy that coincides with a boom of the "sharing economy" before it became clear the whole thing was just sparkling exploitation. For a certain kind of person, specifically people who are really deeply invested in having a personal favorite software company, this was a frustrating state of affairs. If you can't count the apps on each side, how can you win?
Then Twitter started its slow motion implosion, and Mastodon became the beneficiary of the exodus of users, and suddenly last month there was a chance for people to, I don't know, get real snotty about tab animations or something again. This ate up like a week of tech punditry, and lived rent-free in my head for a couple of days.
It took me a little while to figure out why I found this entire cycle so frustrating, other than just general weariness with the key players and the explicit "people who use Android are just inherently tasteless" attitude, until I read this post by game dev Liz Ryerson about GDC, and specifically the conference's Experimental Games Workshop session. Ryerson notes the ways that commercialization in indie games led to a proliferation of "one clever mechanic" platformers at EGW, and an emphasis on polish and respectability — what she calls "portfolio-core" — in service of a commercial ideology that pushed quirkier, more personal titles out:
there is a danger here where a handful of successful indie developers who can leap over this invisible standard of respectability are able to make the jump into the broader industry and a lot of others are expected not to commercialize their work that looks less 'expensive' or else face hostility and disinterest. this would in a way replicate the situation that the commercial indie boom came out of in the 2000's.
however there is also an (i'd argue) even bigger danger here: in a landscape where so many niche indie developers are making moves to sell their work, the kind of audience of children and teenagers that flocked to the flash games and free web games that drove the earlier indie boom will not be able to engage with this culture at large anymore because of its price tag. as such, they'll be instead sucked into the ecosystem of free-to-play games and 'UGC' platforms like Roblox owned by very large corporate entities. this could effectively destroy the influence and social power that games like Yume Nikki have acquired that have driven organic fan communities and hobbyist development, and replace them with a handful of different online ecosystems that are basically 'company towns' for the corporations who own them. and that's not a good recipe if you want to create a space that broadly advocates for the preservation and celebration of art as a whole.
It's worth noting that the blog post that kicked off the design conversation refers to a specific category of "enthusiast" apps. This doesn't seem to be an actual term in common use anywhere — searching for this provides no prior art, except in the vein of "apps for car enthusiasts" — and I suspect that it's largely used as a way of excluding the vast majority of software people actually use on mobile: cross-platform applications written by large corporations, which are largely identical across operating systems. And of course, there's plenty of shovelware in any storefront. So if you want to cast broad aspersions across a userbase, you have to artificially restrict what you're talking about in a vaguely authoritative way to make sure you can cherry-pick your examples effectively.
In many ways, this distinction parallels the one Ryerson is drawing, between the California-ideology game devs who focus on polish and "finish your game" advice, and (to be frank) the weirdos, like Stephen "thecatamites" Gillmurphy or Michael Brough, designers infamous for creating great games that are "too ugly" to sell units. It's the idea that a piece of software is valuable primarily because it is an artifact that reminds you, when you use it, that you spent money to do so.
Of course, it's not clear that the current pace of high-definition, expansive-scope game development is sustainable, either: it requires grinding up huge amounts of human capital (including contract labor in developing countries) and wild degrees of investment, with no guarantee that the result will satisfy the investor class that funded it. And now you want to require every little trivial smartphone app to have that level of detail? In this economy?
To be fair, I'm not the target audience for that argument. I write a lot of my own software. I like a lot of it, and some of it even sparks joy, but not, I suspect, in the way that the "enthusiast app" critics are trying to evoke. Sometimes it's an inside joke for an audience of one. Maybe I remember having a good time getting something to work, and it's satisfying to use it as a result. In some cases (and really, social media networks should be a prime example of this), the software is not the point so much as what it lets me read or listen to or post. Being a "good product" is not the only lens through which I view the experience.
(I would actually argue that I would rather have slightly worse products if it meant, for example, that I didn't live in a surveillance culture filled with smooth, frictionless, disposable objects headed to a landfill and/or the bottom of the rapidly rising oceans.)
Part of the reason that the California ideology is so corrosive is because it can dangle a reward in front of anything. Even now, when I work on silly projects for myself, I find myself writing elaborate README files or thinking about how to publish to a package manager — polish that software, and maybe it'll be a big hit in the marketplace, even though that's actually the last thing I would honestly want. I am trying to unlearn these urges, to think of the things I write as art or expression, and not as future payday. It's hard.
But right now we are watching software companies tear themselves apart in a series of weird hype spasms, from NFTs to chatbots to incredibly ugly VR environments. It's an incredible time to be alive. I can't imagine anything more depressing than to look at Twitter's period of upheaval, an ugly transition from the worldwide embodiment of context collapse to smaller, (potentially) healthier communities, and to immediately ask "but how can I turn this into a divisive, snide comment?" Maybe I'm just not enough of an enthusiast to understand.
When it comes to web development, I'm actually fairly traditional. By virtue of the kinds of apps I make (either bespoke visualizations for work or single-serving toys for personal use), I'm largely isolated from a lot of the pain of modern front-end web development. I don't use React, I don't need to scale servers, and I render my HTML the old-fashioned way, from string templates. Even so, my projects are usually built on top of a few build tools, including Rollup, Less, and various SDKs for moving data between different cloud providers.
However, for internal utilities and personal projects over the last few years, I've been experimenting with removing tools, and relying solely on the modern browser. So instead of bundling JS, I'm just loading modules with import statements. I write one CSS file for my light DOM, but custom properties have largely eliminated what I need a preprocessor to do (and the upcoming support for nesting will cover the rest). Add something like the Eleventy dev server for live reload, and it's actually a really pleasant experience.
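Concretely, the no-bundler part is just the platform doing what it already does. A minimal sketch, with hypothetical file names:
<!-- index.html: load the entry point as a native module -->
<script type="module" src="./app/index.js"></script>

// app/index.js: plain ES module imports, no build step required
import "./components/podcast-list.js";
import { openDatabase } from "./storage.js";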
It's one thing to go minimalist for a single-serving hobby app, or for people in the Chalkbeat newsroom who can reach me directly for support. It's another to do it for a general audience, where the developer/user ratio starts to tilt and your scale becomes more ambitious. But could we develop a real, public-facing web app that doesn't rely on a brittle and slow compilation step? Is a no-build deployment feasible?
While I'm optimistic, I have enough self-awareness to know that things are rarely as simple as I want them to be. I wasn't always a precious snowflake, and I've seen first-hand that national (or international) scale applications have support infrastructure for a reason. To that end, here's a non-exhaustive list of potential hurdles I believe developers will need to jump to get to that tooling-free future.
In theory, HTTP2 (which reuses connections and parallelizes transfers) means that we don't pay a penalty for deploying our JavaScript as individual modules instead of a single bundled file. But it raises a new issue that we didn't have with those big bundles: what happens when we make a breaking change in part of the application, and someone visits it with a partially-primed cache, so they have some old files still hanging around? How do we make sure that we can take advantage of caching appropriately, while still keeping our code coherent for a given deployment?
Imagine we have a page that loads module A, which loads B and C, and is styled using CSS file D. I update file B, and changes to D are required for the new components. Different files may be evicted from the browser cache in unpredictable ways, though. Ideally, A and C should be loaded from the cache, and B and D should be fresh requests. If everything comes from the cache, users won't see new features, but ideally nothing should be immediately broken. It would be wasteful, but not disastrous, if all files are loaded fresh. The real problem comes if only one of B or D comes from the cache, so that we either get new code without the matching style changes, or styles without the new code.
As Jake Archibald notes, there are two working (and compatible) strategies for caching interrelated code: either long cache times with unique URLs, or no-cache headers and a shorter lifetime. I lean toward the latter strategy for now, probably using ETag hash-based headers for each file. Individual requests would be a little slower, since the browser would always check the server for individual files, but you'd only actually transfer new code, which is the expensive part (cache hits would return 304 Not Modified). Based on my experience with a similar system for election data updates, I think this would probably scale pretty well, but you'd need to test to be sure.
Once import maps are supported in all evergreen browsers, the hashed URL solution becomes the simpler of the two. Use short identifiers for all your import statements (say, relative to the project root), and then hash their contents and generate a JSON mapping between the original path and the mangled filename for production deployments. Now the initial page can be revalidated on every load, but the scripts that go with that particular page version will be immutable, guaranteeing that any change means a new URL and no cache conflicts. Here's hoping Safari ships import maps to users soon.
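For example, a small build script could emit something like this into the page (the file names are hypothetical), while the source code keeps importing the short, stable specifiers:
<script type="importmap">
{
  "imports": {
    "app/board.js": "/modules/board.3f9a1c.js",
    "app/table.js": "/modules/table.b82d77.js"
  }
}
</script>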
Personally, the whole point of developing things in a no-build environment is that I don't need to learn, manage, and optimize around third-party libraries. The web platform is far from perfect, but it's fast and accessible, and there's an undeniable pleasure in writing every line of code. I'm lucky that I have that opportunity.
Most teams are not so lucky, and need to load libraries written by other people. Package managers mean we have a wealth of code at our fingertips. But at the same time, the import patterns that work well for Node (lots of modules in a big, deep folder hierarchy) have proven a clumsy match for the front-end. Importing files from node_modules is awkward and painful, especially if you're also loading stylesheets and other non-JavaScript assets. In fact, much of the tooling explosion (including innovations like tree-shaking and transpilation) comes from trying to have our cake from npm and eat it too.
So loading from the same package manager as the server-side code is frustrating, and using a CDN requires us to trust a remote host completely, while also adding another DNS lookup and TCP handshake to our performance waterfall. The ideal would be a shallow set of third-party modules that are colocated with our front-end code, similar to how Bower (RIP) used to handle libraries. Sadly, I'm not aware of many tools or code conventions left that specifically serve that niche.
One approach that I'm intrigued by is Deno's bundle command, which generates an importable module file from a URL, including all its dependencies. Using a tool like this, you could pretty easily zip up vendor code into a single file in the equivalent of src/bower_components. You'd also have a lot more visibility into just how big those third-party libraries are when they're packed up into self-contained (absolute) units, which might provoke a little reflection. Maybe you don't need 3MB of time zone data after all.
That said, one secret weapon for managing those chunky libraries is asynchronous import(). Whereas code-splitting in a bundler is a complicated and niche process, when we use ES modules natively our code is effectively pre-split, and the browser gives us a mechanism to only request libraries when we need them. This means the cost equation for vendor code can change somewhat: maybe it's not great that a given component is multiple megabytes of script, but if users only pay the cost for that transfer and compilation when they're actually going to use it, that's a substantial improvement over the current state of affairs.
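In practice, that just means wrapping the import in an async function. A sketch, with a hypothetical module path:
// only fetch and compile the heavy vendor code when the user actually needs it
async function showVisualization(element, data) {
  var { render } = await import("./vendor/charting.js");
  render(element, data);
}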
I've worked on some large projects where we had a single, unprocessed CSS file for the product. It was hard to stay disciplined. Without nesting or external constraints, we'd end up duplicating styles in different parts of the document and worrying about breakages if we needed to change something. The team tried to keep things well-structured, but you know how it is: if you've got six programmers, you have 12 different ideas about how the site should be organized.
CSS has @import for natively splitting styles into multiple files, but historically it hasn't been considered good for performance. Imports block the renderer and parser, meaning that you may be halting page load for the header while you wait for footer styles. We still want multiple small files on HTTP2, so the best practice is still to generate lots of <link> tags for CSS, possibly using tricks to unblock the parser. Luckily, at least, CSS is not load-order dependent the way that JavaScript is, and the @layer rule gives us ways to manage the cascade. But manually appending a tag for every stylesheet doesn't feel very ergonomic.
I don't have a good solution here. It's possible this is not as serious a problem as I think it is — certainly on my own projects, I'm able to move localized styles into shadow DOM and load them as a part of the component registration, so it tends to solve itself for anything that's heavily interactive or component-based. But I wish @import had the kind of ergonomics and care that its JavaScript counterpart did, and I suspect teams will find PostCSS easier to use than the no-build alternative.
What's the ultimate point of eschewing build tools? Sure, on some level it's to avoid ever touching webpack.config.js (a.k.a. the Lament Configuration) ever again. But it's also about trying to claw our way back from a front-end culture that has neglected the majority of users. And the best way to address an audience on typical devices (read: an Android phone with meager single-core performance and a spotty network connection, or a desktop PC from 2016) is to send less JavaScript and more HTML and CSS.
Last week, I loaded a page from a local news outlet for work, which included data on a subset of Chicago schools. There was no dynamic content, although it did have an autocomplete search at the top. I noticed the browser tab was stuttering on load, so I looked in the dev tools: each of the 600+ schools was being individually templated and appended to a queried element from a JSON fetch. On a fairly new desktop PC, it froze the UI thread for more than half a second. On a phone, that was more like 7 seconds, even with ads blocked, and any news dev will tell you that the absolute easiest way to boost your story's load performance is to remove the ads from it.
If that page had been built as static HTML, it would parse and load almost instantly by comparison. Indeed, in the newsroom projects that I maintain, the most important feature is the ability to pull in data from a variety of sources (Google Docs, Sheets, local text and CSV, JSON, remote APIs) and merge that easily with the HTML template. The build scripts do other things, like bundling and CSS processing and deployment. But those things could be replaced, or reduced, or moved into other tools without radically changing the experience. HTML generation is irreplaceable.
At a bare minimum, let's say I want to be able to include partial templates (for sharing headers and snippets between pages), loop through some data, and inject my import map or my stylesheet collection into the page. The list of tools that let me do that easily, on most Linux servers, without installing a bunch of extra crap, is pretty much just PHP.
Listen, no shade on PHP, but I don't want to write it for a living anymore even if I wasn't working off of static file storage. It's a hard sell, especially in the context of "a modern web stack."
HTML templating is where the rubber really meets the road. We do not have capabilities for meta-processing in the language itself, and any solution that involves JavaScript (including the late, lamented HTML imports) is a non-starter. I'd kill for an <include> tag, especially if there were a way to use it without blocking the parser, similar to the way that declarative shadow DOM provides declarative support for component subtrees.
What I'm not interested in doing is stripping the build toolchain down if it means a worse experience for users. And once I need some kind of infrastructure to assemble my HTML, it's not actually that much more work to bolt on a script bundler and a stylesheet preprocessor, and reap the benefits from those ecosystems. I'm all-in on the web platform, but I'm not a masochist.
This is by no means an exhaustive list of challenges, but Nano tells me I'm well past 200 lines in this text document, so let's wrap it up.
The good news, as I see it, is that the browser is in a healthier place than ever for hobbyists, students, and small project developers. You can open index.html, import Lit or Vue from a CDN, and have a reasonably performant front-end environment that can grow more complex to fit your needs and skills. You can also write a lot less JavaScript than in years past, because CSS has gotten so much better for layout and interaction.
I'd say we're within reach of a significantly less complicated front-end technical culture. I would not be surprised to see companies start to experiment with serving JavaScript or CSS directly, using tooling to smooth off the rough edges (e.g., producing import maps or automating stylesheet inclusion) rather than leaning hard into full, slow-moving compilation steps. The ergonomics of these approaches are going to be better than a lot of people expect. Some front-end teams that have specialized in tooling-intensive ecosystems are going to either eat a lot of crow or get very angry for a while.
All that said: we're not going back to the days when all you needed was notepad.exe and some moxie to make a "real" website. Perhaps it's naive to think those days ever really existed. But making a good web app is hard, I would argue harder than many other kinds of programming. It's not just the code you write in a trio of languages, but also the network between you and the user, the management of distributed state, and a vast range of devices, inputs, and outputs. The least we can do is make it less wearying to get started.
There is no doubt in my mind that Everything Everywhere All at Once was my movie of the year. It's probably the best thing I've seen in about a decade, at least since Fury Road. I do want to talk about it in a little more detail. But first, let's examine the general landscape.
As of this writing, I've watched 152 movies this year. Most of them were horror movies. That's true even if we discount my traditional Shocktober batch of 31. Horror flicks are shorter than other movies, I think, so by runtime they account for about half — 128 hours out of roughly 262.
This was also the year that I stopped getting DVDs from Netflix, after about two decades as a subscriber. As part of a general shift away from physical media, the selection there had gradually gotten worse for new movies, but it had also deteriorated for older films, which was a lot of the reason I had kept it around: if I heard about something from the 70s or 80s on a podcast, I liked being able to find it there to watch.
The pitch of a streaming future was supposed to be that we would have access to anything ever made, even if we couldn't own it. Instead, we're ending up in the worst of both worlds: you can't own a movie or TV show physically, and they get yanked constantly from digital services due to a tangle of competing financial interests. Here's an example: Kathryn Bigelow's 1995 Strange Days cannot be streamed in any form, not as a rental or a "purchase." You can buy a DVD, maybe, but it's expensive and hard to locate. I got a used copy from Bucket O' Blood here in Chicago after years of looking in various record stores.
It's not like this is the end of society — after all, we used to have to sit down at a pre-arranged time every single week to watch a television series. But I do worry about how film culture moves forward in a world without a coherent memory of itself.
(In some ways, this is the same problem that visual art and software development face with the increasing onslaught of AI-generated images, text, and code. It's ironic that after spilling countless tons of pollution to create a world-spanning network of technology, that same technology will be used to pollute its own intellectual underpinnings. Capitalism truly eats itself.)
On the other hand, you've got something like Everything Everywhere, which smashed into me this year like a runaway truck. It's the story of Evelyn Wang, played by Michelle Yeoh: a bad mother running a failing laundromat, whose husband (Waymond, played by a resurgent Ke Huy Quan) wants a divorce. In fact, she's the worst of all Evelyn Wangs in the multiverse, being hunted by an omnipresent supernatural force known as Jobu Tupaki, whose ultimate plan involves a horrific Everything Bagel. ("I got bored one day, and I put everything on a bagel. Everything. All my hopes and dreams, my old report cards, every breed of dog, every last personal ad on Craigslist. Sesame. Poppyseed. Salt.")
There's this thing that a good TV show will do, where there's a character in the first episode that you hate, and then halfway through the season they'll get a feature episode, show their backstory or their personal tragedy, and suddenly they're your favorite, you can't imagine the show without them. Everything Everywhere is that, but for tone. It'll introduce a throwaway joke like Raccacoony, Evelyn's mangling of Ratatouille. Then, because it's a multiverse, we get to see Raccacoony, a trash panda voiced by Randy Newman controlling a hibachi chef, in a cutaway gag that pays off a cute fight sequence. And then at the end of two hours, somehow you've come to care deeply for Raccacoony, you're rooting for Evelyn to free him from animal control, and somehow this all makes sense.
Also, there's like seven of these tonal judo throws going simultaneously. There's a self-contained homage to Wong Kar Wai, and an extended riff on the phallic IRS awards owned by an auditor played by Jamie Lee Curtis, who is clearly having the time of her life in her late career choices. It's an almost indescribably dense film, united by a Gondry-like aesthetic and a spirit of deep generosity. Somehow, almost impossibly, it sticks the landing. It's really good. Stephanie Hsu is magnetic. You should watch it.
In any other year, a new Jordan Peele movie would be a shoo-in for the top slot on my movie list. Nope is not the best thing he's ever done (I still think Us is going to be a dark horse for Peele's legacy), but it's extremely good. It's also a real showcase for his strengths as a writer: the script has a lot of layers to it without relying on a gimmick, and his characters are drawn in specifics that his cast can really dig into. It's also not didactic — there are messages here, but not an easily-digested manifesto (he has stated pretty clearly that he doesn't just want to be "the racism horror guy"). Instead, what we get are intersections between spectacle, creativity, labor, and trauma.
My third-favorite movie of 2022 is a little Senegalese film named Saloum, which I think is still only available on Shudder. It's Tarantino-esque in the best ways (incredibly charming actors given sparkling dialog and tense dynamics) and the worst (the ending fizzles a bit). You should go into this blind, but I'm so excited to see what comes next from everyone involved.
The first movie I watched back in January was Bound, the Wachowskis' 1996 audition for The Matrix, starring Jennifer Tilly and Gina Gershon. I used to have to introduce this movie to people by saying "it's really good, but don't be scared off by the first ten minutes." I don't know if that's still something we have to say in this day and age, but it's still a pretty camp start for what turns into a tight, constrained noir. I tend to forget that the whole movie is basically a play in a two-apartment set, every inch of which is lovingly chewed by Joe Pantoliano. Part of me really wishes that instead of being given actual budgets for Speed Racer or Cloud Atlas, the Wachowskis had spent thirty years turning out clockwork gems like this.
Pig is a part of the latter-day Nicolas Cage renaissance, and given a lot of those movies you might expect it to be a blood-soaked revenge film, a la Mandy or Prisoners of the Ghostland. What you actually get here is a quiet meditation on labor and skill, as a retired chef returns to the city where he was famous in search of his kidnapped truffle hog. Parts of it get a little misty-eyed for my taste, but the performances (from Cage, Adam Arkin, and Alex Wolff) help keep it grounded most of the time. It's better than it has any right to be, basically.
As I mentioned, the streaming landscape worries me a little bit, but there are some weird gems in there as well — movies nobody cares enough to fight over. One of these is Siege, a 1983 Canadian horror film that's essentially a better Purge. When the Nova Scotian police go on strike, a gang of bigots attack a gay bar, killing all but one patron, who they pursue to an apartment building (hence, the siege). It's surprisingly progressive, funny, and filled with fun misfit characters who have to band together against creeping fascism. It's very 80s, but its heart is in the right place.
Another classic I'd never seen before was Brain Damage, directed by Frank Henenlotter, who's probably best known for Basket Case. A deeply unsubtle story about addiction, as personified by a weird talking parasite named Aylmer, it's both very funny and also (like Basket Case) a tribute to the scummier side of city life. If you've ever wanted to see a scene where a deep-voiced worm puppet mocks an addict's withdrawal from blue brain juice, this is the movie for you.
And then, of course, there is Shocktober. I didn't pick a theme this year (last year, I watched a lot of giallo). Three movies in particular stood out: The Changeling is a great haunted house movie starring George C. Scott, Under the Shadow feels like a great variation on The Babadook, and I still have a lot of love for Nia DaCosta's 2021 Candyman, which I think is smarter than most critics gave it credit for.
There aren't, unlike in the games roundup, any grand themes to my reading in 2022. I didn't set out to intentionally cover a particular subject, or to read something that I'd been putting off — in fact, I pretty much just read for pleasure. I think it was that kind of year.
My total, as of the time of writing, is 151 books finished, totalling about 55 thousand pages. Two thirds of those were by women or nonbinary authors, and about one third were people of color. Most of my reading was either science fiction, fantasy, or thriller. Twelve books were non-fiction, and only 20 were re-reads.
This is a lot of books and a lot of pages, and most of them weren't very good. In fact, I think one thing I learned this year was to trust myself more on first impressions: there are several titles in the sheet that I bailed on early, then saw in a list or in the "most popular" sort for the library, and thought "I'll give that another shot." Almost without exception I regretted it later.
Since there's no real theme to the reading, and a lot of it was chaff, let's take a look at some of the more exceptional titles.
I can't say enough good things about Rosemary Kirstein's Steerswoman books. They start off as a kind of low fantasy, but it quickly becomes clear that there's more going on. The main character is a "steerswoman," a kind of roving scholar with a simple code: they'll answer any question you have, as long as you answer theirs. The four books in the series so far are satisfying in and of themselves — these were originally published by an actual company, but the rights belong to Kirstein now, and there are two more on the way. I'm extremely excited to see those out. In many ways these remind me of Laurie J. Marks' Elemental Logic series, in that they both mix wide scope with very personal ethics, and also that they're long-running books that are universally loved by the criminally small number of people who have read them.
Like her Claire DeWitt mysteries, Sara Gran's The Book of the Most Precious Substance combines a love of esoteric mystical literature with a noir tone. In this case, instead of a PI who learned her methodology from a French detection manual, it's a book dealer on the hunt for a Necronomicon-like book of magic that will net a profit — as well as more exotic rewards. Gran has a gift for a very specific voice, so if you've enjoyed her other works or you're looking for a capable-but-broken female protagonist, it might be worth checking any of these out.
Sarah Gailey's The Echo Wife remains one of my favorite books of the last decade, and if their Just Like Home can't quite reach those highs, it's still a page-turner. Vera Crowder returns home as her mother is dying, to the house where her father killed and buried half a dozen people during her childhood. The result is a queasy exploration of guilt and culpability, as Vera attempts to understand her own feelings toward her family, and her role in the murders. It may not quite land the ending, but Gailey still milks a tremendous amount of tension from an economical cast and setting, and I'm looking forward to re-reading it in a year or so for a reappraisal. It's wild to imagine all this from an author who first landed on the scene with a goofy fun "steampunk hippo cowboy" novella.
In a year when social networks seem to be imploding left and right, An Ugly Truth may feel redundant. Who needs to read a book about Facebook, i.e. 4Chan for your racist boomer relatives? Yet Frenkel and Kang's detailed account of the Cambridge Analytica era makes a strong case that we still haven't reckoned with just how dumb, sheltered, and destructive Mark Zuckerberg and his company have been. If you have not yet accepted that these kinds of tech companies are the Philip Morris of our generation, this book might convince you.
Nona the Ninth was a tough read for me. I adore Tamsyn Muir's previous books, Gideon and Harrow, and I'm still very much interested to see how she wraps the whole series up. But the explanation around this book was that it started as the opening chapters to that final book, and as it kept growing, it was eventually split off into its own title. I think you can feel that: this is not a book where a lot is happening. It is backstory and setup for the actual ending — well-written, charming setup, because Muir is still funny as hell, but setup nonetheless. I finished it very much feeling like she was stalling for time, in a way that middle chapters often do, but rarely so explicitly.
Kate Beaton's Ducks was also long-anticipated, and here I think the hype was justified. Beaton is known for her history-nerd comic, Hark! A Vagrant, as well as some children's books. She's a funny and expressive illustrator, but here she turns those talents to telling the story of her own experiences working on the oil sands in Canada. In many ways, it's a history of an abusive relationship — not just for Beaton herself, but for her community, trapped in a cycle of dependence on an abusive and destructive industry. Part of what makes this book compelling is that Beaton is clear-eyed about the ways that same environment could be funny, or charming, without ignoring its inherent harm.
Finally, Ruthanna Emrys' A Half-Built Garden is, among other things, pointing in a new direction for ecological science fiction in an era of climate change. A highly-networked anarchist commune working to clean up the Chesapeake watershed is shocked one day to find that aliens have landed in their backyard with an offer: they're here to help, and by help they mean "move humans off the earth," which they see as a doomed ecosystem. And even if the commune isn't interested, the corporations who ruined the planet most certainly are. The resulting negotiations give Emrys a way to poke at all kinds of interesting angles, including social software, for-profit pronouns, and found family. While you could lump this into the "cozy sci-fi" movement that started with Becky Chambers, I think that would be a mistake, and that Garden has grander ambitions than it immediately seems. I think about this book a lot.
This year, I kept a log in a spreadsheet of the media I took in: books I read, movies I watched, and games I played. As 2022 wraps up, I want to take a moment and look back. I don't do this kind of record-keeping every year — it has the downside of making enjoyment into homework of a kind — but it can be an interesting view into something that might otherwise blur together.
I'm going to start with games, because they're the longest experience of the three. As a result, while I only wrote down books and movies if I finished them (or came very close), I wrote down games when I started them. I was more likely to abandon a game if it turned out I wasn't actually having a good time, and while I may add some titles to my book and movie lists before January 1, I feel pretty confident that I can write now about the shape of the year with relative accuracy.
At the time of this writing, I played about 90 games in 2022. That sounds like a lot, but I finished fewer than half of them (a metric that's complicated by "evergreen" games like Devil Daggers and roguelikes like Atomicrops or Risk of Rain 2). A number of these were also short, or I just dipped into them and then dipped back out: Landlord of the Woods is about 45 minutes long, and I loaded Rez Infinite up just long enough to run through the new levels again on a whim. I'd estimate half of them were actually serious time investments.
Roughly two-thirds of what I played was new to me, although rarely new releases. However, there's a fun correlation here: I actually completed about two-thirds of the games I replayed, while those proportions are reversed for new games. I suspect this is because I was more likely to get back into something that I already knew I enjoyed, whereas a lot of the new titles were "browsing": trying out GBA games that I missed during the console's lifetime, wandering through my Steam back catalog, or impulse purchases during sales.
In retrospect, soulslikes loomed prominently over my habits this year (as, indeed, they've become pretty influential across the industry). It started in January, when I replayed Sekiro: Shadows Die Twice, a game that I thought was fine in 2019 and grew to strongly appreciate through a second run.
High off the Sekiro experience, I tried Bloodborne again, and I also gave Dark Souls Remastered a sustained attempt. In both cases, I got through a significant portion of the game (up to Vicar Amelia in the former, and most of Anor Londo in the latter) before admitting that while I am sure I could get farther, I just wasn't having fun. I just don't think these games are very good, personally — they feel sluggish (the parries in both are trash), and Dark Souls in particular has not aged well visually, with a real "asset pack" AA-budget vibe to it.
Unfortunately, what I've realized is that the stuff I really like about Sekiro — its mechanical purity, responsive combat with (limited) action cancels, an explicit narrative — are mostly outliers in Fromsoft's catalog. Simultaneously, the things that I find infuriating, like its befuddling and opaque quest chains or cheap encounter design, are in fact the aspects that draw in their most devoted fans.
Still, many of my favorite titles this year were non-Fromsoft soulslikes. The Surge 2 tries a high-risk-high-reward mechanic for parries that's initially frustrating but ultimately feels rewarding to master. Remnant: From the Ashes is a semi-procedural shooter with some great environment design. Tunic is playing more in the adventure space, but there are elements there in its combat and narrative design that are clearly evoking Miyazaki.
There were also some missteps. Ashen is probably the closest to a Fromsoft game (and has at least one dungeon that almost drove me away) but the art direction and writing kept me interested, as each victory builds out your hometown. Darksiders III is a cash grab in a franchise whose brawler roots don't mesh particularly well with punishing checkpoints, but it managed to eke out a few last drops of charm. Neither of these was bad enough to stop playing, but I also can't see myself revisiting them, or recommending either to other people.
(From last year, but also illuminating: Jedi: Fallen Order is blatantly pulling from Sekiro for its lightsaber combat (no complaints here), and its late-game character reveal had me cackling. Death's Door was in many ways a precursor to Tunic, with its Metroidvania progression and isometric combat, and I would argue it's a better game even if it doesn't have the latter's clever manual gimmick.)
As a genre experiment, this year was clarifying. I think I've got a better grasp now on what works for me, and what doesn't. I also feel freshly inoculated against, for example, Elden Ring, which should save me the frustration of playing 30% of a 120-hour game. We'll see whether that lasts.
Don't be too put off by the weird, swollen art style of Atomicrops. The underlying combination of light farming sim and bullet-hell twinstick shooter ate up a lot of hours in January. I played it on Switch, and while it's beatable (and fun) there it also feels like it wasn't optimized for the platform — the final boss turns into a slideshow. I'd recommend it, but probably on PC.
Halo Infinite took a lot of criticism for effectively being "what if we made the whole game out of Silent Cartographer," and parts of it do wear thin when it turns into an Ubisoft Map Game. But as the Master Chief Collection rolled the games out on PC, I'd played through all of them fairly recently, and I think you could do a lot worse than an entire game made out of Silent Cartographer (you could, for example, play Halo 4). I would argue this is the game they've been aiming toward for two decades.
I'm as surprised as anyone that in 2022 there would be a game that is A) based on a Marvel property, B) specifically Guardians of the Galaxy, and C) actually pretty good. Eidos Montreal's 2021 title is chatty, irreverent, and pulls a lot of the touchstones of the James Gunn films (a non-stop commentary from the team, Quill's tape deck, the Bautista take on Drax) while ditching their more obnoxious tics (some needlessly fatphobic humor, Chris Pratt). I think the combat does often feel weightless — my advice is to set it to easy so you can get through it faster and get back to the writing and performances.
Immortality is one of those games that's going to have a big influence conceptually, but not mechanically. It's an FMV title where you're essentially handed a big box of isolated clips from the career of a b-movie actress, roughly grouped into three films: a giallo-style religious tale, a noir in the style of Basic Instinct, and an extremely '90s thriller that wouldn't be out of place on the Lifetime Movie Network. As you scrub through and build connections between the clips (linked by clicking on objects in a paused frame), a second, more sinister narrative emerges. As a film buff, this felt like it was aimed right at me, and while it can drag a bit when you find yourself hunting the last few segments, I think it achieves exactly what it set out to do.
Finally, I don't think I can wrap up without mentioning Splatoon 3, a game that was only released in September and probably has more hours in it than anything else I've played. I was S+ rank in Splatoon 2, meaning fairly high-level but not elite (I believe the rank roughly translates into the lower end of the top 10% of players). So I was really looking forward to this.
In design, Splatoon 3 is pure Nintendo. It feels good to play, with varied weapons and precise motion controls. It's brightly colored and fashionable, and has a non-toxic and notoriously LGBT-friendly community with lots of in-game creativity on display. The game's lore is weird and surprisingly grim. Taking team shooter concepts like map control/movement and translating them into literal painted areas is brilliant. Also, the soundtrack is fantastic.
The other classic Nintendo move is the networking stack, which is one of the most atrocious technical foundations for a multiplayer game that I've ever seen. It's barely functional: connections drop regularly, which completely cancels matches and counts as a loss for the disconnecting player, and the matchmaking is laughably bad in the regular ranked mode. It's a tribute to how good everything else is that the game can be addictive despite a glaring central flaw.
Splatoon 3 adds a bunch of things that are different from its predecessor, but not always better. For example, the end-game poses are no longer gendered and the clothing options are massively more flexible, but to work around that they've added a "catalog" season pass system that unlocks new ending poses or nametags as you play. Since players need to show those off, the game now only shows the winners at the end of the match, which means they cut the adorable tantrum/sulk animations and the more distinctive music after a loss. I do miss those, even while I enjoy the new variety (and the vastly improved lobby area).
Gripes aside, at the end of the day, if you want a Splatoon experience (and I do), this is where you have to go for it. Nobody else makes anything like this. There's no "splatoonlike" genre, as inconceivable as that seems. It's Nintendo's way or the highway.
There was a lot of noise in 2022 about how the Switch hardware is aging. This isn't wrong! The Tegra chip that the console is based on was not really cutting-edge when it launched, and it's certainly not competing with other consoles — or even phones — at this point. Even so, the Switch is probably at least 50% of my gaming time, and although I have a PS4 hooked up to the same TV, it's almost always used as a DVD player instead.
If you think back to the PS3/360 era, there was a lot of noise made about the first real "HD" consoles. This was, to be fair, a real shift, one that meant games looked sharper but were also radically more expensive. But there were also changes in the kinds of games that became possible at that level of power. This is the time when we first started seeing open-world games like Assassin's Creed or Oblivion, which were not only very big, but also had bustling populations of NPCs and emergent behavior. There's a real case here of new kinds of game design being unlocked by the new generation of hardware.
In the Switch's case, these are often the same kinds of games that it struggles to run at full fidelity (Breath of the Wild excepted, and even there, it's a full world but not a busy one). But when the developer takes more control over the camera or the gameplay, it can return really good results. And in some cases, it can be pretty incredible — the NieR: Automata port this year is certainly not as detailed as it is on a PC, but it's shockingly good.
It may be that there are some designs that are unlocked by the PS5 and Xbox Series X consoles, just as the open-world genre only really hit its stride a couple of generations back. But it's not clear to me what those are, and in the meantime, I do kind of wish the treadmill would slow down a little. Obviously there's an incentive for them, but Splatoon is a reminder that the Switch can be plenty compelling when developers target the hardware they have, and not what they wish they had.
An uncomfortable truth of modern web journalism is that most people only read the headlines. That's what happens when most of your interactions are mediated through social media, where the headline and a brief teaser surface in the share card, and then all the "fun" interaction is arguing about it in the responses.
There are sensible reactions to this (high on the list: stop letting the copy desk pick the headlines for stories they didn't report) and then there's the new wave of web publications (Politico Pro, Axios, and now Semafor) that have instead decided that the ideal strategy is to just write the story like a social media blurb anyway. From CJR:
Author bylines are, as promised, as prominent as headlines, but the meat of the Semaform concept comes in the text of the story itself, which is broken into distinct sections, each preceded by a capitalized subheading: “THE NEWS” (or “THE SCOOP”), offering the “undisputed facts” of a given story; “THE REPORTER’S VIEW,” which is what it sounds like, with an emphasis on “analysis”; “ROOM FOR DISAGREEMENT,” which is also what it sounds like; “THE VIEW FROM,” promising “different and more global perspectives” on the story in question; and “NOTABLE,” linking out to worthwhile related coverage from other outlets.
I don't consider myself a particularly old-fashioned news reader — I've spent most of my career trying to convince reporters and editors to get a little creative with their formats — but I admit to a visceral repulsion when I read these stories, maybe because they're so prescribed. They often feel, as Timothy Noah writes, so boiled down that they actually impede understanding. They can't be skimmed because there's nothing but skim there.
Even worse, the adherence to the fill-in-the-blanks writing formula (with its pithy, repetitive headers) does its best to drain any distinctiveness from the writers, even while it puts bylines front and center. Take, for example, this David Weigel piece on Oregon Democrats, which chafes deeply against the "Semaform." Weigel gives us 23 paragraphs that would not have been out of place in his Washington Post reporting, followed by a single paragraph of "David's View" (as if the previous reporting was not also his viewpoint), then a "Room for Disagreement" that... doesn't actually disagree with anything. And then "The View from the U.K.," which is a mildly amusing dunk on a British tabloid reporter but adds nothing to the story.
For a more "typical" example of the form, there's this story by Kadia Goba on Marjorie Taylor Greene's deranged anti-trans legislation. Goba is less of a "name," which may explain why her piece is less of a newspaper article with some additional sections jammed onto the end, but it still reads as if a normal inverted-pyramid piece had the subheads inserted at arbitrary locations. The final "View from the U.K." feels like twisting the knife: and now the news from TERF Island.
Here's the thing, to me: picking a story form like this is a great way to make sure nobody can ever remember a story an hour after reading it, because they all blend together. Why hire good journalists if you're not going to let them write? You're never going to get something like Lynda V. Mapes' adorable Rialto coverage in Semafor's article template. It doesn't make any sense for investigative writing. You're certainly not going to get the kinds of interactive or creative storytelling that I work on (although, given that Semafor's visual aesthetic is somewhere between "your dad made a Powerpoint meme" and "Financial Times circa 2008," I'm not sure they care).
Above all, these new outlets feel like a bet on a very specific future of news: one where it's very much a market commodity, as opposed to something that can be pleasurable or rewarding in itself. And maybe that's the right bet! I have my doubts, but my hit rate is no better than any other industry thinker, and I assume you wouldn't do something this joyless without a lot of market research indicating that you can sell it to somebody. But as someone who's been more and more convinced that the only sustainable path for journalism is non-profit, that person isn't me.
Iain M. Banks' classic novel Player of Games follows Jernau Morat Gurgeh, who is sent from the Culture (a socialist utopia that's the standard setting for most of Banks' genre fiction) to compete in a rival society's civil service exam, which takes the form of a complicated wargame named Azad. The game is thought by its adherents to be so complex, so subtle, that it serves as an effective mirror for the empire itself:
The Emperor had set out to beat not just Gurgeh, but the whole Culture. There was no other way to describe his use of pieces, territory and cards; he had set up his whole side of the match as an Empire, the very image of Azad.
Another revelation struck Gurgeh with a force almost as great; one reading — perhaps the best — of the way he'd always played was that he played as the Culture. He'd habitually set up something like the society itself when he constructed his positions and deployed his pieces; a net, a grid of forces and relationships, without any obvious hierarchy or entrenched leadership, and initially quite peaceful.
[...] Every other player he'd competed against had unwittingly tried to adjust to this novel style in its own terms, and comprehensively failed. Nicosar was trying no such thing. He'd gone the other way, and made the board his Empire, complete and exact in every structural detail to the limits of definition the game's scale imposed.
Azad is, obviously, not real — it's a thought experiment, a clever dramatic conceit along the lines of Borges' famous 1:1 scale map. But we have our own Azad, in a way: as programmers, it's our job to create systems of rules and interactions that model a problem. Often this means we intentionally mimic real-world details in our code. And sometimes it may mean that we also echo more subtle values and viewpoints.
I started thinking about this a while back, after reading about how some people think about the influences on their coding style. I do think I have a tendency to lean into "playful" or expressive JavaScript features, but that's just a symptom of a low boredom threshold. Instead, looking back on it, what struck me most about my old repos was a habitual use of what we could charitably call "collaborative" architecture.
Take Caret, for example: while there are components that own large chunks of functionality, there's no central "manager" for the application or hierarchy of control. Instead, it's built around a pub/sub command bus, where modules coordinate through broadcasts of custom events. It's not doctrinaire about it — there are still lots of places where modules call into each other directly (probably too many, actually) — but for the most part Caret is less like a modern component tree, and more like a running conversation between equal actors.
I've been using variations on this design for a long time: the first time I remember employing it is the (now defunct) economic indicator dashboard I built for CQ, which needed to coordinate filters and views between multiple panels. But you can also see it in the NPR primary election rig, Weir's new UI, and Chalkbeat's social media card generator, among others. None of these has what we would typically think of as a framework's "inversion of control." I've certainly built more traditional, framework-first applications, but it's pretty obvious where my mind goes if given free rein.
(I suspect this is why I've taken so strongly to web components as a toolkit: because they provide hooks for managing their own lifecycle, as well as direct connection to the existing event system of the DOM, they already work in ways that are strongly compatible with how I naturally structure code. There's no cost of convenience for me there.)
There are good technical reasons for preferring a pub/sub architecture: it maps nicely onto the underlying browser platform, it can grow organically without having to plan out a UML diagram, and it's conceptually easy to understand (even if you don't just subclass EventTarget, you can implement the core command bus in five minutes for a new project). But I also wondered if there are non-technical reasons that I've been drawn to it — if it's part of my personal Azad/Culture strategy.
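To make the "five minutes" claim concrete, here's a minimal sketch of that kind of command bus in browser JavaScript; the names are mine, and it's not Caret's actual code, just the general shape of the pattern:

class CommandBus extends EventTarget {
  // Anyone with a reference to the bus can announce a command...
  broadcast(command, detail = {}) {
    this.dispatchEvent(new CustomEvent(command, { detail }));
  }
  // ...and any module can subscribe without knowing who sent it.
  on(command, callback) {
    this.addEventListener(command, e => callback(e.detail));
  }
}

const bus = new CommandBus();
// A file browser module might listen for open requests:
bus.on("file:open", ({ path }) => console.log("opening", path));
// ...and a menu module broadcasts them without a direct reference:
bus.broadcast("file:open", { path: "example.txt" });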
I'm also asking this in a very different environment than even ten years ago, when we used to see coyly neo-feudalist projects like Urbit gloss over their political design with a thick coat of irony. These days, the misnamed "web3" movement is explicit about its embrace of the Californian ideology: not just architecture that exists inside of capitalism, but architecture as capitalism, with predictable results. In 2022, it's not quite so kooky to say that code is cultural.
I first read Rediker and Linebaugh's The Many-Headed Hydra: Sailors, Slaves, Commoners, and the Hidden History of the Revolutionary Atlantic in college, and it introduced me to the concept of hydrarchy: a type of anarchism formed by the "motley crew" of pirate ships, in contrast to the strict class structures of merchant companies. Although they still had captains who issued orders, that leadership was not absolute or unaccountable, and it was common practice for pirates to put captured ship captains at the mercy of their crews as a taste of hydrarchy. A share system also meant that spoils were distributed more equally than was the case on merchant ships.
The hydrarchy was a huge influence on me politically, and it still shapes the way I manage teams and projects. But is it possible that it also influenced the ways I tend to think about and structure code? This is a silly question, but not, I think, a stupid one: a little introspection can be valuable, especially if it provides insight into how to explain our work to beginners or accommodate their own subconscious worldviews.
This is not to say that, for example, Caret is an endorsement of piracy, or even a direct analog (certainly not in the way that web3 is tied to venture capitalism). But it was built the way it was because of who did the building. And its design did have cultural implications: building on top of events means that you could write a Caret plugin just by sending messages to its Chrome process, including commands for the Ace editor. The promise (not always kept, to be fair) was that your external code was using the same APIs that I used internally — that you were a collaborator with the editor itself. You had, as it were, an equal share in the outcome.
As we think about what the "next era of JavaScript" looks like, there's a tendency to express it in terms of platforms and layers. This isn't wrong! But if we're out here dreaming up new workflows empowered by edge computing, I think we can also spare a little whimsy for models beyond "pure render functions" or "strict hierarchy of control," and a little soul-searching about what those models for the next era might mean about our own mindsets.
Like a lot of people during the pandemic, early last year I got into mechanical keyboard collecting. Once you start, it's an easy hobby to sink a lot of time and money into, but the saving grace is that it's also ridiculously inconvenient even before the supply chain imploded, since everything is a "group buy" or some other micro-production release, so it tends to be fairly self-limiting.
I started off with a Drop CTRL, which is a pretty basic mechanical board that serves as a good starting point. Then I picked up a Keychron Q1, a really sharp budget board that convinced me I need more keys than a 75% layout, and finally a NovelKeys NK87 with Box Jade clicky switches, which is just a lovely piece of hardware and what I'm using to type this.
All three of these keyboards are (very intentionally) compatible with the open-source QMK firmware. QMK is very cool, and ideally it means that any of these keyboards can be extended, customized, and updated in any way I want. For example, I have a toggle set up on each board that turns the middle of the layout into a number pad, for easier spreadsheet edits and 2FA inputs. That's the easy mode — if you really want to dig in and write some C, these keyboards run on ARM chips somewhere on the order of a Nintendo DS, so the sky's pretty much the limit.
That said, "compatible" is a broad term. Both the Q1 and NK87 have full QMK implementations, including support for VIA for live key-remapping and macros, but the CTRL (while technically built on QMK) is usually configured via a web service. It's mostly reliable, but there have been a few times in the last few months where the firmware I got back after remapping keys was buggy or unreliable, and this week I decided I wanted to skip the middleman and get QMK building for the CTRL, including custom lighting.
Well, it could have been easier, that's for sure. In getting the firmware working the way I wanted it, I ended up having to trawl through a bunch of source code and blog posts that always seemed to be missing something I needed. So I decided I'd write up the process I took, before I forget how it went, in case I needed it in the future or if someone else would find it helpful.
The QMK setup process is reasonably well documented: it's a Python package, mostly, wrapped around a compilation toolchain. It'll clone the repo for you and install a qmk command that manages the process. I set mine up on WSL and was up and running pretty quickly.
Once you have the basics going, you need to create a "keymap" variation for your board. In my case, I created a new folder at qmk_firmware/keyboards/massdrop/ctrl/keymaps/thomaswilburn. There are already a bunch of keymaps in there, which is one of the things that gives QMK a kind of ramshackle feel, since they're just additions by randos who had a layout that they like and now everyone gets a copy. Poking around these can be helpful, but they're often either baroque or hyperspecialized (one of them enables the ability to programmatically trigger individual lights from terminal scripts, for example).
However, the neat thing about QMK's setup is that the files in each keymap directory are loaded as "overrides" for the main code. That means you only need to add the files that change for your particular use, and in most cases that means you only need keymap.c and maybe rules.mk. In my case, I copied the default_md folder as the starting place for my setup, which only contains those files. Once that's done, you should be able to test that it builds by running qmk compile -kb massdrop/ctrl -km thomaswilburn (or whatever your folder was named).
Once you have a firmware file, you can send it to the keyboard by using the reset button on the bottom of the board and running Drop's mdloader utility.
QMK is designed around the concept of layers, which are arrays of layout config stacked on top of each other. If you're on layer #3 and you press X, the firmware checks its config to see if there's a defined code it should send for that physical key on that layer. QMK can also have a slot defined as "transparent," which means that if there's no code assigned on the current layer, it will check the next one down, until it runs out. So, for example, my "number pad" layer defines U as 4, I as 5, and so on, but most of the keys are transparent, so pressing Home or End will fall through and do the right thing, which saves me from having to duplicate all the basic keys across layers.
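If it helps to see that lookup spelled out, here's a conceptual sketch in JavaScript (not QMK's actual C implementation, just an illustration of the fall-through behavior):

// Layers are stacked configs; a "transparent" slot defers to the
// next layer down until something defines the key.
const TRANSPARENT = Symbol("transparent");

function resolve(layers, activeLayer, key) {
  for (let i = activeLayer; i >= 0; i--) {
    const code = layers[i][key] ?? TRANSPARENT;
    if (code !== TRANSPARENT) return code;
  }
  return null; // nothing assigned on any layer
}

const layers = [
  { KC_U: "u", KC_HOME: "Home" }, // base layer
  { KC_U: "4" },                  // number pad overlay: U becomes 4
];
resolve(layers, 1, "KC_U");    // "4"
resolve(layers, 1, "KC_HOME"); // falls through to "Home"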
If your board supports VIA, remapping the layer assignments is easy to do in software, and your keymap file will just contain mostly empty layers. But since the CTRL doesn't support VIA, you have to assign them manually in C code. Luckily, the default keymap has the basics all set up, as well as a template for an all-transparent layer that you can just copy and paste to add new ones. You can see my layer assignments here. The _______ spaces are transparent, and XXXXXXX means "do nothing."
There's a full list of keycodes in the QMK docs, including a list of their OS compatibility (MacOS, for example, has a weird relationship with things like "number lock"). Particularly interesting to me are some of the combos, such as LT(3, KC_CAPS), which means "switch to layer three if held, but toggle caps lock if tapped." I'm not big on baroque chord combinations, but you can make the extended functions a lot more convenient by taking advantage of these special layer behaviors.
Ultimately, my layers are pretty straightforward: layer 0 is the standard keyboard functions. Layer 1 is fully transparent, and is just used to easily toggle the lighting effects off and on. Layer 2 is number pad mode, and Layer 3 triggers special keyboard behaviors, like changing the animation pattern or putting it into "firmware flash" mode.
Getting the firmware compiling was pretty easy, but for some reason I could not get the LED lighting configuration to work. It turns out that there was a pretty silly answer for this. We'll come back to it. First, we should talk about how lights are set up on the CTRL.
There are 119 LEDs on the CTRL board: 87 for the keys, and then 32 in a ring around the edges to provide underglow. These are addressed in the QMK keymap file using a legacy system that newer keyboards eschew, I think because it was easier for Drop to build their web config tool around the older syntax. I like the new setup, which lets you explicitly specify ranges in a human-readable way, but the Drop method isn't that much more difficult.
Essentially, the keymap file should set up an array called led_instructions filled with C structs configuring the LED system, which you can see in my file here. If you don't write a lot of C, the notation for the array may be unfamiliar, but these unordered structs aren't too different from, say, JavaScript objects, except that the property names have to start with a dot. Each one gets evaluated in turn for each LED, and a set of flags tells QMK what conditions it requires to activate and what it does. For example:
{
.flags = LED_FLAG_MATCH_LAYER |
LED_FLAG_USE_RGB |
LED_FLAG_MATCH_ID,
.g = 255,
.id0 = 0x03800000,
.id1 = 0x0E000700,
.id2 = 0xFF8001C0,
.id3 = 0x00FFFFFF,
.layer = 2
},
The flags mean that this will only apply when the active layer matches the .layer property, we're going to provide color byte values (just .g in this case, since the red and blue values are both zero), and only LEDs matching the bitmask in .id0 through .id3 will be affected.
Most of this is human-readable, but those IDs are a pain. They are effectively a bitmask of four 32-bit integers, where each bit corresponds to an LED on the board, starting from the escape key (id 0) and moving left-to-right through each row until you get to the right arrow in the bottom-right of the keyboard (id 86), and then proceeding clockwise all around the edge of the keyboard. So, for example, to light up the leftmost column of keys, you'd take their IDs (0 for escape, 16 for `, 35 for tab, 50 for capslock, 63 for left shift, and 76 for left control), divide each by 32 to find out which .idX value you want, and then take it modulo 32 to set the correct bit within that integer (in this case, the result is 0x00010001 0x80040002 0x00001000). That's not fun!
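In JavaScript terms (my comfort zone), the arithmetic works out to something like this sketch, where ledMask is a made-up helper, not part of QMK or Drop's tooling:

// Build the four .idX integers from a list of LED indices.
function ledMask(ledIds) {
  const masks = [0, 0, 0, 0];
  for (const id of ledIds) {
    const index = Math.floor(id / 32); // which .idX field the LED lands in
    const bit = id % 32;               // which bit within that integer
    masks[index] |= 1 << bit;
  }
  // Format as the unsigned hex literals the C struct expects.
  return masks.map(m => "0x" + (m >>> 0).toString(16).toUpperCase().padStart(8, "0"));
}

// e.g. the leftmost column from the example above:
ledMask([0, 16, 35, 50, 63, 76]);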
Other people who have done this have used a Python script that requires you to manually input the LED numbers, but I'm a web developer. So I wrote a quick GUI for creating the IDs for a given lighting pattern: click to toggle a key, and when the diagram is focused you can also press physical keys on your keyboard to quickly flip them off and on. The input contains the four ID integers that the CTRL expects when using the LED_FLAG_MATCH_ID option.
Using this utility script, it was easy to set up a few LED zones in a Vilebloom theme that, for me, evokes the classic PDP-11 console. But as I mentioned before, when I first started configuring the LED system, I couldn't get anything to show up. Everything compiled and loaded, and layers worked, but no lights appeared.
What I eventually realized, to my chagrin, was that the brightness was turned all the way down. Self-compiled QMK tries to load settings from persistent memory, including the active LED pattern and brightness, but I suspect the Drop firmware doesn't save them, so those addresses were zero. After I used the function keys to increase the backlight intensity, everything worked great.
As a starter kit, the CTRL is pretty good. It's light but solidly constructed with an aluminum case, relatively inexpensive, and it has a second USB-C port if you want to daisy-chain something else off it. It's a good option if you want to play around with some different switch options (I added Halo Clears, which are pingy but have the same satisfying snap as that one Nokia phone from The Matrix).
It's also weirdly power-hungry, the integrated plate means it's stiff and hard to dampen acoustically, it only takes 3-prong switches, and Drop's software engineering seems to be stretched a little thin. So it's definitely a keyboard that you can grow beyond. But I'm glad I put the time into getting the actual open source firmware working — at the very least, it can be a fun board for experimenting with layouts and effects. And if you're hoping to stretch it a little further than its budget roots, I hope the above information is useful.