
August 10, 2016

Filed under: tech»web

RIP Chrome apps

Update: Well, that was prescient.

At least once a day, I log into the Chrome Web Store dashboard to check on support requests and see how many users I've still got. Caret has held steady for the last year or so at about 150,000 active users, give or take ten thousand, and the support and feature requests have settled into a predictable rut:

  • People who can't run Caret because their version of Chrome is too old, and I've started using new ES6 features that aren't supported six browser versions back.
  • People who want split-screen support, and are out of luck barring a major rewrite.
  • People who don't like the built-in search/replace functionality, which makes sense, because it's honestly pretty terrible.
  • People who don't like the icons, and are just going to have to get over it.

In a few cases, however, users have more interesting questions about the fundamental capabilities of developer tooling, like file system monitoring or plugging into the OS in a deeper way. And there I have bad news, because as far as I can tell, Chrome apps are no longer actively developed by the Chromium team at all, and probably never will be again.

I don't think Chrome apps are going away immediately — they're still useful and used by a lot of third-party companies — but it's pretty clear from the dev side of things that Google's heart isn't in it anymore. New APIs have ceased to roll out, and apps don't get much play at conferences. The new party line is all about progressive web apps, with browser extensions for the few cases where you need more capabilities.

Now, progressive web apps are great, and anything that moves offline applications away from a single browser and out to the wider web is a good thing. But the fact remains that while a large number of Chrome apps can become PWAs with little fuss, Caret can't. Because it interacts with the filesystem so heavily, in a way that assumes a broader ecosystem of file-based tools (like Git or Node), there's actually no path forward for it using browser-only APIs. As such, it's an interesting litmus test for just how far web apps can actually reach — not, as some people have wrongly assumed, because there's an inherent performance penalty on the web, but because of fundamental limits in the security model of the browser.

Bounding boxes

What's considered "possible" for a web app in, say, 2020? It may be easier to talk about what isn't possible, which avoids the judgment call on what is "suitable." For example, it's a safe bet that the following capabilities won't ever be added to the web, even though they've been hotly debated in and out of standards committees for years:

  • Read/write file access (died when the W3C pulled the plug on the Directories part of the Filesystem API)
  • Non-HTTP sockets and networking (an endless number of reasons, but mostly "routers are awful")

There are also a bunch of APIs that are in experimental stages, but which I seriously doubt will see stable deployment in multiple browsers, such as:

  • Web Bluetooth (enormous security and usability issues)
  • Web USB (same as Bluetooth, but with added attacks from the physical connection)
  • Battery status (privacy concerns)
  • Web MIDI

It's tough to get worked up about a lot of the initiatives in the second list, which mostly read as a bad case of mobile envy. There are good reasons not to let a web page have drive-by access to hardware, and who's hooking up a MIDI keyboard to a browser anyway? The physical web is a better answer to most of these problems.

When you look at both lists together, one thing is clear: Chrome apps have been a testing ground for web features. Almost all the not-to-be-implemented web APIs have counterparts in Chrome apps. And in the end, the web did learn from it — mainly that even in a sandboxed, locked-down, centrally distributed environment, giving developers that much power with so little install friction could be really dangerous. Rogue extensions and apps are a serious problem for Chrome, as I can attest: about once a week, shady people e-mail me to ask if they can purchase Caret. They don't explicitly say that they're going to use it to distribute malware and takeover ads, but the subtext is pretty clear.

The great thing about the web is that it can run code without any installation step, but that's also the worst thing about it. Even as a huge fan of the platform, the idea that any of the uncountable pages I visit in any given week could access USB directly is pretty chilling, especially when combined with exploits for devices that are plugged in, like hacking a phone (a nice twist on the drive-by jailbreak of iOS 4). Access to the file system opens up an even bigger can of worms.

Basically, all the things that we want as developers are probably too dangerous to hand out to the web. I wish that weren't true, but it is.

Untrusted computing

Let's assume that all of the above is true, and the web can't safely expand for developer tools. You can still build powerful apps in a browser, they just have to be supported by a server. For example, you can use a service like Cloud 9 (now an AWS subsidiary) to work on a hosted VM. This is the revival of the thick-client model: offline capabilities in a pinch, but ultimately you're still going to need an internet connection to get work done.

In this vision, we lean more heavily on the browser sandbox: a two-tier system with the web as a client runtime, and a native tier for code that needs more trust on the local machine. But can the sandbox bear that weight? Can the web be made safe? Is it safe now? The answer is, at best, "it depends." Every third-party embed or script exposes your users to risk — if you use an ad network, you don't have any real idea who could be reading their auth cookies or tracking their movements. The miracle of the web isn't that it is safe, it's that it manages to be useful despite how rampantly unsafe its defaults are.

So along with the shift back to thick clients has come a change in the browser vendors' attitude toward powerful API features. For example, you can no longer use geolocation or the camera/microphone in Chrome on pages that aren't served over HTTPS, with other browsers to follow. Safari already disallows third-party cookie access as a general rule. New APIs, like Service Worker, require HTTPS. And I don't think it's hard to imagine a world where an API also requires a strict Content Security Policy that bans third-party embeds altogether (another place where Chrome apps led the way).

The packaged app security model was that if you put these safeguards into place and verified the package contents, you could trust the code to access additional capabilities. But trusting the client was a mistake when people were writing Quakebots, and it stayed a mistake in the browser. In the new model, those controls are the minimum just to keep what you had. Anything extra that lives solely on the client is going to face a serious uphill battle.

Mind the gap

The longer that I work on Caret, the less I'm upset by the idea that its days are numbered. Working on a moderately successful open source project is exhausting: people have no problems making demands, sending in random changes, or asking the same questions over and over again. It's like having a second boss, but one that doesn't pay me or offer me any opportunities for advancement. It's good for exposure, but people die from exposure.

The one regret that I will have is the loss of Caret's educational value. Since its early days, there's been a small but steady stream of e-mail from teachers who are using it in classrooms, both because Chromebooks are huge in education and because Caret provides a pretty good editor with almost no fuss (you don't even have to be signed in). If you're a student, or poor, or a poor student, it's a pretty good starter option, with no real competition for its market niche.

There are alternatives, but they tend to be online-only (like Mozilla's Thimble) or they're not Chromebook friendly (Atom) or they're completely unacceptable in a just world (Vim). And for that reason alone, I hope Chrome keeps packaged apps around, even if they refuse to spend any time improving the infrastructure. Google's not great at end-of-life maintenance, but there are a lot of people counting on this weird little ecosystem they've enabled. It would be a shame to let that die.

August 1, 2016

Filed under: tech»web

<slide-show>

On Thursday, I'll be giving a talk at CascadiaFest on using custom elements in production. It's kind of a sales pitch, to convince people that adopting web components is safe to do, despite the instability of the spec and the contentious politics between browsers. After all, we've been publishing with several components at the Times for almost two years now, with good results.

When I gave an early version of this talk at SeattleJS, I presented by scrolling through a single text file instead of slides, because I've always wanted to do that. But for Cascadia, I wanted to do something a little more special, so I built the presentation itself out of custom elements, with the goal that it would demonstrate how to write code that works with both versions of the spec. It's also meant to be a good example for someone who's just learning how web components function — I use pretty much every custom elements feature at one point or another in 300 lines of code. You can take a look at the source for it here.

There are several strategies that I ended up emphasizing while writing the <slide-show> elements, primarily the heavy use of events to tame asynchronicity. It turns out that between V0, V1, and the two major polyfills, elements and their attributes are resolved by the parser with entirely different timing. It's really important that child elements notify their parent when they upgrade, and parents shouldn't assume that children are ready at startup.

One way to deal with asynchronous upgrades is just to put all your functionality in the parent element (our <leaflet-map> does this), but I wanted to make these slides easier to extend with new types (such as text, code, or image slides). In this case, the slide show looks for a parsedContent property on the current slide, and it's the child's job to populate and update that value. An earlier version called a parseContents() method, but using properties as "duck-typing" makes it much easier to handle un-upgraded elements, and moving the responsibility to the child also greatly simplified the process of watching slide contents for changes.
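
To make that concrete, here's a minimal sketch of the pattern (using the V1 class syntax, and made-up names like a <text-slide> element and a "slide-ready" event, not the presentation's actual code): the child populates a parsedContent property when it upgrades and fires a bubbling event, and the parent re-renders when it hears it.

//a child slide: populate parsedContent on upgrade, then announce readiness
class TextSlide extends HTMLElement {
  connectedCallback() {
    //the property the parent will duck-type against
    this.parsedContent = this.textContent.trim();
    //bubbles up to an ancestor <slide-show>
    this.dispatchEvent(new CustomEvent("slide-ready", { bubbles: true }));
  }
}
customElements.define("text-slide", TextSlide);

//the parent listens for upgrades instead of assuming children are ready
class SlideShow extends HTMLElement {
  connectedCallback() {
    this.addEventListener("slide-ready", () => this.render());
  }
  render() {
    var current = this.querySelector("text-slide");
    if (current && "parsedContent" in current) {
      console.log(current.parsedContent);
    }
  }
}
customElements.define("slide-show", SlideShow);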

A nice side effect of using live properties and events is that it "feels" a lot more like a built-in element. The modern DOM API is built on similar primitives, so writing the glue code for the UI ended up being very pleasant, and it's possible to interact using the dev tools in a natural way. I suspect that well-built component libraries in the future will be judged on how well they leverage a declarative interface to blend in with existing elements.

Ironically, between child elements and Shadow DOM, it's actually much harder to move between different polyfills than it is to write an element definition for both the new and old specifications. We've always written for Giammarchi's registerElement shim at the Times, and it was shocking for me to find out that Polymer's shim not only diverges from its counterpart, but also differs from Chrome's native implementation. Coding around these differences took a bit of effort, but it's probably work I should have done at the start, and the result is quite a bit nicer than some of the hacks I've done for the Times. I almost feel like I need to go back now and update them with what I've learned.

Writing this presentation was a good way to make sure I was current on the new spec, and I'm actually pretty happy with the way things have turned out. When WebKit started prototyping their own API, I started to get a bit nervous, but the resulting changes are relatively minor: some property names have changed, the lifecycle is ordered a bit differently, and upgrade code is called in the constructor (to encourage using the class syntax) instead of from a createdCallback() method. Most of these are positive alterations, and while there are some losses going from V0 to V1 (no is attribute to subclass arbitrary elements), they're not dealbreakers. Overall, I'm more optimistic about the future of web components than I have been in quite a while, and I'm looking forward to telling people about it at Cascadia!

July 14, 2016

Filed under: gaming»perspective

Emu Nation

It's hard to hear news of Nintendo creating a tiny, $60 NES package and not think of Frank Cifaldi's provocative GDC talk on emulation. Cifaldi, who works on game remastering and preservation (most recently on a Mega Man collection), covers a wide span of really interesting industry backstory, but his presentation is mostly infamous for the following quote:

The virtual console is nothing but emulations of Nintendo games. And in fact, if you were to download Super Mario Brothers on the Wii Virtual Console...

[shows a screenshot of two identical hex filedumps]

So on the left there is a ROM that I downloaded from a ROM site of Super Mario Brothers. It's the same file that's been there since... it's got a timestamp on it of 1996. On the right is Nintendo's Virtual Console version of Super Mario Brothers. I want you to pay particular attention to the hex values that I've highlighted here.

[the highlighted sections are identical]

That is what's called an iNES header. An iNES header is a header format developed by amateur software emulators in the 90's. What's that doing in a Nintendo product? I would posit that Nintendo downloaded Super Mario Brothers from the internet and sold it back to you.

As Cifaldi notes, while the industry has had a strong official anti-emulation stance for years, they've also turned emulation into a regular revenue stream for Nintendo in particular. In fact, Nintendo has used scaremongering about emulation to monopolize the market for any games that were published on its old consoles. In this case, the miniature NES coming to market in November is almost certainly running an emulator inside its little plastic casing. It's not that they're opposed to emulation, so much as they're opposed to emulation that they can't milk for cash.

To fully understand how demented this has become, consider the case of Yoshi's Island, which is one of the greatest platformers of the 16-bit era. I am terrible at platformers, but I love this game so much that I've bought it at least three times: once as the Game Boy Advance port, once on the Virtual Console, and once as an actual SNES cartridge back when Belle and I lived in Arlington. Nintendo made money on at least two of those copies. But now that we've sold our Wii, if I want to play Yoshi's Island again, I would have to give Nintendo more money, even though I have owned three legitimate copies of the game. Or I could grab a ROM and an emulator, which seems infinitely more likely.

By contrast, I recently bought a copy of Doom, because I'd never played through the second and third episodes. It ran me about $5 on Steam, and consists of the original WAD files, the game executable, and a preconfigured version of DOSBox that hosts it. I immediately went and installed Chocolate Doom to run the game fullscreen with better sound support. If I want to play Doom on my phone, or on my Chromebook, or whatever, I won't have to buy it again. I'll just copy the WAD. And since I got it from Steam, I'll basically have a copy on any future computers, too.

(Episode 1 is definitely the best of the three, incidentally.)

Emulation is also at the core of the Internet Archive's groundbreaking work to preserve digital history. They've preserved thousands of games and pieces of software via browser ports of MAME, MESS, and DOSBox. That means I can load up a copy of Broderbund Print Shop and relive summer at my grandmother's house, if I want. But I can also pull up the Canon Cat, a legendary and extremely rare experiment from one of the original Macintosh UI designers, and see what a radically different kind of computing might look like. There's literally no other way I would ever get to experience that, other than emulating it.

The funny thing about demonizing emulation is that we're increasingly entering an era of digital entertainment that may be unpreservable with or without it. Modern games are updated over the network, plugged into remote servers, and (on mobile and new consoles) distributed through secured, mostly-inaccessible package managers on operating systems with no tradition of backward compatibility. It may be impossible, 20 years from now, to play a contemporary iOS or Android game, similar to the way that Blizzard themselves can't recreate a decade-old version of World of Warcraft.

By locking software up the way that Nintendo (and other game/device companies) have done, as a single-platform binary and not as a reusable data file, we're effectively removing them from history. Maybe in a lot of cases, that's fine — in his presentation, Cifaldi refers offhand to working on a mobile Sharknado tie-in that's no longer available, which is not exactly a loss for the ages. But at least some of it has to be worth preserving, in the same way even bad films can have lessons for directors and historians. The Canon Cat was not a great computer, but I can still learn from it.

I'm all for keeping Nintendo profitable. I like the idea that they're producing their own multi-cart NES reproduction, instead of leaving it to third-party pirates, if only because I expect their version will be slicker and better-engineered for the long haul. But the time has come to stop letting them simultaneously re-sell the same ROM to us in different formats, while insisting that emulation is solely the concern of pirates and thieves.

June 20, 2016

Filed under: culture»america»race_and_class

Under our skin

This week, we've launched a major project at the Times on the words people use when talking about race in America. Under our skin was spearheaded by a small group of journalists after the paper came under fire for some bungled coverage. I think they did a great job — the subjects are well-chosen, the editing is top-notch, and we're trying to supplement it with guest essays and carefully-curated comments (as opposed to our usual all-or-nothing approach to moderation). I mostly watched from the sidelines on this one, as our resident expert on forcing Brightcove video to behave in a somewhat acceptable manner, and it was really fascinating watching it take shape.

May 26, 2016

Filed under: random»personal

Speaking schedule, 2016

After NICAR, I wasn't really sure I ever wanted to go to any conferences ever again — the travel, the hassle, the expense... who needs it? But I am also apparently unable to moderate my extracurricular activities in any way, even after leaving a part-time teaching gig, so: I'm happy to announce that I'll be speaking at a couple of professional conferences this summer, albeit about very different topics.

First up, I'll be facilitating a session at SRCCON in Portland about designing humane news sites. This is something I've been thinking about for a while now, mostly with regards to bots and "conversational UI" fads, but also as the debate around ads has gotten louder, and the ads themselves have gotten worse (see also). I'm hoping to talk about the ways that we can build both individual interactives and content management systems so that we can minimize the amount of accidental harm that we do to our readers, and retain their trust.

My second talk will be at CascadiaFest in beautiful Semiahmoo, WA. I'll be speaking on how we've been using custom elements in production at the Times, and encouraging people to build their own. The speaker list at Cascadia is completely bonkers: I'll be sharing a stage with people who I've been following for years, including Rebecca Murphey, Nolan Lawson, and Marcy Sutton. It's a real honor to be included, and I've been nervously rewriting my slides ever since I got in.

Of course, by the end of the summer, I may never want to speak publicly again — I may burn my laptop in a viking funeral and move to Montana, where I can join our departing editor in some kind of backwoods hermit colony. But for right now, it feels a lot like the best parts of teaching (getting to show people cool stuff and inspire them to build more) without the worst parts (grading, the school administration).

May 10, 2016

Filed under: tech»web

Behind the Times

The paper recently launched a new native app. I can't say I'm thrilled about that, but nobody made me CEO. Still, the technical approach it takes is "interesting:" its backing API converts articles into a linear stream of blocks, each of which is then hand-rendered in the app. That's the plan, at least: at this time, it doesn't support non-text inline content at all. As a result, a lot of our more creative digital content doesn't appear in the app, or is distorted when it does appear.

The justification given for this decision was speed, with the implicit statement being that a webview would be inherently too slow to use. But is that true? I can't resist a challenge, and it seemed like a great opportunity to test out some new web features I haven't used much, so I decided to try building a client. You can find the code here. It's currently structured as a Chrome app, but that's just to get around the CORS limit since our API doesn't have the Access-Control-Allow-Origin headers added.

The app uses a technique that's been popularized by Nolan Lawson's Pokedex.org, in which almost all of the time-consuming code runs in a Web Worker, and the main thread just handles capturing UI events and re-rendering. I started out with the worker process handling network and caching in IndexedDB (the poor man's Service Worker), and then expanded it to do HTML sanitization as well. There's probably other stuff I could move in, but honestly I think it's at a good balance now.

By putting all this stuff into a second script that runs independently, it frees up the browser to maintain a smooth frame rate in animations and UI response. It's not just the fact that I'm doing work elsewhere, but also that there's hardly any garbage collection on the main thread, which means no halting while the JavaScript VM cleans up. I thought building an app this way would be difficult, but it turns out to be mostly similar to writing any page that uses a lot of AJAX — structure the worker as a "server" and the patterns are pretty much the same.
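
As a rough sketch of that "worker as server" pattern (with made-up message types and a placeholder endpoint, not the app's real protocol), the main thread just posts requests and renders whatever comes back, while the worker does the fetching and, in the real app, the IndexedDB caching:

//main.js (UI thread): send a request, render whatever comes back
var worker = new Worker("worker.js");

worker.addEventListener("message", function(e) {
  if (e.data.type === "articleList") {
    renderList(e.data.articles); //hypothetical rendering function
  }
});

worker.postMessage({ type: "getArticleList", section: "news" });

//worker.js: network and parsing stay off the main thread
self.addEventListener("message", function(e) {
  if (e.data.type === "getArticleList") {
    fetch("/api/sections/" + e.data.section) //placeholder endpoint
      .then(function(response) { return response.json(); })
      .then(function(articles) {
        self.postMessage({ type: "articleList", articles: articles });
      });
  }
});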

The other new technology that I learned for this project is Mithril, a virtual DOM framework that my old coworkers at ArenaNet rave about. I'm not using much of its MVC architecture, but its view rendering code is great at gradually updating the page as the worker sends back new data: I can generate the initial article list using just the titles that come from one network endpoint, and then add the thumbnails that I get from a second, lower-priority request. Readers get a faster feed of stories, and I don't have to manually synchronize the DOM with the new data.
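
For example, here's a simplified sketch of that incremental rendering (not the client's actual view code, and the "main" mount point is a placeholder): the view function just draws whatever data has arrived so far.

//re-render the article list with whatever data has arrived so far
var articles = []; //each item: { title, thumbnail }

var listView = function() {
  return m("ul.articles", articles.map(function(article) {
    return m("li", [
      //the thumbnail may not have arrived yet; render it once it exists
      article.thumbnail ? m("img", { src: article.thumbnail }) : null,
      m("span.title", article.title)
    ]);
  }));
};

var redraw = function() {
  m.render(document.querySelector("main"), listView());
};

//call redraw() when the worker sends titles, and again when thumbnails land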

The metrics from this version of the app are (unsurprisingly) pretty good! The biggest slowdown is the network, which would also be a problem in native code: loading the article list for a section requires one request to get the article IDs, and then one request for each article in that section (up to 21 in total). That takes a while — about a second, on average. On the other hand, it means we have every article cached by the time the user chooses something to read, so requesting and loading an individual article hovers around 150ms on my Chromebook.

That's not to say that there aren't problems, although I think they're manageable. For one thing, the worker and app bundles are way too big right now (700KB and 200KB, respectively), in part because they're pulling in a bunch of big NPM modules to do their processing. These should be lazy-loaded for speed as much as possible: we don't need HTML parsing right away, for example, which would cut a good 500KB off of the worker's initial size. Every kilobyte of script is roughly 1ms of load time on a mobile device, so spreading that out will drastically speed up the app's startup time.
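
One way to do that inside a worker, sketched here with a hypothetical bundle name, is to defer importScripts() until the first article actually needs sanitizing:

//defer the heavy HTML parser until the first sanitize call
var parserLoaded = false;

var ensureParser = function() {
  if (!parserLoaded) {
    //importScripts blocks this worker, but never the UI thread
    importScripts("htmlparser-bundle.js"); //placeholder bundle name
    parserLoaded = true;
  }
};

var sanitize = function(html) {
  ensureParser();
  //...parse and scrub the markup here...
  return html;
};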

As an interesting side note, we could cut almost all that weight entirely if the document.implementation object were available in Web Workers. Weir, for example, does all its parsing and sanitization in an inert document. Unfortunately, the DOM isn't thread-safe, so nothing related to document is available outside the main process, and I suspect a serious sanitization pass would blow past our frame budget anyway. Oh well: htmlparser2 and friends it is.

Ironically, the other big issue is mostly a result of packaging this up as a Chrome app. While that lets me talk to the CMS without having CORS support, it also comes with a fearsome content security policy. The app shell can't directly load images or fonts from the network, so we have to load article thumbnails through JavaScript manually instead. Within Chrome's <webview> tag, we have the opposite problem: the webview can't load anything from the app, and it has a weird protocol location when loaded from a data URL, so all relative links have to be rewritten. It's not insurmountable, but you have to be pretty comfortable with the way browsers work to figure it out, and the debugging can get a little hairy.
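
The usual workaround for the image restriction, sketched below with a placeholder URL (Chrome apps can make cross-origin XHR requests when the manifest asks for permission), is to fetch the thumbnail as a blob and hand the element a local object URL instead of a remote src:

//fetch a remote thumbnail as a blob, since the CSP blocks remote src values
var loadThumbnail = function(url, img) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url);
  xhr.responseType = "blob";
  xhr.onload = function() {
    //object URLs are local, so the app's CSP allows them
    img.src = URL.createObjectURL(xhr.response);
  };
  xhr.send();
};

loadThumbnail("https://example.com/thumb.jpg", document.querySelector("img.thumbnail"));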

So there you have it: a web app that performs like native, but includes support for features like DocumentCloud embeds or interactive HTML graphs. At the very least, I think you could use this to advocate for a hybrid native/web client on your news site. But there's a strong argument to be made that this could be your only app: add a Service Worker and (in Chrome and Firefox) it could load instantly and work offline after the first visit. It would even get a home screen icon and push notification support. I think the possibilities for progressive web apps in the news industry are really exciting, and building this client makes me think it's doable without a huge amount of extra work.
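
A minimal sketch of that offline piece (file names are placeholders, and this isn't part of the current client) would be a service worker that pre-caches the app shell on install and answers requests cache-first:

//sw.js: pre-cache the app shell on install, answer requests cache-first
var CACHE = "app-shell-v1";

self.addEventListener("install", function(e) {
  e.waitUntil(
    caches.open(CACHE).then(function(cache) {
      return cache.addAll(["/", "/index.js", "/style.css"]); //placeholder files
    })
  );
});

self.addEventListener("fetch", function(e) {
  e.respondWith(
    caches.match(e.request).then(function(cached) {
      return cached || fetch(e.request);
    })
  );
});

//registered from the page with: navigator.serviceWorker.register("sw.js");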

April 29, 2016

Filed under: journalism»education

Reporting with Python

This month, I'm teaching a class at the University of Washington on reporting with Python. This seems like an odd match for me, since I hardly ever work with Python, but I wanted to do a class that was more journalism-focused (as opposed to the front-end development that I normally teach) and teaching first-time programmers how to do data analysis in Node just isn't realistic. If you're interested in following along, the repository with the class materials is located here.

I'm not the Times' data reporter, so I don't get to do this kind of analysis often, but I always really enjoy it when I do. The danger when planning a class on a fun topic is that it's easy to over-stuff the curriculum in my eagerness to cover the techniques that I think are particularly interesting. To fight that impulse, I typically make a list of material I want to cover, then cut it in half, then think about cutting it in half again. As a result, there's a lot of material that didn't make it in — primarily SQL and web scraping.

What's left, however, is a pretty solid base for reporters who are interested in starting to use code to generate and explore stories. Last week, we cleaned and searched 1,000 text files for a string, and this week we'll look at doing analysis on CSV files. In the final session, I'm planning on taking a deep dive into regular expressions: so much of reporting is based around interrogating text files, and the nice thing about an education in regex is that it will travel into almost any programming language (as well as being useful for many command line tools like grep or sed).

If I can get anything across in this class, I'm hoping to leave students with an understanding of just how big digital scale can be, and how important it is to have tools for handling it. I was talking one night with one of the Girl Develop It organizers, who works for a local analytics company. Whereas millions of rows of data is a pretty big deal for me, for her it's a couple of hours on a Saturday — she's working at a whole other order of magnitude. I wouldn't even know where to start.

Right now, most record requests and data dumps operate more at my scale. A list of all animal imports/exports in the US for the last ten years is about 7 million records, for example. That's approachable with Python (although you'd be better off learning some SQL for the heavy lifting), but it's past the point where Excel is useful, and it certainly couldn't be explored by hand. If you can't code, or you don't have access to someone who does, you can't write that story.

At some point, the leaks and government records that reporters pore over may grow to a larger kind of scale (leaks, certainly; government data will probably stay aggregated as long as there are privacy concerns). When that happens, reporters will have to develop the kinds of skills that I don't have. We already see hints of this in the tremendous tooling and coordination required for investigating the Panama Papers. But in the meantime, I think it's tremendously important that students learn how to automate data at a basic level, and I'm really excited that this class will introduce them to it.

April 14, 2016

Filed under: tech»coding

Calculated Amalgamation

In a fit of nostalgia, I've been trying to get my hands on a TI-82 calculator for a few weeks now. TI BASIC was probably the first programming language in which I actually wrote significant amounts of code: although a few years later I'd start working in C for PalmOS and Windows CE, I have a lot of memories of trying to squeeze programs for speed and size during slow class periods. While I keep checking Goodwill for spares, there are plenty of TI calculator emulation apps, so I grabbed one and loaded up a TI-82 ROM to see what I've retained.

Actually, TI BASIC is really weird. Things I had forgotten:

  • You can type in all-caps text if you want, but most of the time you don't, because all of the programming keywords (If, Else, While, etc.) are actually single "character" glyphs that you insert from a menu.
  • In fact, pretty much the only things you type out manually are variable names, of which you get 26 (one for each letter). There are also six arrays (max length 99), five two-dimensional matrices (limited by memory), and a handful of state variables you can abuse if you really need more. Everything is global.
  • Assignment doesn't use =, which is reserved for testing equality, but a left-to-right arrow operator: value → dest. I imagine this clears up a lot of ambiguity in the parser.
  • Of course, if you're processing data linearly, you can do a lot without explicit variables, because the result of any statement gets stored in Ans. So you can chain a lot of operations together as long as you just keep operating on the output of the previous line.
  • There's no debugger, but you can hit the On key to break at any time, and either quit or jump to the current line.
  • You can call other programs and they do return after calling, but there are no function definitions or return values other than Ans (remember, everything is global). There is GOTO, but it apparently causes memory leaks when used (thanks, Dijkstra!).

I'd romanticized it over time — the self-contained hardware, the variable-juggling, the 1-bit graphics on a 96x64 screen. Even today, I'm kind of bizarrely fascinated by this environment, which feels like the world's most cumbersome register VM. But loading up the emulator, it's obvious why I never actually finished any of my projects: TI BASIC is legitimately a terrible way to work.

In retrospect, it's obviously a scripting language for a plotting library, and not the game development environment I wanted it to be when I was trying to build Wolf3D clones. You're supposed to write simple macros in TI BASIC, not full-sized applications. But as a bored kid, it was a great playground, and the limitations of the platform (including its molasses-slow interpreter) made simple problems into brainteasers (it's almost literally the challenge behind TIS-100).

These days, the kids have it way better than I did. A micro:bit is cheaper and syncs with a phone or computer. A Raspberry Pi is a real computer of its own, as is the average smartphone. And a laptop or Chromebook with a browser is miles more productive than a TI-82 could ever be. On the other hand, they probably can't sneak any of those into their trig classes and get away with it. And maybe that's for the best — look how I turned out!

March 22, 2016

Filed under: tech»web

ES6 in anger

One of the (many) advantages of running Seattle Times interactives on an entirely different tech stack from the rest of the paper is that we can use new web features as quickly as we can train ourselves on them. And because each news app ships with an isolated set of dependencies, it's easy to experiment. We've been using a lot of new ES6 features as standard for more than a year now, and I think it's a good chance to talk about how to use them effectively.

The Good

Surprisingly (to me at least), the single most useful ES6 feature has been arrow functions. The key to using them well is to restrict them only to one-liners, which you'd think would limit their usefulness. Instead, it frees you up to write much more readable JavaScript, especially in array processing. As soon as it breaks to a second line (or seems like it might do so in the future), I switch to writing regular function statements.


//It's easy to filter and map:
var result = list.filter(d => d.id).map(d => d.value);

//Better querySelectorAll with the spread operator:
var $ = s => [...document.querySelectorAll(s)];

//Fast event logging:
map.on("click", e => console.log(e.latlng));

//Better styling with template strings:
var translate = (x, y) => `translate(${x}px, ${y}px);`;

Template strings are the second biggest win, especially as above, where they're combined with arrow functions to create text snippets. Having a multiline string in JS is very useful, and being able to insert arbitrary values makes building dynamic popups or CSS styles enormously simpler. I love writing template strings for quick chunks of templating, or embedding readable SQL in my Node apps.

Despite the name, template strings aren't real templates: they can't handle loops or conditionals, they only do simple interpolation, and the interface for using "tagged" strings is cumbersome. If you're writing very long template strings (say, more than five lines), it's probably a sign that you need to switch to something like Handlebars or EJS. I have yet to see a "templating language" built on tagged strings that didn't seem like a wildly frustrating experience, and despite the industry's shift toward embedded DSLs like React's JSX, there is a benefit to keeping different types of code in different files (if only for widespread syntax highlighting).

The last features I've really embraced are destructuring and the object literal shorthand. They're mostly valuable for cleanup, since all they do is cut down on repetition. But they're pleasant to use, especially when parsing text and interacting with CommonJS modules.



//Splitting dates is much nicer now:
var [year, month, day] = dateString.split(/\/|-/);

//Or getting substrings out of a regex match:
var re = /(\w{3})mlb_(\w{3})mlb_(\d+)/;
var [match, away, home, index] = gameString.match(re);

//Exporting from a module can be simpler:
var x = "a";
var y = "b";
module.exports = { x, y };

//And imports are cleaner:
var { x } = require("module");

The bad

I've tried to like ES6 classes and modules, and it's possible that one day they're going to be really great, but right now they're not terribly friendly. Classes are just syntactic sugar around ES5 prototypes — although they look like Java-esque class statements, they're still going to act in surprising ways for developers who are used to traditional inheritance. And for JavaScript programmers who understand how the language actually works, class definitions boast a weird, comma-less syntax that's sort of like the new object literal syntax, but far enough off that it keeps tripping me up.
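
For example (purely illustrative), the two forms look close enough that it's easy to reach for the wrong separator:

//object literal shorthand: methods are separated by commas
var literal = {
  start() { return "go"; },
  stop() { return "halt"; }
};

//class body: the same method shorthand, but commas are a syntax error
class Machine {
  start() { return "go"; }
  stop() { return "halt"; }
}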

The turning point for the new class keyword will be when the related, un-polyfillable features make their way into browsers — I'm thinking mainly of the new Symbols that serve as feature flags and the ability to extend Array and other built-ins. Until that time, I don't really see the appeal, but on the other hand I've developed a general aversion to traditional object-oriented programming, so I'm probably not the best person to ask.

Modules also have some nice features from a technical standpoint, but there's just no reason to use them over CommonJS right now, especially since we're already compiling our applications during the build process (and you have to do that, because browser support is basically nil). The parts that are really interesting to me about the module system — namely, the configurable loader system — aren't even fully specified yet.

New discoveries

Most of what we use on the Times' interactive team is restricted to portions of ES6 that can be transpiled by Babel, so there are a lot of features (proxies, for example) that I don't have any experience using. In a Node environment, however, I've had a chance to use some of those features on the server. When I was writing our MLB scraper, I took the opportunity to try out generators for the first time.

Generators are borrowed liberally from Python, and they're basically constructors for custom iterable sequences. You can use them to make normal objects respond to language-level iteration (i.e., for ... of and the spread operator), but you can also define sequences that don't correspond to anything in particular. In my case, I created a generator for the calendar months that the scraper loads from the API, which (when hooked up to the command line flags) lets users restart an MLB download from a later time period:


//feed this a starting year and month
var monthGen = function*(year, month) {
  while (year < 2016) {
    yield { year, month };
    month++;
    if (month > 12) {
      month = 1;
      year++;
    }
  }
};

//generate a sequence from 2008 to 2016
var months = [...monthGen(2008, 1)];

That's a really nice code pattern for creating arbitrary lists, and it opens up a lot of doors for JavaScript developers. I've been reading and writing a bit more Python lately, and it's been amazing to see how much a simple pattern like this, applied language-wide, can really contribute to its ergonomics. Instead of the Stream object that's common in Node, Python often uses generators and iteration for common tasks, like reading a file line-by-line or processing a data pipeline. As a result, I suspect most new Python programmers need to survey a lot less intellectual surface area to get up and running, even while the guts underneath are probably less elegant for advanced users.

It surprised me that I was so impressed with generators, since I haven't particularly liked Python very much in the past. But in reading the Cookbook to prep for a UW class in Python, I've realized that the two languages are actually closer together than I'd thought, and getting closer. Python's class implementation is actually prototypical behind the scenes, and its use of duck typing for built-in language features (such as the with statement) bears a strong resemblance to the work being done on JavaScript Promises (a.k.a. "then-ables") and iterator protocols.

It's easy to be resistant to change, and especially when it's at the level of a language (computer or otherwise). I've been critical of a lot of the decisions made in ES6 in the past, but these are positive additions on the whole. It's also exciting, as someone who has been working in JavaScript at a deep level, to find that it has new tricks, and to stretch my brain a little integrating them into my vocabulary. It's good for all of us to be newcomers every so often, so that we don't get too full of ourselves.

March 16, 2016

Filed under: politics»national»executive

Seventy-two

Since it is election season, when I ran out of library books last week I decided to re-read Hunter S. Thompson's Fear and Loathing: On the Campaign Trail '72, as I do every four years or so. Surprisingly, I don't appear to have written about it here, even though it's one of my favorite books, and the reason I got into journalism in the first place.

On the Campaign Trail is always a relevant text, but it feels particularly apt this year. In the middle of the Trump presidential run, the book's passage on the original populist rabble-rouser, George Wallace, could have been written yesterday if you just swap some names — not to mention the whirlwind chaos of the primaries and a convention battle. On the other hand, with writing this good, there's really no wrong time to bring it up.

Even before his death in 2005, when most people thought about Thompson, what usually came to mind was wild indulgence: drugs, guns, and "bat country." Ironically, On the Campaign Trail makes the strong case that his best writing was powerfully controlled and focused, not loose and hedonistic: the first two-thirds of the book (or even the first third) contain his finest work. After the Democratic national convention, and the resulting breakdowns in Thompson's health, the analysis remains sharp but the writing never reaches those heights again.

That's not to say that the book is without moments of depravity — his account of accidentally unleashing a drunken yahoo on the Muskie whistlestop tour is still a classic, not to mention the extended threat to chop the big toes off the McGovern political director — but it's never random or undirected. For Thompson, wild fabrication is the only way to bring readers into the surreal world of a political race. His genius is that it actually works.

Despite all that, if On the Campaign Trail has a legacy, it's not the craziness, the drugs, or even the politics. The core of the book is two warring impulses that drive Thompson at every turn: sympathy for the voters who pull the levers of democracy, and simultaneously a deep distrust of the kind of people that they reliably elect. The union of the two is the fuel behind his best writing. Or, as he puts it:

The highways are full of good mottos. But T.S. Eliot put them all in a sack when he coughed up that line about... what was it? Have these Dangerous Drugs fucked my memory? Maybe so. But I think it went something like this:

"Between the Idea and the Reality... Falls the Shadow."

The Shadow? I could almost smell the bastard behind me when I made the last turn into Manchester. It was late Tuesday night, and tomorrow's schedule was calm. All the candidates had zipped off to Florida — except for Sam Yorty, and I didn't feel ready for that.

The next day, around noon, I drove down to Boston. The only hitchhiker I saw was an eighteen-year-old kid with long black hair who was going to Reading — or "Redding," as he said it — but when I asked him who he planned to vote for in the election he looked at me like I'd said something crazy.

"What election?" he asked.

"Never mind," I said. "I was only kidding."
