
August 7, 2015

Filed under: gaming»software»mass_effect

Mess Affect

There's probably not a better modern space opera than the Mass Effect games, which is what makes their wildly incoherent plot all the more bizarre. Replaying the first title, it's hard to miss the way that it eventually gets undercut by everything in the third. I can't decide which one looks better in hindsight — Mass Effect 1 is less ambitious (and not nearly as good mechanically), while 3 is ultimately lazier and, I think, more dishonest — but I think it does point out a really interesting problem in the way that Bioware builds their big, tentpole franchises.

Let's recap: in the first chapter, Commander Shepard (that's you) stumbles across a galactic double-agent named Saren while on a rescue mission, and is given carte blanche to hunt the traitor down. Eventually, it's revealed that Saren is working for a giant murderbot right out of central casting, and their plan is to open a gateway that lets all the other murderbots into this universe from whatever pocket dimension they've been hiding in. But surprise! The gateway is actually the ancient space station that galactic civilization picked as its center of government (living in Seattle after that New Yorker article, I know how they feel). Lots of stuff explodes.

By the time Mass Effect 3 rolls around, it would make sense for people to have thought, geez, maybe living on a massive, prehistoric portal to robot doom-town might not be a good idea. You'd think that, but you'd be wrong. In fact, during the plot of the final game, not only are people still living on the Citadel, but Shepard finds out that it doubles as the power source for the super-weapon that will wipe out the bad guys. Handy! And by handy, I mean tremendously lazy in a narrative sense.

Truthfully, the overarching plot is the weakest part of the Mass Effect franchise. The first game is too small to sell "epic scope" — when characters like Wrex die, they haven't been around long enough to feel important, or for many players to even realize they can be saved. The third spends too much time trying to wrap up plot threads instead of actually telling a story.

By contrast, Mass Effect 2 is a heist flick. The beginning is nonsense, as is the ending (the less said about the giant glowing skeleton boss, the better), partly because those are the pieces that connect to the other storylines. But the middle 95% of the game is brilliant, because it mostly tosses the Reapers and their Robot Death Party out the window. Instead, you wander from planet to planet, assembling your crew and settling their debts so they can join you in the final mission. It's in these smaller stories that the writers can actually explore the universe they've built, with all its weird little corners and rivalries.

At some point, it starts to feel like the quest for spectacle is not helping Mass Effect, or Bioware games, or games in general, in much the same way that the "huge crashing object" finale is the worst part of our society's superhero movie infestation. Maybe it's a broken record to say that we need more AAA games that are more about character and less about saving the city, country, world, or galaxy. It's still true, though, and the best evidence for it is the games themselves.

July 22, 2015

Filed under: journalism»industry

Covering letters

It's a low bar to clear, but I think I can honestly say that journalism has a better diversity record compared to tech. If there were a newsroom the size of Facebook, chances are high it would have hired more than 7 black people last year. But that doesn't mean we can't do better. And if we're going to talk about hiring in journalism, we need to talk about interns.

NPR's visuals team has decided to try making internships more diverse, by being transparent about their requirements. Basically, they want to be clear about the expectations around cover letters and interviewing, so that people from non-privileged backgrounds know to prepare for them. I know and like several members of the team there, so I'm going to give them the benefit of the doubt when they say that there's more to come, but as a diversity program this seems a bit thin.

Firstly, a post on a little-trafficked blog is not exactly a high-visibility broadcast (said post isn't linked from any of the open internship positions as far as I could tell). It's easy for people to miss. More importantly, if the team is finding that cover letters and interviews are excluding good candidates, maybe the point should be to change the way that those are evaluated (or drop them entirely). Perhaps cover letters are not a great criterion for picking interns, or the way you're looking at them is biased in some way.

My own thoughts on this are complicated, not least because I see the playing field being artificially manipulated from all sides. I'm always amazed when I teach workshops at UW and hear that students may be on their fourth or fifth internship. They're behaving rationally — a lot of journalism careers are founded on student internships — but it's still bizarre to think that the path to a newsroom job might require literally years of unpaid or low-paying labor. If nothing else, there are a lot of people for whom that's just not an option.

Perhaps this is why, as CJR noted in a just-published report, minority journalists aren't finding jobs at rates proportional to graduation. In fact, minorities who graduated with a degree in journalism were 17% less likely to find a print journalism job than their white counterparts, compared to only a 2% difference in advertising. As Alex Williams states:

Overall, only 49 percent of minority graduates that specialized in print or broadcasting found a full-time job, compared to 66 percent of white graduates. These staggering job placement figures help explain the low number of minority journalists. The number of minorities graduating from journalism programs and applying for jobs doesn’t seem to be the problem after all. The problem is that these candidates are not being hired.

I think the lessons from this are two-fold. First, I think we should be better about spreading internships out to a wider range of students. That's partly about selecting more diverse candidates, but it's also about turning down applicants who have already interned several times over in favor of candidates who need more of a boost. Internships are about experience, but they're also a way of pre-selecting who we want in the newsrooms of the future by burnishing their resumes. It's great to see NPR taking some responsibility, small or not, for their role in the pipeline. Hopefully other organizations will follow suit.

Additionally, maybe we should be less interested in internships as hiring criteria in the first place. Although my corner of the field is a little atypical, many of the best digital journalists I know didn't enter the field through a traditional career path (myself included). If our goal is to diversify our newsrooms, being accepting of a variety of different backgrounds and experiences is part of how we get there. So a candidate didn't have an internship. So what? Can they write? Can they edit? Can they code?

I often worry about over-stressing credentials in journalism. Sure, it helps separate the wheat from the chaff, but it also brushes over the fact that what we do just isn't that hard. We go places, talk to people, and then write it down and give it to other people to read. You don't need a degree for that (as Michael Lewis aptly chronicled more than 20 years ago), and you shouldn't necessarily need an internship. As a community, we mourned the passing of David Carr, but we haven't learned the lessons he taught to writers like Ta-Nehisi Coates, about hiring "knuckleheads" and molding them into the industry we want to be. And until we do, we will still struggle to find newsrooms that reflect modern American diversity.

July 10, 2015

Filed under: gaming»software»zachtronics

MOV ACC LEFT

It seems vaguely ridiculous to spend my days working on a computer, and then come home and write assembly code for an hour or two, much less enjoy it. That's how good TIS-100 is: a deranged simulator for a broken alien computer, it's the kind of game where the solutions are half inspiration, half desperate improvisation.

Here's the idea: the game boots up a fake computer, which immediately fails POST and dumps you into the debugger. Inside, it's made up of chips arranged in a grid, each of which can be programmed with up to 15 instructions in an invented assembly language. In a series of puzzles, you're given a set of inputs and a list of expected output, and it's up to you to write the transformation in the middle.

Complicating matters is the fact that TIS-100 processors are designed to be as eccentric as possible. Nodes have no RAM, just a single addressable register and a second backup register that can be swapped out. They're also severely limited in what they can do, but very good at communicating with each other. So each solution usually involves figuring out how to split up logic and shuffle values between nodes without deadlocking the CPU or writing yourself (literally) into a corner. There's a bunch of tricks you learn very quickly, like using a side node as a spare register or sending bits back and forth to synchronize "threads."

It's not perfect. Once I beat TIS-100, I didn't feel an urge to start back up the way I do for something more "game-like" (say, XCOM or Mass Effect). But it does have the wonderful quality that you can solve most of its puzzles while you're away from the computer, and actually typing them in is a bit of an anticlimax. For all its exaggeration and contrivance, I'm not sure you could get a better simulation of programming than that.

June 25, 2015

Filed under: tech»web

A good (virtual) walk spoiled

Earlier today I took the wraps off of the private repo for our Chambers Bay interactive flyover. You can find the source code here, and a post on our dev blog about it here. It was a really fun challenge, and a rare example of using WebGL in a news capacity (the NYT did the Dawn Wall, but that's the only one I can remember recently).

From a technical standpoint, this was my first three.js project, and the experience was largely positive. I think there's a strong case to be made that three.js is basically jQuery for WebGL: sure, you don't need it, but it only takes a couple of features to make it worthwhile. In this case, I didn't particularly feel like writing a model loader or a scene graph. There are still plenty of hooks to write the parts that I do enjoy, like the fragment shader for the landscape (check out that sweet dithering), or the UI for directing the camera. Sure, three.js is a relatively large library, but I'm loading 4MB of textures and another 4MB of gzipped landscape model, so what's a few more hundred KB of code?
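For a sense of what the library buys you, here's roughly the minimal three.js boilerplate for a scene like this. To be clear, this is a sketch, not the actual flyover code: the model path, camera numbers, and loader choice are stand-ins.

```javascript
// A minimal three.js scene: renderer, camera, a model loaded from JSON,
// and a render loop. Everything here (the file name, the camera position)
// is a placeholder, not the real Chambers Bay assets.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 10000);
var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// three.js ships model loaders, so you don't write your own parser
var loader = new THREE.JSONLoader();
loader.load("landscape.json", function(geometry, materials) {
  scene.add(new THREE.Mesh(geometry, new THREE.MeshFaceMaterial(materials)));
});

camera.position.set(0, 500, 1000);
camera.lookAt(scene.position);

(function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
})();
```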

WebGL itself runs surprisingly well these days, although failure modes do not seem to be its strong suit. For example, the browser may have WebGL support enabled, but then crash when it tries to render (or it may be lying about support, as with the remote VM sessions used in Times meeting rooms). That said, I was astonished to find that pretty much everything (mobile included) could run the landscape at a solid frame rate, despite the fact that it's a badly-optimized mesh with 150,000 triangles. Even iOS, which usually falls over and dies when WebGL pushes past its skimpy RAM limits, was able to run smoothly once I added a low-res texture for it to use.
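The capability check itself isn't complicated, but it has to actually try creating a context rather than trusting what the browser advertises. Something like the sketch below catches the easy failures (the two handler functions are hypothetical); the lies-then-crashes case still needs handling further downstream.

```javascript
// Try to actually create a WebGL context instead of trusting feature flags.
// If anything throws or comes back null, fall back to a static rendering.
function webglAvailable() {
  try {
    var canvas = document.createElement("canvas");
    var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
    return !!gl;
  } catch (e) {
    return false;
  }
}

if (webglAvailable()) {
  initFlyover();        // hypothetical: boot the three.js scene
} else {
  showStaticFallback(); // hypothetical: swap in a plain image of the course
}
```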

This was an ambitious project using some pretty cutting-edge web technology, which makes it interesting in light of arguments that the web suffers from "featuritis". After all, when you're talking about feature overkill, WebGL is a barnstormer. But this story would have been tough to tell another way, and it would have never had the same reach siloed in an app store.

Or take Paul Ford's mind-boggling What is Code? in Businessweek this month: behind Bloomberg's Trapper Keeper design aesthetic, it's a powerful article that integrates animations, videos, and interactive demonstrations with the textual message. Ironically, I saw many of the same people that criticize web apps going wild for Ford's piece, a stance that I can only attribute to sophistry.

My team at the Seattle Times has gradually abandoned the term "news apps," since everyone who hears it assumes we're actually writing iPhone clients for the paper. As a term of art, it has always been clumsy. But it does strike at a crucial quality of what we do, which occupies a gray area between "text" and "program." And if it seems like I'm touchy about pundits who think we should abandon the web, this is in large part the reason why.

Arguments that browsers should just go back to being document viewers ignore the fact that HTML is not just a text format: it's a hypermedia format, and those have always blurred the already-fuzzy lines between data and code (see also: Excel, Hypercard, and IPython notebooks). It's true that the features of the web platform are often abused. Nobody likes slow navigation, ad popups, or user tracking scripts. But it's those same features that make new kinds of storytelling possible — my journalism is built on the same heavily-structured, "over-tooled" web platform that critics find so objectionable. I wouldn't give that up for all the native apps in the world.

June 8, 2015

Filed under: journalism»professional

Paper Anniversary

It's ironic, I guess, that I was so busy at the Seattle Times a couple of weeks ago I forgot to write about my one-year anniversary here. Anyway: it's been quite a year! I've done real estate visualizations, provided an overview of Oso Valley development, and covered the Washington state elections. I did much of the development on our major investigative pieces, Loaded with Lead and Sell Block (not to mention graphs and narrative interactives for the Warren Buffett mobile home investigation). I made a Seahawks fan map so good that the team outright stole it for themselves. For the local architecture buff, I worked on a building quiz, and for the beer fans I helped build the landing page for our Brew with Us project. Want to know where the May Day protests went? I built a map for that. And this is just the big stuff.

In addition to the externally-facing development, I've been working on building tools that are used by the rest of the newsroom. I think our news app scaffolding is as good as anyone's in this business. We're leading the industry in custom element development, with responsive frames, Leaflet maps, and more. The watermarking tool I made on my second day is still in use, and will probably outlive me entirely at this point.

I have always had a low threshold for boredom, a character flaw that's led to overpacking for every trip I've ever taken and a general inability to read literary fiction. I love working in a newsroom for many reasons, but one of the greatest has always been that I am rarely bored here, and it never lasts more than two weeks when it does happen. I cannot recommend this job highly enough for technical people who want to have an impact, or journalists who want to break out of a single beat. Working at the Seattle Times has been the most fun I've had at a job in a long time. I can't wait to see what the next year brings.

"As I look back over a misspent life, I find myself more and more convinced that I had more fun doing news reporting than in any other enterprise. It is really the life of kings."

--H.L. Mencken

May 20, 2015

Filed under: journalism»industry

Instant Noodles

Like all Facebook's attempts to absorb the news industry, there's a probable timeline their new Instant Articles will follow, and it basically looks like this:

  • 2015: Facebook introduces Instant Articles, in which a few media partners push their content directly into Facebook's servers, and (in the iPhone app only) it gets rendered without leaving the application. "Content," in this case, even includes the publisher's own ad and tracking systems.
  • 2016: The program expands to other publishers, albeit possibly with a few more "refinements" (read: restrictions) on what those publishers are allowed to do. It becomes fashionable in the newsroom to harass me about it.
  • Late 2016: Once Instant Articles gets some traction, Facebook finds a way to sabotage or undercut it. Either they'll introduce more restrictions on allowable features, or they'll lower the frequency at which the posts appear and charge newsrooms to "promote" them (or both — why take half measures?).
  • 2017: Noting that the magical promised ad dollars have not materialized (or are eaten up by tithes back to The Algorithm), media organizations start quietly reducing their Instant Article publishing rate. Jeff Jarvis writes a sad editorial about it.
  • 2018: Claiming it was an "educational" experiment, Facebook shuts down the program. Rumors begin circulating about its VR news platform, in which the New York Times will publish for Oculus Rift.

Instant Articles is not the first time Facebook has tried to take over the web, and it won't be the last. They're very bad at it, probably because they're the original kings of empty promises: working with Facebook is a constant stream of exasperation, until either you realize that they're incapable of maintaining a stable API/business relationship, or you slit your wrists. They've done it to game developers (goodbye, Farmville), to other newsrooms (remember Washington Post Social Reader?), and to anyone else who's tried to build on the various Facebook "platforms."

Lots of people have written very smart reactions to the Instant Articles announcement — I'm partial to Josh Marshall's behind-the-scenes take, John Herrman's spiral of bemused horror, and Zeynep Tufekci's reminder that Facebook cannot be trusted to engage honestly with its role as gatekeeper.

It's probably more fun to engage with the self-proclaimed "controversial" opinions, like this profoundly dumb thought-leadering from MG Siegler:

With Instant Articles, Facebook has not only done a 180 from what Mark Zuckerberg has called the company's biggest mistake, they've now done another lap just to prove a point.

They did a 180, and then took a lap, so... they ran the race backwards, which is a good thing? Somewhere, Tom Friedman feels a twinge of jealousy.

Not only is the web not fast enough for apps, it's not fast enough for text either. And you know what, they're right.

"They're right" that an app loading pre-cached text can be faster than a web browser downloading that same text from the network, yes. Apparently our plan now is just to restrict your reading material to what Facebook can download ahead of time. I hope you like Upworthy lists.

Though, in a way, Facebook itself really is just a web browser. It's just a different, newfangled one for a new era. A mobile era.

A different, newfangled web browser that only goes to Facebook, apparently. Who would want to read anything else? In the future, all websites are Facebook. (Ironically, according to the Instant Articles FAQ, they're fed from HTML anyway, so they're not even really that "new." But it's probably too much to expect Siegler to do research.)

Siegler's not the only person I've seen celebrating Facebook's move as an end to the open web (by which we mean HTML/CSS/JavaScript), although he's certainly one of the most gleeful (he also thinks Facebook should shut down its website entirely, in case you were wondering about the general quality of his business advice). Of course, you'll notice that these hot takes are not themselves published to Facebook, or to a native app somewhere. If that were the case, no one would have heard of them. They get posted to the web, where they can get linked and shared across social media, and read regardless of platform or hardware.

Even without bringing in ideology, the "native apps instead of the web" idea faces a tremendous number of problems once you think about it for more than thirty seconds. How do new publications like The Toast or FiveThirtyEight get traction when you have to manually download them from an app store to read them? If they get popular through the web first, why bother transitioning to native? Nobody makes "reader" apps for desktops and laptops, so what happens to them? Does anyone really want to write long-form on Facebook, a service that only recently added an "edit post" button? Who cares: punditry is hard, let's go shopping!

It's easy to pick on shallow people who think Instant Articles represent a grand utopian state, but I'd also like to celebrate people who are actually building in the opposite direction. This weekend, I went to a Knight-Mozilla code convening in Portland, which included a ticket to the Write the Docs convention. I'm not a documentation writer, really, so most of the conference went right past me. But the keynote on the second day was by Ward Cunningham, inventor of the wiki, and it was a fascinating look at what it would really look like to reinvent the web.

For the past few years, Cunningham's been working on "federated" wikis, which store content on multiple servers instead of using a single database. If you link to another person's wiki page and you want to change the content, you fork it a la GitHub, and edit the new local copy (which remembers its origin) right there in your browser. You can also drag-and-drop content into a new page, if you want to merge text from multiple sources. It's pretty neat. The talk isn't online, but he did another presentation at New Relic that covers similar material.

Parts of Cunningham's pitch can sound kind of crankish, although I'm sure I would have said the same thing for the original wiki. But other parts are really interesting, such as the idea of creating a forkable attribution trail for data and reporting. Federated wikis are another attempt to decentralize and diversify the Internet, instead of walling it up behind a corporation's control. And a lot of it is inspired by the main insight that wikis had in the first place: on a wiki, you create a page by first creating a hyperlink to it, then following that link.

As a result, even though users don't directly type HTML into the window, this form of authorship is profoundly of the web, and it's the kind of thing that's never going to exist in a native application somewhere. The fact that Cunningham can experiment with adding new markup features in JavaScript — and even turn a browser into a new kind of hypertext reader, with a different interface paradigm — is what the web platform does best. Like water, it can flow, or it can crash.

And that's why it's ultimately ridiculous to act like some pre-cached news articles are the herald of a new media age. What the web gives us — a freedom for anyone to publish to everyone, a wildly cross-platform programming environment, a rich multimedia container where your plain-text article can live right next to my complex news app — is not going to be superseded by a bunch of native apps, and certainly not by Facebook. Instant Articles won't even be the future of news. Future of the web? Give me a break.

May 7, 2015

Filed under: journalism»new_media

Mayhem

I'm not sure what it says about Seattle that one of our biggest yearly events is a May Day protest that wreaks havoc across big chunks of downtown. What even competes? The Blue Angels shut down traffic on the bridges once a summer, and there's Seafair downtown, but reception to those is always pretty muted in my experience. International Workers Day is the big show.

The May Day map I put together to track our reporters has quickly become one of my favorite projects for the Seattle Times. It was real-time, it posed interesting data challenges, and it really exploited our <leaflet-map> element more than anything else we've done so far. While I also wrote a post on it for our dev blog at work, I wanted to call out a couple of other interesting points here.

The most interesting technical detail here is the use of the Twitter streaming API, which delivers nearly instant updates for a search query (either on users, geolocation, or keyword). Node is a great fit for this, with the twitter module offering a readable stream that fires events as new items come in. Our scaffolding, on the other hand, is not intended to be run as a long-running process, and I didn't really want to retrofit Grunt into a general-purpose application framework. I ended up writing the Twitter part of the app as a completely separate, continuous Node process, which then dumped out its data as a JSON file and started a standard build/deploy in a child process whenever new data arrived.
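The streaming half of that process is short enough to sketch here. This isn't the map's actual source, and the credentials, account list, and helper function are placeholders, but it's roughly the shape of it:

```javascript
// Separate, continuously-running Node process: listen to the stream, save
// each new tweet, then kick off a standard build/deploy in a child process.
var Twitter = require("twitter");
var spawn = require("child_process").spawn;

var client = new Twitter({
  consumer_key: process.env.TWITTER_KEY,
  consumer_secret: process.env.TWITTER_SECRET,
  access_token_key: process.env.TWITTER_TOKEN,
  access_token_secret: process.env.TWITTER_TOKEN_SECRET
});

// follow our reporters' accounts (the IDs here are a placeholder list)
var reporterIDs = ["12345", "67890"];

client.stream("statuses/filter", { follow: reporterIDs.join(",") }, function(stream) {
  stream.on("data", function(tweet) {
    saveTweet(tweet, function() { // hypothetical helper: write to the local store
      spawn("grunt", ["publish"], { stdio: "inherit" });
    });
  });
  stream.on("error", console.error);
});
```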

To store the tweets from the stream, the application uses a SQLite3 database, since that's the easiest way to query and update data. A static data store like this is not something that we've used on projects before, and I don't know if I'd re-use it again. Using SQLite itself is always a pleasure, but reliance on a local database means that I couldn't just clone the project from home and update it when I wanted to change the coloring on Saturday morning. Using cloud storage, like Google Sheets, has a lot of advantages for distributed and remote development.
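For what it's worth, the storage layer is almost nothing. The schema below is a guess at the shape rather than the project's actual table, but the sqlite3 module makes it easy to upsert tweets as they arrive and dump them back out for the JSON file that the build reads:

```javascript
// Store each tweet keyed by ID, then pull everything out for the JSON export.
var sqlite3 = require("sqlite3");
var db = new sqlite3.Database("tweets.db");

db.serialize(function() {
  db.run("CREATE TABLE IF NOT EXISTS tweets (id TEXT PRIMARY KEY, body TEXT)");
});

var saveTweet = function(tweet, callback) {
  db.run("INSERT OR REPLACE INTO tweets VALUES (?, ?)",
    tweet.id_str, JSON.stringify(tweet), callback);
};

var loadTweets = function(callback) {
  db.all("SELECT body FROM tweets ORDER BY id", function(err, rows) {
    if (err) return callback(err);
    callback(null, rows.map(function(row) { return JSON.parse(row.body); }));
  });
};
```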

Working with Twitter itself is an interesting problem, because it's clear that the company has no real coherent plan for outside developers. Over the last few years, the API for user access has been increasingly limited and broken as Twitter tried to drive third-party clients (which don't show ads and don't make money) out of existence. On the other hand, if you are building a Twitter bot, which our map effectively is, it remains a pretty useful and effective service for pub/sub communication. I'm not sure it says very much about Twitter's strategy that they'll let bots run wild while ordinary people are locked into a client monoculture, but that's honestly the least of my frustrations with them at this point.

All that said, I would personally use this stack again in a heartbeat. Twitter is not the highest social traffic source for the Seattle Times, but almost all of our reporters use it anyway, and it's much nicer to program against compared to Facebook. The impending dilemma is if (or when) Twitter will decide to switch to a "curated" (read: algorithmically-tampered) stream a la Facebook's timeline. When that happens, its value to me as a news developer drops basically to nothing, because I won't be able to guarantee message delivery any more.

Which brings me to the most boring but probably most profound lesson of this project: we need a better build server. The May Day map ran on a box in the office we've affectionately dubbed "Cronda," which also currently tests our traffic alert application and previously powered the Seahawks fan map. In each of those cases, we've jury-rigged together a solution for pulling the latest source code and running builds at regular intervals (the cron Grunt task), but it's not optimal. We can't check on those builds remotely, or restart them if something goes wrong.
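To give you an idea of how jury-rigged we're talking, a "cron" task doesn't amount to much more than the sketch below (an approximation, not our actual scaffolding code): pull the repo, rebuild, wait, repeat, with no monitoring and no remote restart.

```javascript
// Inside the Gruntfile: a task that never exits, pulling and republishing on
// a fixed interval. If anything goes wrong, nobody finds out until they walk
// over to the machine.
grunt.registerTask("cron", "Rebuild and publish on an interval", function() {
  this.async(); // never called back, so the task just keeps running

  var run = function() {
    grunt.util.spawn({ cmd: "git", args: ["pull"] }, function(pullError) {
      if (pullError) return grunt.log.error(pullError);
      grunt.util.spawn({ cmd: "grunt", args: ["publish"] }, function(buildError) {
        if (buildError) grunt.log.error(buildError);
      });
    });
  };

  run();
  setInterval(run, 1000 * 60 * 15); // every fifteen minutes
});
```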

At some point, we'll probably move our builds from Cronda to an EC2 box that we can access remotely, but doing so doesn't honestly solve the problem — it just makes it less fragile. Eventually, I think we'll need to look into a real build monitor like Jenkins, which can automate deployments, track error logs, and respond to queries in our team chat. I'm not entirely looking forward to that, since it feels like a very heavyweight solution, but the more complex our applications get, the more a little up-front rigor will pay off.

April 23, 2015

Filed under: journalism»professional

Winner of 10

It has been a busy week, but I wanted to take a moment to recognize my colleagues at The Seattle Times for their tremendous work, resulting in a 2015 Pulitzer Prize for breaking news journalism. Their coverage of the Oso landslide was clear, comprehensive, and accurate, and followup work continues to this day (including one of my first projects for the paper). It's very cool to be working in a newsroom that's the winner of 10 Pulitzer Prizes over the years, and I'm looking forward to being here when we win #11.

April 16, 2015

Filed under: journalism»new_media»data_driven

Empty-handed

We have sent several people from the newsroom, including myself, to journalism conferences over the last few months. Most conferences are about 50% inspirational and 50% crap (tilted heavily crap-wards in the keynotes), but you meet good people and you get to see the nuts and bolts behind the scenes of some of the best interactive news stories published.

It's natural to come back from a conference with a kind of inferiority complex, and equally easy to conclude that we're not making similar rich presentations because we don't have the cool tools that those other (richer, more tech-savvy) newsrooms have. We too, according to this train of thought, need to be coding elaborate visualization generators and complicated new CMS features — or, as Ryan Pitts from Mozilla said to me last weekend at the Society for News Design workshop, "let's not rest until every paper in the country has built its own charting application."

I think better newsroom tech is important, but let's play devil's advocate for a bit with an unpopular hypothesis: developing tools for the editors and reporters at your newspaper is a waste of your time, and a distraction from the journalism you should be doing.

Why a waste of your time? Partly because newsroom tools get a lot less uptake than you probably think they do (certainly less than we'd hope they would). I've written a lot of internal applications in my time, and they've never been particularly popular, because most reporters and editors don't care. They're too busy doing journalism to use your solution (which is as it should be), and they are probably not big on technology anyway (I have a lot of reporters who can't use Excel, which pains me greatly). Creating tools for reporters is, most of the time, attacking the problem at the wrong point.

For many newsrooms, that wasted time will end up being twice as expensive, because development resources are scarce and UI is hard. Building a polished, feature-filled chart generator that the average journalist can use will take at least a couple of programmer months, which is time those developers aren't working on stories and visualizations that readers want. Are you willing to sacrifice that time, especially if you can't guarantee that it'll actually get used? That's a pretty big gamble, unless you have the resources of the New York Times. You're probably better off just going with an off-the-shelf package, or even finding a simpler solution.

I don't think it's a coincidence that, for all the noise people make about the new data journalism startups like Vox and FiveThirtyEight, 99% of their chart output does not come from a fancy tool or a complex interactive: they post JPEGs. And that's fine! No actual reader has ever complained about having to look at a picture of a graph instead of a souped-up vector rendering (in Vox's case, they're too busy complaining that the graph was stolen from someone else, but that's another story). JPEG is a perfectly decent solution when it comes to simply telling the story across the entire web platform — in fact, it's a great embodiment of "do the simplest thing that works," which has served me well as a guiding motto in life.

So, as a rule of thumb: don't build charting libraries. Don't build general-purpose databases. Don't build drag-and-drop slideshows. Leave these things to other people, who have time and energy to build them for a living. Does this mean you shouldn't create tools at all? No, but the target audience should be you, the news developer, and other semi-technical newsroom staff like the web producers. In other words, make technology for the people who will actually use it, and can handle something that's not polished to a mirror sheen.

I believe this is the big strength of web components, and one reason I'm so bullish on them at the Seattle Times. They're not glossy, end-user products, but they are a great balance between power and accessibility for people with a little technical skill, and they're very fast to build. If the day comes when we do choose to invest in a slicker newsroom app, we can leverage them anyway, the same way that the NYT's fancy chart designers are all based on the developer-oriented D3 library.
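To give a sense of the scale involved, a bare-bones custom element is just a prototype and a registration call. The sketch below uses the v0 registerElement API browsers shipped at the time; the tag name and behavior are invented for illustration, not one of our actual components.

```javascript
// Register a <demo-widget src="..."> element: a producer can drop the tag
// into a story template without touching any of the code behind it.
var proto = Object.create(HTMLElement.prototype);

proto.createdCallback = function() {
  var src = this.getAttribute("src");
  this.innerHTML = "<div class='placeholder'>Loading " + src + "...</div>";
  // fetch the data and render the real content here
};

document.registerElement("demo-widget", { prototype: proto });
```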

In the meantime, while I would consider an anti-tool stance a "strong opinion weakly held," I think there's a workable philosophy there. These days, I feel two concerns very strongly (outside of my normal news/editorial production, of course): how to get the newsroom to make use of our skills, and how to best use the limited developer resources we have. A "no tools" guideline is not an absolute rule, but it serves as a useful heuristic to weed out the kinds of projects that might otherwise take over our time.

April 3, 2015

Filed under: tech»education

What's advanced?

Next week, I'll start teaching ITC 298 at Seattle Central College. It's the first special topics class I've actually gotten to teach (a previous class on tooling couldn't get enough students), and it's on a topic near and dear to my heart: namely, intermediate to advanced JavaScript. But what does that actually mean? If web development were a medieval blacksmith shop, what would you need to know in order to move beyond "apprentice" and hit the road as a journeyman? What is it that separates a jQuery dabbler from a serious front-end coder?

The honest answer is "about five years of practice," but that's not the whole story (nor is it something I can take to a curriculum planning committee). I think there are two areas of growth that students need to be aware of, and that I'm planning on stressing for this quarter: tooling and functional programming.

Tooling

Rebecca Murphey recently wrote a revised baseline for front-end developers, just as I was putting together my syllabus. It's still a great guide to the state of the art, even if I don't agree with everything in it. But I would add that modern JavaScript applications tend to be built on three pillars that students need to understand:
  • Server code written in NodeJS,
  • Client code built on top of some kind of MVC, and
  • A build system (also in Node) that weaves the two together.

Learning their way around this trio is going to be a huge challenge for my students, most of whom still live in a world where individual files are edited and sent to the browser as-is (possibly with a PHP include or two). They haven't built applications with RESTful routes, or written client-side code in a module system. SCC hasn't typically stressed those techniques, which is a shame.

I'm happy to be the person who forces students into the deep end, but I do want to make sure they have a good, structured experience. Throwing everything at students is a quick way to make sure that they get overwhelmed and give up (not a hypothetical scenario: the previous ITC 298 class had exactly that problem, and ended poorly). To ease them in, we'll try building the following sequence of exercises in our directed lab sessions:

  1. simple Node script with callbacks (sketched just after this list)
  2. scraper using async/events/streams
  3. basic site using Hapi.js
  4. authenticated site with session-handling
  5. Grunt scripts to build JavaScript with Browserify
  6. simple MVC with Backbone

The progression starts with Node, and then builds out gradually so that each step conceptually depends on a previous lesson. Along the way, students will learn a lot about how to structure an application across all three of these environments — which brings us to the second, and probably harder, focus of the class.

Functional programming

Undoubtedly, students need to have a better grasp of JavaScript fundamentals ("the good parts") if they're to be considered intermediate front-end devs. But how do we break down those fundamentals? We could concentrate on inheritance and object orientation, or think of everything as MVC. Maybe we could spend a bunch of time on modules, or deep-dive into how to write high-performance DOM code. Murphey recommends learning ES2015 features, like fat arrows and destructuring. All of these are important pieces of the JavaScript toolkit, but they are more like patterns available for use than core theoretical knowledge.

The heart of the language, however, remains the humble function. Where other languages have modules, private/static properties, blocks, async/await, classes, and list comprehensions, JavaScript just has first-class functions. Astonishingly, this has actually worked out pretty well, but it means you do really need to understand them in order to read and write code — particularly on Node, where callbacks are still the preferred method of handling concurrency.
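The textbook illustration of what that means (nothing specific to this class, just the classic example) is a closure doing the work that private state and classes do in other languages:

```javascript
// A function that returns functions: `count` is effectively private, because
// the only way to touch it is through the closures that share its scope.
var makeCounter = function() {
  var count = 0;
  return {
    increment: function() { count += 1; },
    value: function() { return count; }
  };
};

var hits = makeCounter();
hits.increment();
hits.increment();
console.log(hits.value()); // 2
console.log(hits.count);   // undefined: no direct access from outside
```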

Typically, functional coding has been one of the more difficult concepts to introduce in my basic class: we spend some time about halfway through the quarter implementing Array.forEach(), and we wire up a lot of event listeners. Map/reduce usually overwhelms most students, however, and call/apply gets a lot of blank stares. These are literally unavoidable in a well-written, modern JavaScript codebase: we have to find a way to approach them if students are to reach the next stage of their professional development.
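For reference, this is the territory in question: a hand-rolled forEach to demystify callbacks, the same data pushed through map and reduce, and the call/apply invocation that reliably draws blank stares (the numbers are made up):

```javascript
// Re-implementing forEach makes it obvious that a callback is just a function
// handed to another function.
var forEach = function(list, callback) {
  for (var i = 0; i < list.length; i++) {
    callback(list[i], i, list);
  }
};

var scores = [88, 72, 95];
forEach(scores, function(score) { console.log(score); });

// map transforms each item; reduce folds the list down to a single value
var curved = scores.map(function(score) { return score + 5; });
var total = curved.reduce(function(sum, score) { return sum + score; }, 0);
console.log(total); // 270

// call and apply just invoke a function with an explicit `this` and arguments
console.log(Math.max.apply(null, curved)); // 100
```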

The hard part of writing for Node is that you must embrace some degree of functional programming: the continuation-passing style used in the core APIs makes it inescapable. But the great part of writing for Node (especially as the first section of the course) is that it's actually a fairly gentle ramp-up. Callback functions are not that far from event listeners, and the ubiquitous async library softens the difficulty of mapping an array functionally. Between the two, there's no shortage of practice, since there's literally no other way to write a Node program.
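A small example of what that ramp looks like in practice (the filenames are placeholders): the core fs API hands you continuation-passing style whether you like it or not, and async.map takes care of the bookkeeping when you want to apply it across a whole list.

```javascript
// Read several files "in parallel"; the final callback only fires once every
// individual fs.readFile continuation has completed (or one has errored out).
var fs = require("fs");
var async = require("async");

var files = ["a.txt", "b.txt", "c.txt"];

async.map(files, fs.readFile, function(err, contents) {
  if (err) return console.error(err);
  // contents is an array of Buffers, in the same order as `files`
  console.log("Read", contents.length, "files");
});
```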

That's the other strategy behind the tooling sequence I've laid out. We'll start from Node, and then build toward increasingly complex functional constructs, like modules, constructors, and promises. By the time students have finished their final projects, they should be old hands at callbacks and closures, which will serve them well in almost any language.

The plan

Originally, I joked with people that we'd spend six weeks reading JavaScript: the Good Parts and then six weeks building a chat application. Two students would have enjoyed this, and the rest would have followed me home and murdered me in my sleep. But the two-part plan laid out in this post hopefully marries the practical and the theoretical in a way that will help students grow. They'll learn about the intricacies of JavaScript's functional quirks, but they'll do so by building real applications to solve real problems.

The specifics of this quarter are still a little bit in flux, and will likely remain so, since I think it's good to be flexible the first time teaching a class. But if you're interested in following along, feel free to check out the class repo, which contains the syllabus, supporting materials, and example code so far. Issues and pull requests are also welcome!
