
November 5, 2014

Filed under: journalism»new_media

Election Elements

You might have heard that there was an election this last week. Like every news organization, The Seattle Times had a live results page, powered by a Node-based scraper. It did pretty well: we had no glitches with pulling results, and the response has been solid. It also generated the source data for the print edition. Oh, and we put bunting on the front page, which is not something you get to do every day.

Behind the scenes, however, that results page has another interesting feature: as far as I'm aware, it's the first use of Web Components (at least, the custom elements part) in production by a news organization. Each of the Washington maps on the page is a custom-built <svg-map> element, which handles loading the image document and provides a set of convenience methods for manipulating the map once it's available.

SVG is one of those technologies that I really want to like, but that has always been a total pain to actually use. It's an annoying format to author, doesn't seem to actually save any space compared to bitmap images, and has a ton of edge cases even in "standard" browsers (for example, Chrome will forget the state of an SVG document inside an object tag if that tag or its parents are set to display: none). Wrapping it up in a component that would manage its lifecycle and quirks for me just seemed like a no-brainer.

To create the component, I used Andrea Giammarchi's registerElement() shim instead of Polymer's polyfill layer — Giammarchi's script only shims the custom element portion of Web Components, but it works all the way back to IE9 and (more importantly) is only 2KB. On top of that, I used RSVP.js to create a quick shared cache for SVG source documents, ICanHaz for my templating, and a custom module called Savage to do SVG class/style manipulation.
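
For a sense of what the component side looks like, here's a simplified sketch of registering an element with the v0 registerElement() API that the shim provides. The lifecycle callbacks (createdCallback and friends) are the real API, but the loader and method bodies below are illustrative, not the actual <svg-map> source:

    // minimal SVG loader that returns a promise (the real component leans on
    // RSVP.js and a shared cache; a bare promise shows the idea)
    function fetchSVG(url) {
      return new RSVP.Promise(function(resolve, reject) {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", url);
        xhr.onload = function() {
          var doc = new DOMParser().parseFromString(xhr.responseText, "image/svg+xml");
          resolve(document.importNode(doc.documentElement, true));
        };
        xhr.onerror = reject;
        xhr.send();
      });
    }

    var proto = Object.create(HTMLElement.prototype);

    proto.createdCallback = function() {
      var self = this;
      // expose a promise that resolves once the SVG document is in the DOM
      this.ready = fetchSVG(this.getAttribute("src")).then(function(svg) {
        self.appendChild(svg);
        return self;
      });
    };

    // convenience method: walk every <path> in the loaded document
    proto.eachPath = function(callback) {
      var paths = this.querySelectorAll("path");
      for (var i = 0; i < paths.length; i++) {
        callback(paths[i]);
      }
    };

    document.registerElement("svg-map", { prototype: proto });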

From the outside, however, you don't need to know any of that. Instead, the interface is simple:

  1. Write a map element into the page, with a src attribute pointing to the SVG file you want to load. Put your tooltip template inside the tag.
  2. Attach callbacks to the element's ready promise to run code once the image is on the page.
  3. Use the eachPath method on the element to do painting, and set the onhover callback to pass in data for templating in the tooltip.

Using these maps, in other words, is basically just like using a regular element, if regular elements had a DOM API that wasn't written by psychopaths. All their complexity is tucked away inside, and what they present externally is clean, simple, and self-contained. The map element does 80% of what Pro Publica's Landline does, and I'd argue it does it better.
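
Put together, using the element looks something like this in practice. The src attribute, ready promise, eachPath method, and onhover callback are the real interface described above; the data and exact callback signatures here are just for illustration:

    <svg-map src="maps/washington-counties.svg">
      <!-- tooltip template lives inside the tag -->
      <div class="tooltip">{{name}}: {{percent}}%</div>
    </svg-map>

    <script>
    // placeholder results, keyed by path ID
    var results = { king: { name: "King", percent: 62 } };

    var map = document.querySelector("svg-map");
    map.ready.then(function() {
      // paint each county path based on the results data
      map.eachPath(function(path) {
        path.setAttribute("class", results[path.id] ? "reporting" : "no-results");
      });
      // hand data to the tooltip template on hover
      map.onhover = function(path) {
        return results[path.id];
      };
    });
    </script>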

As a developer, I'm really excited by the potential of these new custom elements. I had used them at ArenaNet to build the new Guild Wars 2 trading post, but those components were tightly integrated with the in-game interface and only needed to work in a single browser. This is the first time I've used them in a wider ecosystem, and they worked like a charm.

But as a library consumer, and particularly as a harried newsroom dev, I think web components have tremendous potential to make complex behavior way easier to build and train for. Take the aforementioned Landline, for example: wouldn't it be nice to simply include a script tag (or an HTML import) and then be able to write <landline-map> tags into the page, with an attribute pointing to a CSV or a Google Sheet containing the necessary data? Or consider Pym, NPR's responsive iframe library that's so great I forked and rewrote big chunks of it. Right now, using Pym on the parent page requires including the script, adding a dummy element, and then initializing the script — why shouldn't it just be <pym-embed> instead?
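
To make the Pym example concrete, here's the difference in miniature. The pym.Parent() call is roughly how the library works on the parent page today; the <pym-embed> tag is the hypothetical component version, which doesn't exist yet:

    <!-- today: include the script, add a dummy element, then initialize -->
    <div id="graphic"></div>
    <script src="pym.js"></script>
    <script>
      var parent = new pym.Parent("graphic", "https://example.com/child.html", {});
    </script>

    <!-- the wished-for component version -->
    <pym-embed src="https://example.com/child.html"></pym-embed>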

Distributing libraries not as modules or loose scripts, but as chunks of new HTML functionality, has the potential to radically change how we create new content on the web in the future. Newsrooms, which are always under pressure and often consume "pre-made" tools for interactive elements like timelines and galleries, are a perfect use-case for Web Components. After this election experience, I'm planning to lean heavily on them whenever possible, and I'm hoping other people will as well.

October 21, 2014

Filed under: journalism»investigation

Loaded with lead

I'm very proud to say that "Loaded with lead," a Seattle Times investigation into the ways that gun ranges poison their customers and workers, went live this weekend. I worked on all four interactives for this project, as well as doing the header design and various special effects. We'll have a post up soon on the developer blog about those headers, but what I'd like to talk about today is one particular graphic — specifically, the string-of-pearls chart from part 2.

The data underlying the pearl chart is a set of almost 300 blood tests. These are not all tests taken by range workers in Washington, just the ones that had to be reported after exceeding the safe threshold of 10 micrograms per deciliter. Although we know who some of the tested workers are, most of them are identified only by an anonymous patient ID and the name of their employer. My first impulse was to simply toss the data into a scatter chart, but as is often the case, that first impulse proved ill-advised:

  • Although the tests are taken from a ten-year period, for any given employer/employee the tests tend to be more compact in timeframe, which makes it tough to look at any series without either losing the wider context, or making it impossible to see the individual tests.
  • There aren't that many workers, or even many ranges, with lots of test results. It's hard to draw a trend when the filtered dataset might be composed of only a handful of points. And since within a range the tests might be from one worker or from many, they can't really be meaningfully compared.
  • Since these tests are only of workers that exceeded the safe limit, even those trends that can be graphed do not tell a good story visually: they usually show a high exposure, followed by gradually lowered lead levels. The impression given is that gun ranges are becoming safer, but the truth is that workers with hazardous blood lead levels undergo treatment and may be removed from the high-lead environment, resulting in lowered test results but not necessarily a lead-free workplace. It's one of those graphs that's "technically correct," but is actually misleading.

Talking with reporters, what emerged was that the time dimension was not really important to this dataset. What was important was to show that there was a repeated pattern of negligence: that these ranges posted high numbers repeatedly, over long periods of time (in several cases, more than five years). Once we discard a strict time axis, a lot more interesting options open up to us for data visualization.

One way to handle this would be with a traditional box and whiskers plot, which shows the median and variation within a statistical set. Unfortunately, box plots are also wonky and weird-looking for most readers, who are not statisticians and would not know a quartile if it offered them a grilled cheese sandwich. So one prototype stripped the box plot down to its simplest form (probably too simple): a bar spanning the lowest to the highest test result for each range, with each individual test result marked by a line inside that bar.

This version of the plot was visually interesting, but it had flaws. It made it easy to see the general level of blood tests found at each range, and to compare gun ranges against each other, but it didn't show concentration. Since a single tick mark was shown within the bar no matter how many test results fell at a given level, there was little visual difference between two employers with the same range of test results, even if one employer's results clustered at the top of that range and the other's at the bottom. We needed a way to show not only the level, but also the distribution, of results.

Given that the chart was already basically a number line, with a bar drawn from the lowest to the highest test result, I removed the bar and replaced the tick marks with circles that were sized to match the number of test results at each amount. Essentially, this is a histogram, but I liked the way that the circles overlapped to create "blobs" around areas of common test results. You can immediately see where most of the tests fall for each employer, but you don't lose sight of the overall picture (which in some cases, like the contractors working outside of a ventilation hood at Wade's, can be horrific — almost three times the amount considered dangerous by the CDC). I'm not aware of anyone else who's done this kind of chart before, but it seems too simple for me to be the first to think of it.
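
The drawing logic behind the pearls is simple enough to sketch: bucket the test results by value, then draw a circle at each value along a number line, sized by how many tests landed there. This is only an illustration of the technique, with made-up numbers and scales, not the code from the published graphic:

    // one row of "pearls" for a single employer
    function drawPearls(ctx, tests, y, maxLevel, width) {
      // bucket the blood-test results (micrograms per deciliter) by value
      var counts = {};
      tests.forEach(function(level) {
        counts[level] = (counts[level] || 0) + 1;
      });

      Object.keys(counts).forEach(function(level) {
        var x = (level / maxLevel) * width;        // position on the number line
        var radius = 3 * Math.sqrt(counts[level]); // size the circle by its count
        ctx.beginPath();
        ctx.arc(x, y, radius, 0, Math.PI * 2);
        ctx.fillStyle = "rgba(200, 60, 60, 0.6)";  // translucent, so overlaps form blobs
        ctx.fill();
      });
    }

    var ctx = document.querySelector("canvas").getContext("2d");
    drawPearls(ctx, [12, 12, 15, 22, 22, 22, 48], 40, 60, 600);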

I'd like to take a moment here to observe that pretty much all data visualization comes down to translating information into a form that our visual systems are evolved to quickly understand. There's a great post on how that translation functions here, with illustrations that show where each arrangement sits on a spectrum of perceived accuracy and meaning. It's not rocket science, but I think it's a helpful perspective: I'm just trying to trick your visual cortex into absorbing a table's worth of data at a glance.

But what I've been trying to stress in the newsroom from this example is less technical, and more about how much effective digital journalism comes from the simple process of iteration and self-evaluation. We shouldn't expect to come up with a brilliant interactive on the first try every time, or even any of the time. I think the string-of-pearls is a great example of that, going from a visualization that was confusing and overly broad to a more focused graphic statement, thanks to a lot of evolution and brainstorming. It was exhausting work, but it's become my favorite of the four visualizations for this project, and I'm looking forward to tweaking it for future stories.

September 2, 2014

Filed under: journalism»new_media

Fanning the Flames

Over the weekend we soft-launched our Seahawks Fan Map project. It's a follow-up to last year's model, which was built on a Google Fusion Tables map. The new one is better in almost every way: it lets you locate yourself based on GPS, provides autocomplete for favorite players, and clusters markers instead of throwing 3,000 people directly onto the map. It's also built using a couple of interesting technical choices: Google Apps Script and "web components" in jQuery.

Apps Script

Like my other news apps, the fan map is a static JavaScript application built on our news template. It ships all its data and code up to S3, and has no dynamic component. So how do we add new people to the map, if there's no server backing it up? When you fill out the fan form, where does your data go?

The answer, as with many newsrooms, is heavy use of Google Sheets as an ad-hoc CMS. I recently added the ability for our news app scaffolding to pull from Sheets and cache the data locally as JSON, which is then available to the templating and build tasks. Once every few minutes, a cron job runs on a machine in our newsroom, grabs the latest data, and uploads a fresh copy to the cloud. Anyone can use a spreadsheet, so it's easy for editors and writers to update the data or mark a row as "approved," "featured," or "blocked."

Getting data from the form into the sheet has a more interesting answer. Last year's map embedded a Google Forms page, which is the source of many of its UI sins: the forms can't be styled, they don't offer any advanced form elements, and they can't be made responsive. Nobody really likes Google Forms, but they're convenient, so people use them all the time. We started from that point on this project, but kept running into features that we really wanted (particularly browser geolocation) that they wouldn't support, so I went looking for alternatives.

Most people use Google Apps Script as a JavaScript equivalent for Excel macros, but it has a little-known and extremely useful feature: an Apps Script can be published as a "web app" if it has a doGet() function to handle incoming requests. From that function, it can use any of the existing Apps Script APIs, including access to spreadsheets and (even better) the Maps geocoder. The resulting endpoint isn't fully CORS-compliant, but it's good enough for JSONP, making it possible to write a custom form and still submit to Sheets for storage. I've posted a sample of our handler code in this gist.
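
The gist has our actual handler, but the general shape of an Apps Script endpoint like this is easy to sketch. The spreadsheet ID and field names below are placeholders, and the error handling is omitted:

    // sketch of an Apps Script "web app" handler: lock the sheet, geocode the
    // submission, append a row, and answer as JSONP
    function doGet(e) {
      var lock = LockService.getScriptLock();
      lock.waitLock(10000); // serialize writes so simultaneous users don't collide

      var sheet = SpreadsheetApp.openById("SPREADSHEET_ID").getSheetByName("Responses");

      // geocode the submitted address with the built-in Maps service
      var geo = Maps.newGeocoder().geocode(e.parameter.address);
      var loc = geo.results.length ? geo.results[0].geometry.location : { lat: "", lng: "" };

      sheet.appendRow([
        new Date(),
        e.parameter.name,
        e.parameter.address,
        loc.lat,
        loc.lng
      ]);

      lock.releaseLock();

      // the endpoint isn't fully CORS-friendly, so respond as JSONP
      var payload = e.parameter.callback + "(" + JSON.stringify({ status: "ok" }) + ")";
      return ContentService.createTextOutput(payload)
        .setMimeType(ContentService.MimeType.JAVASCRIPT);
    }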

Combining the web endpoint for Apps Script with our own custom form gave us the best of both worlds. I could write a form that had pretty styling, geolocation, autocomplete, and validation, but it could still go through the same Google Docs workflow that our newsroom likes. Through the API, I could even handle geocoding during form submission, instead of writing a separate build step. The speed isn't great, but it's not bad either: most of the request time is spent getting a lock on the spreadsheet to keep simultaneous users from overwriting each other's rows. Compared to setting up (and securing) a server backend, I've been very happy with it, and we'll definitely be using this for other news apps in the future.

Web jQuery Components

I'm a huge fan of Web Components and the polyfills built on top of them, such as Polymer and Angular. But for this project, which does not involve putting data directly into the DOM (it's all filtered through Leaflet), Angular seemed like overkill. I decided that I'd try to use old-school technology, with jQuery and I Can Haz templates, but packaged in a component-like way. I used AMD to wrap each component up into a module, and dropped them into the markup as classes attached to placeholder elements.

The result, I think, is mixed. You can definitely build components using jQuery — indeed, I'm very happy with how readable and clean these modules are compared to the average jQuery library — but it's not particularly well-suited for the task. The resulting elements aren't very well encapsulated, don't respond to attribute values or changes, and must manually handle data binding and events in a way that Polymer and Angular safely abstract away. Building those capabilities myself, instead of just using a library that provides them, doesn't make much sense. If I were starting over (or as I consider the additional work we'll do on this map), it's very tempting to switch out my jQuery components for Angular directives or Mozilla's X-Tags.

That said, I'm glad I gave it a shot. And if you can't (or are reluctant to) switch away from jQuery, I'd recommend the following strategies, with a rough sketch of how they fit together after the list:

  • Use a build process like AMD or Browserify that lets you write your element templates as .html files and bundle them into your components, then load those modules from your main scripts.
  • Delegate your event listeners, which forces you to write self-contained code that can handle re-templating, instead of attaching them directly.
  • Only communicate with external code via the data- attributes, in order to enforce encapsulation. Data attributes will automatically populate jQuery's internal storage, so they're a great way to feed your state at startup, but they're not bound two-way: you'll have to update them manually to reflect internal changes.
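
Put together, a component module following those strategies might look roughly like this. The module, template, and class names are invented for the example, not pulled from the fan map's source:

    // AMD module wrapping a jQuery "component": template bundled via the text
    // plugin, events delegated, initial state pulled from data- attributes
    define(["jquery", "ich", "text!./fan-card.html"], function($, ich, templateHTML) {

      ich.addTemplate("fanCard", templateHTML);

      var FanCard = function(element) {
        this.$el = $(element);
        // jQuery reads data- attributes into its internal storage at startup
        this.state = this.$el.data();
        // delegated listener, so re-templating doesn't orphan the handler
        this.$el.on("click", ".show-story", this.toggleStory.bind(this));
        this.render();
      };

      FanCard.prototype.render = function() {
        this.$el.html(ich.fanCard(this.state));
      };

      FanCard.prototype.toggleStory = function() {
        this.state.expanded = !this.state.expanded;
        this.render();
        // not two-way bound: reflect internal changes back onto the attribute manually
        this.$el.attr("data-expanded", this.state.expanded);
      };

      return FanCard;
    });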

Still to come

The map you see today is only the first version — this is one of the few news projects I plan to maintain over an extended period. As we get more information, we'll add shaded layers for some of the extra questions asked on the form, so that you can see the average fan "lifespan" per state, or find out which players are favorites in countries around the world. We'll also feature people with great Seahawks stories, giving them starred icons on the map that are always displayed. And we'll use the optional contact info to reach out to a "fan of the week," making this both a fun interactive and a great reporting tool. I hope you enjoy the map, and if you're a Seahawks fan, I'll see you there!

August 4, 2014

Filed under: journalism»new_media

Angular Momentum

This week, my interactive work for the Seattle Times examines the bidding wars that are part and parcel of living in one of the fastest-growing cities in the country. It's got everything you need to be horrified by your local real estate market: high prices, short days-on-market, and a search function to see how dire it is next door. It is also the third or fourth interactive that I've built with Angular this year (source code here). There aren't a lot of people building news apps with Angular, which I find amazing: if your goal is to surface data on a deadline, I'd argue it's the best option out there.

Let's review what Angular brings to the table. At the most basic level, it's a library for doing two things:

  • Data-binding: Any data that you attach to the Angular scope will be used to update the page, and vice-versa — if users change the page (via form elements or other inputs), it'll automatically update your data.
  • Custom HTML: Angular directives let you create new HTML elements and behaviors, so that instead of loading a plugin for an auto-completion input, you can just write <auto-complete>.

As a news developer, this means that building visualizations based on data can be incredibly fast: you attach it to the scope, annotate your HTML, and you're all set. A smart, sortable table is less than 100 lines of code, and uses regular JavaScript objects instead of "collections" or other heavy classes. Meanwhile, directives keep your HTML clean and free of "div soup," and because Angular batches its DOM updates instead of touching the document piecemeal, those updates are fast.
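
As a rough illustration of how little code that takes, here's a sortable, filterable table in Angular 1.x. The fields and numbers are placeholders, not the actual bidding-war data:

    <div ng-app="bidding" ng-controller="TableController">
      <input ng-model="search" placeholder="Filter by neighborhood">
      <table>
        <tr>
          <th ng-click="sortBy('neighborhood')">Neighborhood</th>
          <th ng-click="sortBy('price')">Price</th>
          <th ng-click="sortBy('daysOnMarket')">Days on market</th>
        </tr>
        <tr ng-repeat="home in homes | filter:search | orderBy:sortKey:reverse">
          <td>{{home.neighborhood}}</td>
          <td>{{home.price | currency}}</td>
          <td>{{home.daysOnMarket}}</td>
        </tr>
      </table>
    </div>

    <script src="angular.js"></script>
    <script>
      angular.module("bidding", []).controller("TableController", function($scope) {
        // plain JavaScript objects on the scope drive the whole table
        $scope.homes = [
          { neighborhood: "Ballard", price: 550000, daysOnMarket: 8 },
          { neighborhood: "Columbia City", price: 430000, daysOnMarket: 12 }
        ];
        $scope.sortKey = "price";
        $scope.sortBy = function(key) {
          $scope.reverse = ($scope.sortKey === key) ? !$scope.reverse : false;
          $scope.sortKey = key;
        };
      });
    </script>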

By contrast, when I look at code written in D3 (seemingly the most popular library for doing news visualizations), I see an entirely different set of priorities:

  • Elements and styles are written in the JavaScript code, instead of in the HTML/CSS where you'd expect them to be.
  • Values are loaded into the page via lengthy chained functional expressions, which makes them harder to inspect than data sitting on the Angular scope.
  • Performance in non-Chrome browsers tends to be terrible, because D3 spends a huge amount of time rewriting the DOM directly instead of batching its changes and abstracting the document away (and because of SVG, which is a dog in Firefox and IE).

After years of debugging spaghetti code in jQuery, this design seems both familiar and ominous, particularly the lack of templating and the long call chains. I've written my fair share of apps this way, and they tend to sprawl out into an unstructured, unmaintainable mess. That may not be a problem for the New York Times, which has more budgetary and development resources than I'll ever have. But as the (for now) only developer in the Seattle Times newsroom, I need to be able to respond instantly to feedback from designers, editors, and reporters. One of my favorite things to hear is "we didn't expect a change so fast!" Angular gives me the agility I need to iterate rapidly, try things out, and discard what doesn't work in favor of what does.

Speed and structure are good reasons to use Angular in a newsroom, but there's another, less obvious incentive. Angular is basically training wheels for Web Components: although it lacks the Shadow DOM, it includes equivalents for custom elements and HTML imports. It's a short hop from Angular to libraries like Polymer, and from there to a whole world of deadline-friendly tooling and reuse. Make no mistake, this is the future of web development, and it can't get here soon enough: I'd love to be able to simply send off an <interactive-feature> tag to the web producers, and I imagine they'd appreciate it too. The Google Web Components tags would be a similar godsend.
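
The custom-element half of that equivalence is an element-restricted directive. Here's a sketch of what an <interactive-feature> directive could look like; the tag itself is the wished-for example from above, not an existing library:

    // Angular 1.x stand-in for a custom element: a directive restricted to
    // element form, reading its configuration from an attribute
    angular.module("newsroom", []).directive("interactiveFeature", function() {
      return {
        restrict: "E",       // used as a tag: <interactive-feature src="...">
        scope: { src: "@" }, // copy the src attribute onto an isolated scope
        template: '<div class="interactive" ng-include="src"></div>'
      };
    });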

For me, this makes using Angular a no-brainer. It's fast, it's effective, it's great for visualizations, and it's forward-thinking. It shocks me that more people haven't seen its advantages — but then, given the way that most newsroom hackers seem to think of the browser as "that embarrassing thing that loads my server code," it probably shouldn't be surprising.

July 23, 2014

Filed under: journalism»new_media

The Landslide

We've just released a new interactive I've been working on for a couple of weeks, this time exploring the Oso landslide earlier this year. Our timeline (source) shows... well, I'll let the intro text explain it:

The decades preceding the deadly landslide near Oso reflect a shifting landscape with one human constant: Even as warnings mounted, people kept moving in. This interactive graphic tells that story, starting in 1887. Thirteen aerial photographs from the 1930s on capture the geographical changes; the hill is scarred by a succession of major slides while the river at its base gets pushed away, only to fight its way back. This graphic lets you go back in time and track the warnings from scientists; the failed attempts to stabilize the hill; the logging on or near the unstable slope; and the 37 homes that were built below the hill only to be destroyed.

The design of this news app is one of those cases where inspiration struck after letting the idea percolate for a while. We really wanted to showcase the aerial photos, originally intending to sync them up with a horizontal timeline. I don't particularly care for timelines — they're basically listicles that you can't scan easily — so I wasn't thrilled with this solution. It also didn't work well on mobile, and that's a no-go for my Seattle Times projects.

One day, while reading through the patterns at Bocoup's Mobile Vis site, it occurred to me that a vertical timeline would answer many of these problems. On mobile, a vertical scroll is a natural, inviting motion. On desktop, it was easier to arrange the elements side-by-side than stacked vertically. Swapping the axes turned out to be a huge breakthrough for the "feel" of the interactive — on phones and tablets that support inertial scrolling for overflow (Chrome and IE), users can even "throw" the timeline up the page to rapidly jump through the images, almost like a flipbook. On desktop, the mouse wheel serves much the same purpose.

On a technical level, this project made heavy use of the app template's ability to read and process CSV files. The reporters could work in Excel, mostly, and their changes would be seamlessly integrated into the presentation, which made copy editing a cinch. I also added live reload to the scaffolding on this project — it's a small tweak, but in group design sessions it's much easier to keep the focus on my editor for tweaks while the browser refreshes on another monitor for feedback. I used Ractive to build the timeline itself, but that was mostly just for ease of templating and to get a feel for it — my next projects will probably return to Angular.
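
The CSV handling itself isn't shown here, but the general idea is easy to sketch in Node: read the file reporters edit in Excel, turn each row into an object, and hand the result to the templates. This is a simplified stand-in (it doesn't even handle quoted commas), not the scaffolding's actual loader:

    var fs = require("fs");

    // naive CSV-to-objects conversion; a real loader would use a proper parser
    function loadCSV(path) {
      var lines = fs.readFileSync(path, "utf8").trim().split("\n");
      var headers = lines.shift().split(",");
      return lines.map(function(line) {
        var cells = line.split(",");
        var row = {};
        headers.forEach(function(header, i) {
          row[header.trim()] = cells[i] ? cells[i].trim() : "";
        });
        return row;
      });
    }

    // the resulting array of objects feeds straight into the page templates
    var events = loadCSV("data/timeline.csv");
    fs.writeFileSync("build/timeline.json", JSON.stringify(events, null, 2));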

All in all, I'm extremely happy with the way this feature turned out. The reporting is deep (in a traditional story, it would probably be at least 5,000 words), but we've managed to tell this story visually in an intuitive, at-a-glance format, across a range of devices. Casual readers can flip through the photos and see the movement of the river (as well as the 2014 devastation), while the curious can dig into individual construction events and warning signs. It's a pretty serious chunk of interactive storytelling, but we're just getting started. If you or someone you know would like to work on projects like this, feel free to apply to our open news app designer and developer positions.

June 19, 2014

Filed under: journalism»new_media

Move Fast, Make News

As I mentioned last week, the project scaffolding I'm using for news apps at the Seattle Times has been open sourced. It assumes some proficiency with NodeJS, and is built on top of the grunt-init command.

There are many other newsrooms that have their own scaffolding: NPR has one, and the Tribune often builds its projects on top of Tarbell. Common threads include the ability to load data from CSV or Google Sheets, minifying and templating HTML with that data, and publishing to S3. My template also does those things, but with some slight differences.

  • It runs on top of NodeJS. This means, in turn, that it runs everywhere, unlike Tarbell, which will not work on Windows.
  • It has no dependencies outside of itself. I find this helpful: where the NPR template has to call out to an external lessc command to do its CSS processing, I can just load the LESS module directly.
  • It is opinionated, but flexible. It assumes you're using AMD modules for your JavaScript, and starts its build at predetermined paths. But it comes with no libraries, for example: instead, Bower is set up so that each project can pull only what it needs, and always have the latest versions.

What do you get from the scaffolding? Out of the box, it sets up a project folder that loads local data, feeds it to powerful templating, starts up a local development server, and watches all your files, rebuilding them whenever you make changes. It'll compile your JavaScript into a single file, with practically no work on your part, and do the same for your LESS files. Once you're done, it'll publish it to S3 for you, too. I've been using it for a project this week, and honestly: it's pretty slick.
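
For the curious, a heavily trimmed Gruntfile along these lines looks something like the following. The plugin choices are illustrative; the actual template wires up its own tasks for data loading, templating, and S3 publishing:

    module.exports = function(grunt) {
      grunt.initConfig({
        less: {
          build: { files: { "build/style.css": "src/css/seed.less" } }
        },
        requirejs: {
          build: { options: { baseUrl: "src/js", name: "main", out: "build/app.js" } }
        },
        watch: {
          styles: { files: ["src/css/**/*.less"], tasks: ["less"] },
          scripts: { files: ["src/js/**/*.js"], tasks: ["requirejs"] }
        },
        connect: {
          dev: { options: { base: "build", port: 8000 } }
        }
      });

      grunt.loadNpmTasks("grunt-contrib-less");
      grunt.loadNpmTasks("grunt-contrib-requirejs");
      grunt.loadNpmTasks("grunt-contrib-watch");
      grunt.loadNpmTasks("grunt-contrib-connect");

      // build everything, serve it locally, and rebuild on changes
      grunt.registerTask("default", ["less", "requirejs", "connect", "watch"]);
      // a publish task (e.g. via grunt-aws-s3) pushes the build folder to S3
    };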

If you're working on newsroom development, or static app development in general, please feel free to check it out, and I'd appreciate any feedback you might have.

June 10, 2014

Filed under: journalism»new_media

Top Companies 2014

My first interactive feature for the Seattle Times just went live: our Top Northwest Companies features some of the most successful companies from the Pacific Northwest. It's not anything mind-blowing, but it's a good start, and it helped me test out some of the processes I'm planning on using for future news applications. It also has a few interesting technical tricks of its own.

When this piece was originally prototyped by one of the web producers, it used an off-the-shelf library to do the parallax effect via CSS background positions. We quickly found out that it didn't let us position the backgrounds effectively so that you could see the whole image, partly because of the plugin and partly because CSS backgrounds are a pain. We thought about just dropping the parallax, but that bugged me. So I went home, looked around at how other sites (particularly Medium) were accomplishing similar effects, and came up with a different, potentially more interesting solution.

When you load the page in a modern desktop browser now, there aren't actually any images at all. Instead, there's a fixed-position canvas backdrop, and the images are drawn to it via JavaScript as you scroll. Since these are simple blits, with no filtering or fancy effects, this is generally fast enough for a smooth experience, although it churns a little when transitioning between two images. I suspect rendering those sections would be faster if I updated the code to draw only the visible portion of each image, or rescaled the images beforehand, but considering that it works well enough on a Chromebook, I'm willing to leave well enough alone.
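
Stripped down to its core, the approach looks something like this: a fixed canvas behind the page, and a scroll handler that blits whichever section's image is on screen, offset by the scroll position. This is a sketch of the idea with placeholder markup, not the production code:

    var canvas = document.querySelector("#backdrop"); // position: fixed, behind the content
    var ctx = canvas.getContext("2d");
    var sections = [].slice.call(document.querySelectorAll(".company"));

    // preload one image per section, referenced from a data- attribute
    var images = sections.map(function(section) {
      var img = new Image();
      img.src = section.getAttribute("data-image");
      return img;
    });

    function render() {
      sections.forEach(function(section, i) {
        var bounds = section.getBoundingClientRect();
        // skip sections that aren't in the viewport at all
        if (bounds.bottom < 0 || bounds.top > window.innerHeight) return;
        // shift the image more slowly than the page scrolls for the parallax feel
        var offset = bounds.top * -0.3;
        ctx.drawImage(images[i], 0, offset, canvas.width, canvas.height);
      });
    }

    window.addEventListener("scroll", function() {
      requestAnimationFrame(render);
    });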

The table at the bottom of the page is written as an Angular app, and is kind of a perfect showcase for what Angular does well. Wiring up the table to be sortable and filterable was literally only a few minutes of work. The sparklines in the last column are custom elements, and Angular's filters make presenting formatted data a snap. Development for this table was incredibly fast, and the performance is really very good. There are still some issues with this presentation, such as the annoying sticky header, but it was by far the most painless part of the development process.

The most important part of this graphic, however, is not in the scroll or in the table. It's in the workflow. As I've said before, one of the things that I really learned at ArenaNet was the importance of a good build process. You don't build games like Guild Wars 2 without a serious build toolchain, and the web team was no different. The build tool that we used, Dullard, was used to compile templates, create CSS from LESS, hash filenames for CDN busting, start and stop development servers, and generate the module definitions for our JavaScript loader. When all that happens automatically, you get better pages and faster development.

I'm not planning on using Dullard at the Times (sorry, Pat!) only because I want to be able to bring people onboard quickly. So I'm going with the standard Grunt task runner, but breaking up its tasks in a very Dullard-like way and using it to automate as much as possible. There's no hand-edited code in the Top Companies graphic — only templates and data merged via the build process. Reproducing these stories, or updating them later, is as simple as pulling the repo (or, in this case, both repos) and running the Grunt task again.

That simplicity also extends to the publication process. Fast deployment means fast development and fewer mistakes hanging out in the wild when bugs occur. For Seattle Times news apps, I'm planning to host them as flat files on Amazon S3, which is dirt-cheap and rock-solid (NPR and the Chicago Tribune use the same model). Running a deployment is as simple as grunt publish. In testing last night, I could deploy a fixed version of the page faster than people could switch to their browser and press refresh. As a client-side kind of person, I'm a huge fan of the static app model anyway, but the speed and simplicity of this solution exceeded even my expectations.

Going forward, I want all my news apps to benefit from this kind of automation, without having to copy a bunch of files around. I looked at Yeoman for creating app skeletons, but it seemed like overkill, so I'm setting up a template using Grunt's project scaffolding, with all the boilerplate already installed. Once that's done, I'll be able to run one command and create a blank project for news apps that includes LESS compilation, JavaScript concatenation, minification, templating, and S3 publishing. Automating all of that boilerplate means faster startup time, and that means more projects to make the newsroom happy.

As I work on these story templates, I'll be open-sourcing them and sharing my ideas. The long and the short of it is that working in a newsroom is unpredictable: crazy deadlines, no requirements to speak of, and wildly different subject matter. This kind of technical architecture may seem unrelated to the act of journalism, but its goal is to lay the groundwork so that there are no distractions from the hard part: telling creative news stories online. I want to worry about making our online journalism better, not debugging servers. And while I don't know what the final solution for that is, I think we're off to a good start.

May 15, 2014

Filed under: journalism»professional

In These Times

On Monday, I'll be joining the Seattle Times as a newsroom web developer, working with the editorial staff on data journalism and web projects there. It's a great opportunity, and I'm thrilled to be making the shift. I'm also sad to be leaving ArenaNet, where I've worked for almost two years.

While at ArenaNet, I never really got to work on the kinds of big-data projects that drew me to the company, but that doesn't mean my time here was a loss. I had my hands in almost every guildwars2.com product, ranging from the account site to the leaderboards to the main marketing site. I contributed to a rewrite of some of the in-game web UI as a cutting-edge single-page application, which will go out later this year and looks tremendously exciting. A short period as the interim team lead gave me a deep appreciation for our build system and server setup, which I fully intend to carry forward. And as a part of the basecamp team, I got to build a data query tool that incorporated heatmapping and WebGL graphing, which served as a testbed for a bunch of experimental techniques.

I also learned a tremendous amount about development in this job. It's possible to argue that ArenaNet is as much a web company as it is a game company, with a high level of quality in both. ArenaNet's web team is an (almost) all-JavaScript shop, and hires brilliant specialists in the language to write full-stack web apps. It's hard to think of another place where I'd have a chance to work with other people who know as much about the web ecosystem, and where I'd be exposed to really interesting code on a daily basis. The conversations I had here were often both infuriating and educational, in the way the best programming discussions should be.

Still, at heart, I'm not a coder: I'm a journalist who publishes with code. When we moved to Seattle in late 2011, I figured that I'd never get the chance to work in a newsroom again: the Times wasn't hiring for my skill set, and there aren't a lot of other opportunities in the area. But I kept my hand in wherever possible, and when the Times' news apps editor took a job in New York, I put out some feelers to see if they were looking for a replacement.

This position is a new one for the Times — they've never had a developer embedded in their newsroom before. Of course, that's familiar territory for me, since it's much the same situation I was in at CQ, where I was hired to be a multimedia producer even though no one there had a firm idea of what "multimedia" meant, and that ended up as one of the best jobs I've ever had. The Seattle Times is another chance to figure out how embedded data journalism can work effectively in a newsroom, but this time at a local paper instead of a political trade publication: covering a wider range of issues across a bigger geographic area, all under a new kind of deadline pressure. I can't wait to meet the challenge.

February 12, 2014

Filed under: journalism»industry

Last Against the Wall

I think most of us can imagine the frustrating experience of sharing a newspaper with the New York Times op-ed page. It must burn to do good reporting work, knowing that it'll all be lumped in with Friedman's Mighty Mustache of Commerce and his latest taxi driver. Let's face it: the op-ed section is long overdue for amputation, given that there's an entire Internet of opinion out there for free, and almost all of it is more coherent than whatever white-bread panic David Brooks is in this week.

But even I was surprised by the story in the New York Observer last week, detailing just how bad the anger between the journalists and the pundits has gotten:

The Times declined to provide exact staffing numbers, but that too is a source of resentment. Said one staffer, “Andy’s got 14 or 15 people plus a whole bevy of assistants working on these three unsigned editorials every day. They’re completely reflexively liberal, utterly predictable, usually poorly written and totally ineffectual. I mean, just try and remember the last time that anybody was talking about one of those editorials. You know, I can think of one time recently, which is with the [Edward] Snowden stuff, but mostly nobody pays attention, and millions of dollars is being spent on that stuff.”

First of all, the Times still runs unsigned editorials? And it takes more than ten people to write them? Sweet mother of mercy, that's insane. I thought the only outlet these days with an actual "from the editors" editorial was the Onion, and even they think it's an old joke. You might as well include an AOL keyword at the end.

And yet it's worth reading on, once you pick your jaw up off the floor, to see the weird, awkward cronyism that's not just the visible portions of the op-ed page, but its entire structure. Why is the editorial section so bad? In part, apparently, because it's ruled by the entitled, petty son of a former managing editor, who reports directly to the paper's publisher (and not the executive editor) because of a family debt. Could anything be more appropriate? As The Baffler notes:

What a perfect way to boil tapioca. Dynasties kill flavor. A page edited by a son because dad was kind of a big deal is a page edited with an eye to status and credentials. Hey, Friedman must be good—he won some Pulitzers. That’s a prize, you see, that Pulitzer thing. Big, big prize. We put it up on the wall. (Pause) Anyway, ready for a cocktail?

The Observer argues that the complaints from the newsroom at large are professional, not budgetary: reporters are angry about shoddy work being published under the same masthead as their stories. But it's hard to imagine that money doesn't enter into it at all. A staff of ten or more people, plus hundreds of thousands of dollars for each of the featured op-ed writers, would translate into serious money for journalism. It would hire a lot of staff, pay for a lot of equipment. You could use it to give interns a living wage, or institute a program for boosting minority participation in media. Arguably, you could put it into a sack and sink it into the Hudson, and still end up ahead of what it's currently funding.

Of course, most papers don't maintain a costly op-ed section, so it's not like this is an industry-wide problem. I don't know that I would even care, normally, beyond the sense of schadenfreude, except for the fact that it's such a perfect little chunk of journalistic mismanagement: when finances get strained, the cuts don't get made from politically-connected fiefdoms, or from upper-level salaries. They get taken from the one place that should be protected, which is the newsroom itself.

Call me an anarchist, but the most depressing part of the whole debate is that it's focused on how big the op-ed budget should be, or how it should be run, instead of whether it should exist at all. What's the point of keeping it around? Or, at the very least, why populate it with the same bland, predictable voices every day? One of the things I respect about the New York Times is the paper's forays into bucking conventional wisdom, from the porous subscription paywall to its legitimately innovative interactive storytelling. There's a lot of romance and tradition in the newsroom, but the op-ed page shouldn't be a part of it. I say burn it to the ground, and let's see what we can grow on the ashes.

January 10, 2014

Filed under: journalism»new_media

App-y New Year

At the end of January, I'll be teaching a workshop at the University of Washington on "news apps," thanks to an offer from the outgoing news app editor at the Seattle Times. It's a great opportunity, and a chance to revisit my more editorial skills. From the description:

This bootcamp will introduce students to the basic components of creating news applications, which are data-powered digital stories tied together through design, programming and journalism. We’ll walk through all the components of creating a news application, look at industry examples of what works and what doesn’t, and learn the basic coding skills required to build a news app.

Sounds cool, but it's still a wide-open field — "data-powered digital stories" covers a huge range of approaches. What do you even teach, and how do you do it in two 4-hour workshops?

It turns out that for almost any definition of "news app," there's an exception. NPR's presidential election board is a data-powered news app, but it's not interactive beyond an auto-update. Snow Fall is certainly a news app, but it's hard to call it "data-powered." How can we craft a category that includes these, but also includes traditional, data-oriented interactives like The Atlantic's Netflix Genre Generator and the Seattle Times mayoral race comparison? More importantly, how do we get young journalists to be able to think both expansively and productively about telling stories online?

That said, I think there is, actually, a unifying principle for news apps. In fact, I think it cuts to the heart of what draws me to web journalism, and the web in general. News apps are journalistic stories told via hypermedia — or, to put it simply, they have links.

A link seems like a small thing after years on the web, so it's good to revisit just how fundamentally groundbreaking they are. Links can support or subvert their anchor, creating new rhetorical devices of their own. At the most basic level, they contextualize a story. More abstractly, they create non-linearity: users explore a news app at their own pace and with their own priorities, rather than the direct stream of narrative from a text story.

A link is a simple starting place. But it starts us down a path of thinking about more complicated applications and usage. I'm fond of saying that an interactive visualization is constructed in many layers, with users peeling open the onion as far as they may want. If we're thinking in terms of other hypertext documents (a.k.a., the TV Tropes Rabbit Hole) from the start, we're already prepared when readers use similar interaction patterns to browse data-based interactives — either by shallowly skipping around, or diving in depth for a specific feature.

By reconceptualizing news apps as being hypermedia instead of a specific technology or group of technologies, such as mapping or graphing, introducing students to web storytelling gets a lot easier — particularly since I won't have time to teach them much beyond some basic HTML and CSS (in the first workshop) and a little scripting (in the second).

It also leaves them plenty of room to think creatively when presenting stories. I'd love for budding news app developers to be as interested in wikis and Twine as they are in D3 and PostGIS. Most importantly, I'd love for an appreciation of hypertext to leak into their writing in general, if only to reduce the number of print die-hards in newsrooms around the country. You don't have to end up a programmer to create new, interesting journalism that's really native to the web.
