
December 19, 2013

Filed under: movies»television»elementary

The Watson Problem

The difficulty in making a Sherlock Holmes adaptation for American television is that we've already got three or four of them. Hyper-observant detectives are a dime a dozen, from The Mentalist to Monk. Arguably, Psych is just Holmes and his deductive skills with an added dose of arrested development (and I say that as someone who enjoys Psych at its fluffiest). Dule Hill's Gus even serves as a Watson, but reduced to a pharmaceutical rep instead of a doctor to match his detective friend's lack of ambition. The BBC's Sherlock owes Psych a debt for the visual style illustrating the deductive process, although I doubt they'd ever admit it.

I don't envy the people who decided, after the British version aired to wide acclaim, to make another Sherlock Holmes show. That's some tough competition. But I've been watching the first season of Elementary, and I have to say I'm enjoying it. The cast is growing on me, I like the lack of romantic angst, and the infrequent references to the original stories (inasmuch as I can catch them, not being a die-hard fan) are often worth a chuckle.

The biggest problem that Elementary faces is Watson — specifically, figuring out what she's supposed to bring to the team. As played by Lucy Liu, Joan Watson is an ex-surgeon who initially serves as Sherlock's live-in addiction counselor. When the terms of that job run out partway through the first season, Sherlock offers to groom her as a detective-in-training: someone who can take on his methods and become an equal part of the sleuthing consultancy.

Unfortunately, this is where the show's writers seem to have run out of steam. They know where they want this Watson to end up, and they've told us about it repeatedly, but they don't know how to get her there. She's not shown doing much studying, as such, and Holmes mentions that she doesn't read his research. As a result, Liu's Watson ends up either solving minor b-plot mysteries, dropping medical clues, or providing a convenient anchor toward which Sherlock can toss exposition. It's possible she's learning by osmosis, but this hardly provides a reason why we should care about her character arc.

It's interesting to see how the BBC Sherlock has taken a different tack with its version of the character. The British Watson, played by Martin Freeman, leans heavily on the actor's likeability and finely-tuned air of irritation to create a companion who partners with Sherlock for the adrenaline rush of it. Freeman's Watson is muscle and heart: he humanizes Sherlock and provides support. Ultimately, the relationship between the pair on the BBC show is one of friends. They enjoy going on adventures together. They have a similar restlessness. But Sherlock doesn't need Watson to solve crimes. When the show begins, he's doing relatively fine without him, although Watson's blogging certainly helps build Sherlock's reputation as a detective.

Joan Watson, on the other hand, is interested in being Sherlock — or, at least, being a consulting detective armed with his deductive methods. And in contrast to the Cumberbatch version, Jonny Lee Miller's Holmes is not nearly as self-sufficient. He's abrasive without being charming, dependent on his father for income, and recovering from a drug problem that destroyed his ability to work. Lance Mannion has commented that this weakens Holmes, but I'm not sure that I agree. Given that the original Holmes was a bit of a Mary Sue (a great observer, master of disguise, amateur boxer and stick-fighter, chemist, polyglot, and former spy) I don't miss seeing a version of the character that's less omni-capable.

Elementary wisely forgoes flashy zoom cuts to "show" how Sherlock examines a scene. They don't seem to have developed much of a substitute, unfortunately, so too often the show falls back on simply having characters explain the mystery to us. But I think this is in part because the mysteries are honestly second priority to where Elementary actually wants to focus: on the relationship between Holmes and Watson, with two possibilities for its ultimate outcome. On the one hand, it's hinted that this version of the great detective is really the result of two people working together — that Holmes and Watson together are the equivalent of the BBC Sherlock. Alternately, we're watching the origin story for a second Sherlock embodied in Joan Watson: one that can avoid the mistakes of drug abuse and arrogance, and benefit from her richer life experience as a surgeon.

The danger in speculating about a TV show this way, I've found, is the tendency to write about the show you wish you were watching, not the one that's actually onscreen. It's an easy mistake to make. I remember being mystified by John Rogers' glowing commentary on Jericho, which does not at all resemble the mediocre show that aired under that name, until I realized that really we weren't watching the same program — that the version Rogers was watching was being filtered through all the cool stuff he could have done with its premise.

And so it may be with Elementary. I'm only three-quarters through the first season, and even I will admit that it's uneven at best. It's possible I'm just a sucker for training montages. But the idea that Watson is not just a point of view character or a sounding board, but the latest heir to a legacy of nigh-uncanny sleuthing... I have to admit, that's like catnip to me. I've got high hopes for the second season, and I'd forgive a lot of liberties with the source material to watch it happen.

December 10, 2013

Filed under: gaming»design

Policing Procedurals

Even if I'm sticking with Steam for most of my gaming, our new PS4 did get me interested in Warframe, the free-to-play shooter that's available on Playstation and PC. I don't normally care for free-to-play, if only because I feel guilty for never buying anything, but I liked the central conceit of Warframe: procedurally-generated levels and highly-mobile Mass Effect-style combat. In retrospect, I probably should have been more skeptical. To understand why, we have to look back at how shooters have been built over the last twenty years.

There was a time, way before Halo and before franchises like Battlefield roamed the earth, when one of the main selling points of a first-person shooter was the quantity of unique weapons that it brought to the table: an actual arms race, peaking with Duke3D, which (for all its flaws) had some clever joke guns to go with the ubiquitous pistol/shotgun/chaingun trio. I'm not saying this was a better time, or that they were better games, but there was definitely a sense that the genre was about creative destruction, in the same way that fighting games are about combo systems and special attacks.

Then id Software built a monster for competitive deathmatch: Quake and its successors had an incredibly bland set of weapons, because in "serious" multiplayer the goal is to streamline everything except moving and shooting. This was the second refinement of shooter design, and it focused on the levels themselves, but as topology instead of as setting. Players concentrated on learning the levels so that they could plot a path that would keep them supplied, while denying pickups to the other players. A good Quake or Unreal Tournament player knew the game's weapons and how to aim, but more importantly they knew where to go, and when. Navigation became the mark of quality for a deathmatch bot.

Since then, these tendencies have mellowed as the possibility of more complex interactions and narratives has become available. Environments are built more for realism and story, weapons are more traditional and not usually why you buy the game. Which brings us to Warframe, which has basically none of these things. There's hardly any story, the "levels" are randomly generated from a series of tilesets, and the weapons are part of the free-to-play grind: either buy a new gun with real money, or spend a lot of time crafting one within the game's economy. Unlike the Mass Effect and Gears of War titles it resembles, there's no explicit cover system, but players do have a much wider range of movement options than a typical shooter: there are slides, flips, and wall runs available through various key combos.

If Warframe's computer-generated levels were any good, this would be a different post. Good levels would give players a way to put their acrobatic movement skills to good use, rewarding people who have learned the parkour system and can improvise in response to new environments. But the random generator mostly builds long, boring hallways connecting wide-open, pre-designed rooms, none of which require any particular skill outside of strafing and taking cover behind walls. Since players can't learn the levels and their flow, they can't optimize or get better at moving through them. And since new weapons require an investment of serious cash or time, almost everyone's using the same rifle and the same melee weapon, which means you never see anything that makes you want to spend any money or time in the first place.

The irony of this problem is that someone already got the formula right for doing procedural FPS games, and they did it by almost exactly reversing Warframe's formula. Borderlands has hand-built levels and enemy placement, combined with randomized weapon generation: each gun consists of components assembled onto a set of base bodies, which vary by "manufacturer" with certain preset tendencies and aesthetics. For example, Maliwan guns always inflict status effects (poison, fire, etc), while Jakobs weapons are Western-themed and can often fire as fast as the player can mash the button. Within those simple parameters, however, the results when you pull the trigger can vary wildly.
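
That component-assembly idea is easy to sketch. Here's a toy version; the manufacturer tendencies and stat numbers below are my own inventions for illustration, not Gearbox's actual tables:

```javascript
// Toy Borderlands-style weapon generator: a random base body plus
// manufacturer tendencies, with randomized stats layered on top.
var manufacturers = {
  Maliwan: { element: true, fireRateBonus: 0 },  // always has a status effect
  Jakobs: { element: false, fireRateBonus: 3 }   // no elements, fires faster
};

var elements = ["fire", "corrosive", "shock"];
var bodies = ["pistol", "rifle", "shotgun"];

function pick(list, rng) {
  return list[Math.floor(rng() * list.length)];
}

function generateWeapon(rng) {
  rng = rng || Math.random;
  var maker = pick(Object.keys(manufacturers), rng);
  var traits = manufacturers[maker];
  return {
    maker: maker,
    body: pick(bodies, rng),
    damage: 10 + Math.floor(rng() * 40),
    fireRate: 1 + traits.fireRateBonus + Math.floor(rng() * 5),
    element: traits.element ? pick(elements, rng) : null
  };
}
```

Even this trivial version produces guns that feel distinct at a glance, which is the whole trick: the randomness lives inside hand-tuned constraints, rather than replacing them.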

The result is a game that combines the two old-school driving forces of FPS design — clever level design and weapon variety — with the collector's urge that powers massive multiplayer games (Borderlands even borrows the color-coded quality markings from World of Warcraft, making it easy to evaluate a weapon drop in an instant). The innovation is not proceduralism — games like Diablo have long offered that — but figuring out how to balance it with the formula for a replayable and rewarding shooter. As someone who almost totally lacks a collection instinct, but loves the classic FPS genre, Borderlands 2 hits the sweet spot with remarkably few missteps (it's also surprisingly smart and funny, which is a welcome change).

I don't think procedural generation is impossible for first-person games — indeed, I think it's likely to have a bright future, particularly as web games mature and optimize for delivery size — but it illustrates just how difficult the challenge is likely to be for anyone who attempts it. For all that people talk about the genre as if it's just a collection of bro-heavy manshooters, there is undeniably a huge amount of craft that goes into the fundamental mechanics. As Kevin Cloud notes in Dan Pinchbeck's analysis of Doom (now 20 years old!),

Every genre has its real strengths, but in a shooter... if running and shooting is not fun, doesn’t feel natural, doesn’t feel visceral and powerful, then I think you are going to lose out.

Movement in FPS games is not just about how the player transitions from point A to point B, but about all the obstacles and decisions that make up that route. As such, building procedural content is not impossible, but it needs to provide good options for cover, paths for moving between pickups, and unexpected chances to either ambush enemies or be ambushed. Warframe may do this one day, but right now it's failing miserably. In the process, it's showing how little the developers have really thought about how the game they're building fits into the traditions of its genre.

December 4, 2013

Filed under: gaming»hardware»ps4

Press Play

Belle and I were planning on getting a Roku to replace our five-year-old XBox this Christmas, since the games are drying up and it doesn't make any sense to pay for a Live subscription just to watch Netflix and HBO. I still kind of bear a grudge against Sony for the CD rootkit they passed around years ago, but then my employers at ArenaNet bought everyone a PS4 as a holiday bonus. I am, it turns out, not above being a hypocrite when it comes to free stuff.

You can explain a lot about the last three generations of consoles by remembering that, at heart, Microsoft is a software company and Sony is a hardware company. Why did the XBox 360 suffer regular heat failures? Why does the PS Vita interface look like an After Dark screensaver? Our 360 was clearly on the edge of another DVD failure, so I bear them no particular good will. But you have to admit: up to the point that a given XBox malfunctions in one way or another, Microsoft knows how to build a usable operating system. Sony... well, it's not so much a core skill of theirs.

For example, after you turn on the PS4, and after the hundreds of megabytes of updates are done downloading and installing themselves a few times, you're greeted with a row of boxes:

  • Some kind of activity stream box, because people really wanted another Facebook.
  • Every game you've ever played. In my case, one demo I tried and will never touch again.
  • A web browser, which for some inconceivable reason doesn't use the touchpad on the controller, but has to be controlled via the thumbsticks and triggers, like Capcom wrote a game called Devil May Surf.
  • An ad for Sony's video service, which nobody is going to use instead of Netflix, and which can't be removed.
  • An ad for Sony's music service, which nobody is going to use instead of anything, and which can't be removed.
  • Something called the Playroom, which is useless if you don't have the PS4 camera, but can't be removed (there is a theme here).
  • A single item that contains all of your video streaming applications, even though these are probably where most people spend at least 50% of their time these days.
  • Oh, and of course, an item marked "Library" that shows a subset of the same list, I guess in case you want to see them arranged in vertical rows instead of horizontally.

At least the list is in chronological order by use, but only for certain items: no matter how often I open Amazon's video app, it'll always be cordoned off in a submenu, alongside a bunch of other video providers that I haven't actually installed but which Sony really wants me to know about, like Vudu and Crackle.

Apparently I'm a little grumpy about the menus.

Anyway, for us, this is a media player, which means we'd like to have a remote control, but those don't exist for the PS4 yet and it can't use regular IR remotes. The controller layout may make sense to someone who owned a PS3, but it's just baffling to me: why is the button normally used to go backwards assigned here to play/pause duties? To be fair, the XBox never really had a great controller story for DVDs either (both consoles put fast-forward on the triggers, where you're guaranteed to accidentally hit it while setting the controller down), but at least it tried to be consistent with the rest of the OS.

You can pair a smartphone with the PS4, which one would think could be a chance to show custom controls for media, what with the touchscreen and all. You'd be wrong: the PS4 app dedicates 90% of its surface to a swipeable touchpad, apparently on the assumption that the three directional inputs on the actual controller are insufficient.

The whole time you're watching a movie, of course, the controller will glow like some sort of demented blue firefly, which helps the camera (which I don't have) to see where I am (hint: the couch). Since you can't just turn off the LED, I've got the whole controller set to shut itself off after ten minutes. This solves the glow, and keeps the batteries from draining themselves at an alarming rate, but now when I want to actually use the controller for something — say, to pause the movie because our dog has started making that special "I'm going to throw up" face — it interrupts with a bright blue screen, every single time, to ask me who I am. Meanwhile, my movie keeps playing in the background.

This is worth some emphasis: on the XBox, a console where we actually had multiple accounts, each new controller that was activated would either log in as the current user or just kind of wait in "guest" mode until the player actually signed in. On the PS4, a console where we have one account, to which I was already signed in with our only controller 20 minutes ago, Sony needs to know my identity before I can perform the critical, account-bound task of pausing a movie. Meanwhile, the dog is now standing sheepishly in front of a vomit-stained rug.

I'm a little grumpy about the media functions, too.

I'm well aware it's a little ridiculous to gripe this much about a free game system. It's not that the PS4 is a bad machine — it's on par with your average DVD player in terms of usability — but I tend to feel like maybe they should aim a little higher. I'm really hoping that these kinds of fixes will be easy to update, since most of the UI is apparently built using web technology instead of painstakingly coded native widgets.

What's really interesting about comparing consoles from both companies is that the kinds of things I really miss from the XBox (pinned items, Kinect voice commands, good media apps) weren't there from the start. Microsoft has gone through at least three major revisions since they released the 360 in 2005. Even though there have been regressions (and the ads have certainly gotten bigger over time), the overall trend has been for the better — in part because they've been effectively allowed to throw the whole thing away and start over. As far as I can tell, the PS3 was also improved, even if it wasn't reinvented in the same way. It takes a lot of nerve to make sweeping changes like that, as well as a conviction that the physical box is not what you're selling — a philosophy that's well-suited to Microsoft's software background, but that even hardware companies can no longer ignore.

I've been so embedded in a constantly-shifting web environment for so long that I sometimes forget that not everything updates on a monthly basis. Sony will be more conservative than Microsoft, but even they will be rolling out patches to the PS4, many of which will probably address my complaints. We live in a world where you can turn around and find that your DVD player, or your phone, or your browser suddenly looks and acts completely differently. That's great for people like me who thrive on novelty, but it now occurs to me just how disorienting this might be for ordinary people. It may be worth considering whether a little stability might be good for us — even if it means preserving the bad with the good — and whether the technical community might benefit from a little sympathy for users overwhelmed by our love of change.

November 22, 2013

Filed under: tech»coding

Plug In, Turn On, Drop Out

This is me, thinking about plugins for Caret, as I find myself doing these days. In theory, extensibility is my focus for the next major release, because I think it's a big deal and a natural next step for a code editor. In practice, it's not that simple.

Approach #1: Postal Services

Chrome has a pretty tight security model applied to packaged apps, not the least of which is a strict content security policy. You can't run code from outside your application. You can't construct code using eval (that's good!) or new Function (that's bad). You can't add new files to your application (mostly).

Chrome does expose an inter-app messaging system similar to postMessage, and I initially thought about using this to create a series of hooks that external applications could use. Caret would broadcast notifications to registered listeners when it did something, and those listeners could respond. They could also trigger Caret commands via message (I do still plan to add this; it's too handy not to have).
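
The command-triggering half might look something like this: a dispatcher that maps incoming messages onto named editor commands. This is a sketch only; in the real app it would hang off chrome.runtime.onMessageExternal, and the command names here are made up:

```javascript
// Minimal command dispatcher: external apps send { command, args }
// messages, and registered handlers run in response.
var commands = {};

function register(name, fn) {
  commands[name] = fn;
}

// In a packaged app, this would be wired up via
// chrome.runtime.onMessageExternal.addListener(...).
function onMessage(message, sendResponse) {
  var handler = commands[message.command];
  if (!handler) {
    sendResponse({ error: "Unknown command: " + message.command });
    return;
  }
  sendResponse({ result: handler.apply(null, message.args || []) });
}

// Hypothetical commands, purely for illustration:
register("session:open-file", function(path) { return "opened " + path; });
register("editor:word-count", function(text) {
  return text.split(/\s+/).length;
});
```

An external app would then send `{ command: "editor:word-count", args: ["some text"] }` and get a response back, without ever touching Caret's internals directly.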

Writing plugins this way is neatly encapsulated and secure, but it's also going to be intensely frustrating. It would require auditing much of Caret's code to make sure that it's all okay with asynchronous operation, which is not usually the case right now. I'd have to make sure that Caret is sufficiently chatty, because we'd need hooks everywhere, which would clutter the code with broadcast/response blocks. And it would probably mean writing a helper app to serve as a patchboard between applications, and as a debugging tool.

I'm not wild about this one.

Approach #2: Repo, Man

I've been trying to think of a way around the whole inter-app messaging paradigm for about a month now. At the same time, I've been responding to requests for Git and remote filesystem support, which will not be a core Caret feature. For some reason, thinking about the two in close proximity started me thinking along a new track: what if there were a way to work around the security policy using the HTML5 file system? I decided to run some tests.

It turns out this is absolutely possible: Chrome apps can download a script from any server that's whitelisted in their manifest, write that out to the filesystem, and then get a special URL to load that file into a <script> tag. I assume this has survived security audits because it involves jumping through too many hoops to be anything other than deliberate.

The advantages of this approach are numerous. Plugin code would operate directly alongside Caret's source, able to access the same functions and modules and call the same APIs that I use. It would be powerful, and would not require users to publish plugins to the Chrome store as if they were full applications. And it would scale well: all I would need to do is maintain a central index of plugins and provide some helper functions for developers to use when downloading and caching their code.

Unfortunately, it is also apparently forbidden by the Chrome Web Store policies, which state:

Packaged apps should not ... Download or execute scripts dynamically outside a sandboxed environment such as a webview or a sandboxed iframe.

At that point, we're back to postMessage unless I want to be banned from the store. So much for the workaround.

Approach #3: Local hosting

So how can I make plugins work for end users? Well, honestly, maybe I don't. One of the nice things about writing developer tools, particularly oddball developer tools, is that the people using them and wanting to expand on them are expected to have some degree of technical knowledge. They can be trusted to figure out processes that wouldn't necessarily be acceptable for average computer users. In this case, that might mean running Caret as an unpacked app.

Loading Caret from source is not difficult; I do it all the time while I'm testing. Right now, if someone wants to fork Caret and add their own features, that's easy enough to do (and actually, a couple of people have done so already). What it lacks is a simple entry point for people who want to contribute functionality without digging into all the modules I've already written.

By setting up a plugins directory and a little bit of infrastructure, it's possible to reach a middle ground. Developers who really want extra packages can load Caret from source, dump their code into a designated location, and have their code bootstrapped automatically. It's not as friendly as having web store distribution, and it's not as elegant as allowing for a central repo, but it does deliver power without requiring major rewrites.

Working through all these different approaches has given me a new appreciation for insecurity, which sounds funny but it's true. Obviously I'm in favor of secure computing, but working with mobile operating systems and Chrome OS, which strongly sandbox their code, tends to make a person aware of how helpful a few security holes can be, and vice versa: the same openings that allow for easy extension and flexibility are also weak points that an attacker can exploit. At times like this, even though I should maybe know better, that tradeoff seems absolutely worth it.

November 14, 2013

Filed under: tech»coding

For Free Ninety Nine I'll Beat 99 Acts Down

Assuming that the hamster powering the Chrome web store stats is just resting, Caret clicked over to 10,000 installations sometime on Monday. That's a lot of downloads. At a buck apiece, even if only a fraction of those people had bought a for-pay version, that might be a lot of money. So why is Caret free? More importantly, why is it free and open source? Ultimately, there are three reasons:

  1. I feel like I owe the open source community for the value I've gotten from it (basically, everything on the Internet), and this is a way to repay that debt.
  2. Caret isn't really just mine. It's heavily influenced by Sublime, and builds on another open source project for its text processing. As such, it feels awkward to charge money for other people's work, even if Caret's unique code is significant in its own right.
  3. I think I get more value (i.e. job marketability, reputation, skill practice) out of being the person with a chart-topping Chrome app, in the long term, than I would get from sales.

Originally, I had planned on writing about how I reconcile being a passionate supporter of paid writing while giving away my hobby code, but I don't actually see any conflict. I expect a paycheck for freelance coding the same way I expect it for journalism — writing here (and coding Caret) doesn't directly benefit anyone but me, and it doesn't really cost me anything.

In fact, it turns out that both industries also share some uncomfortable habits when it comes to labor. Ashe Dryden writes:

Statistically, we expect that the demographic breakdown of people contributing to OSS would be about the same as the people who are participating in the OSS community, but we aren't seeing that. Ethnicity of computing and the US population breaks down what we would hope to see as far as ethnicity goes. As far as gender, women make up 24% of the industry, according to the same paper that gave us the 1.5% OSS contributor statistic.

Dryden was responding to a sentiment that I've seen myself (and even been guilty of, from time to time): using a person's open source record on sites like GitHub as a proxy for hireability. As she points out, however, building an open source portfolio is something that's a lot easier for white men. We're more likely to have free time, more often have jobs that will pay for open source contributions, and far less likely to be harassed or dismissed. I was aware of those factors, but I was still shocked to see that diversity numbers in open source are so low. We need to do better.

As eye-opening as that is, I think Dryden's middle section centers on a really interesting question: who profits?

I'd argue that the people who benefit the most from the unpaid labor of OSS as well as the underpaid labor of marginalized people in technology are business owners and stakeholders in these companies. Having to pay additional hundreds of thousands or millions of dollars for this labor would mean smaller profit margins. Technology is one of the most profitable industries in the US and certainly could support at least pay equality, especially considering how low our current participation is from marginalized people.

...Open source originally broke us free from the shackles of proprietary software which forced us to "pay to play" and gave us little in the way of choices for customization. Without realizing it, we've ended up in a similar scenario where we are now paying for the development of software that large companies financially benefit from with little cost to them.

Her conclusion — that the community benefits, but it's mostly businesses who boost their profits from free software — should be unsettling for anyone who contributes to open source, and particularly those of us who see it as a way to spread a little socialist good will. For this reason, if nothing else, I'll always prefer the GPL and other "copyleft" licenses, forcing businesses to play ball if they want to use my code.

November 5, 2013

Filed under: tech»innovation

Patently Nonsense

Now that Apple and Microsoft (with a few others) have formed a shell company in order to sue Google over a bunch of old Nortel patents, and IBM has accused Twitter of infringing on a similar set of bogus "inventions," it's probably time again to talk about software patents in general.

Software Patents: Frequently Asked Questions

Q. When it comes to software patents...

A. No.

Q. Hang on. Are they a valid thing? Are any of these suits valid?

A. See above: no, and no.

Q. Why not?

A. Because software patents, as a description of software, are roughly like saying that I could get a patent for the automobile by writing "four tires and an engine" on a piece of paper. They're vague to the point of uselessness, and generally obvious to anyone who thinks about a problem for more than thirty seconds.

Q. Yeah, but so what? Isn't the point of patents to create innovation? Haven't we had lots of innovation in software thanks to their protection?

A. True, the point of patents is to make sure that people let other people use their inventions in exchange for licensing fees, which is supposed to incentivize innovation. In patents for physical inventions, that makes sense: I need to know how to build something if I want to use it as a part of my product, and the patent application will tell me how to do that. But software patents are not descriptive in this way: nobody could write software based on their claims, because they're written in dense legal jargon, not in code.

Let's take one of the patents in question, IBM's "Programmatic discovery of common contacts." This patent covers the case where you have two contact lists for separate people, and you'd like to find out which contacts they have in common. It describes:

  1. Having one local list of contacts,
  2. getting a second from the server,
  3. comparing them, and
  4. presenting a list of hyperlinks to the user.

That is literally the extent of detail present in this patent. Now, do you think you can build a contact matching system by licensing that patent? Or will you do what every other software company on earth has done, and invent the stupid thing yourself (a task that would take a decent coder a couple of days)?
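
For comparison, here is roughly the entire "invention," implemented in a few lines. The data shape is mine, invented for illustration, but it covers every step the patent lists:

```javascript
// The "patented" operation: given a local contact list and one
// fetched from a server, find the contacts they have in common
// and present each match as a hyperlink.
function commonContacts(localList, remoteList) {
  var remoteEmails = {};
  remoteList.forEach(function(c) { remoteEmails[c.email] = true; });
  return localList
    .filter(function(c) { return remoteEmails[c.email]; })
    .map(function(c) {
      return '<a href="mailto:' + c.email + '">' + c.name + "</a>";
    });
}
```

That's a set intersection plus some markup: the kind of thing you'd assign in the first month of a programming course, not something anyone would license.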

As for the innovation argument, it's impossible to prove a negative: I can't show you that it wasn't increased by patents. But consider this: most of the companies that we think of as Internet innovators are strikingly low on patent holdings. Reportedly, Twitter owns nine, and has pledged not to sue anyone over them. Google's only applied for a few, although they have purchased many as a defensive tactic. Facebook is not known for licensing them or taking them to court (indeed, just the opposite: enter the Winklevii). For the most part, patent litigation is limited to companies who are either no longer trailblazers (Microsoft), are trying to suppress market competition (Apple), or don't invent anything at all (Intellectual Ventures). Where's the innovation here?

This American Life actually did a pair of shows on patents, strongly arguing that they've been harmful: companies have been driven out of business by patent trolls. Podcasters have been sued for the undeniably disruptive act of "putting sound files on the Internet." The costs to the industry are in the billions of dollars, and it disproportionately affects new players — exactly the kind of people that patents are meant to protect.

Q. So there's no such thing as a valid software patent?

A. That's actually a really interesting question. For example, last week I was reading The Code Book, by Simon Singh. Much of the book is taken up by the story of public key encryption, which underlies huge swathes of information technology. The standard algorithm was invented by a trio of researchers: Ron Rivest, Adi Shamir, and Len Adleman. As RSA, the three patented their invention and successfully licensed it to software firms all over the world.

The thing about the RSA patent is that, unlike most software patents, it is non-obvious. It's extremely non-obvious, in fact, to the degree that Rivest, Shamir, and Adleman literally spent years thinking about the problem before they invented their solution, based on even more years of thinking on key exchange solutions by Whitfield Diffie, Martin Hellman, and Ralph Merkle. RSA is genuinely innovative work.

It is also work that was independently invented by the espionage community several years before (although obviously they weren't allowed to apply for patents). Moreover, a lot of the interesting parts of RSA are in the math, and math is not generally considered patentable. Nevertheless, if there's anything that would be a worthy software patent, RSA should qualify.
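It's worth noting how little code RSA actually requires — the years of work went into the number theory, not the implementation. Here's a toy sketch using the textbook example numbers (absolutely not usable for real cryptography, where the primes are hundreds of digits long):

```javascript
// Toy RSA at textbook scale (do not use for real secrets).
// Square-and-multiply modular exponentiation using BigInt.
function modpow(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

// Classic textbook keypair: p = 61, q = 53, so n = 3233 and phi = 3120.
// Public exponent e = 17; private exponent d = 2753 (e * d ≡ 1 mod phi).
const n = 3233n, e = 17n, d = 2753n;

const message = 65n;
const ciphertext = modpow(message, e, n);   // encrypt with the public key
const decrypted = modpow(ciphertext, d, n); // recovers the original message
```

The non-obvious part is the proof that decryption inverts encryption (Euler's theorem), and that recovering d from the public key requires factoring n — none of which appears in the code.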

It goes without saying that matching contacts, or showing search ads, or scrolling a list based on touch, are not quite in the same league. And it's clear that patents are not creators of value in software. People aren't buying Windows computers or iPhones because they're patented. They're buying them because they run the software people want, or run on good-looking hardware, or any number of other reasons. In other words: software is valuable because of what it does, not because of how.

Q. So software shouldn't have any protection?

A. Sure, software should be protected. In fact, it already is. Code can be copyrighted, and you can sue people who take it and use it without your permission. But that's a different matter. Copyright says you can sue people who publish your romance novel. Software patents would be like suing anyone who writes about boy-meets-girl.

Q. Okay, fine. What's the answer?

A. Ultimately, we need patent reform. The steps for this are the same as any other reform, unfortunately: writing letters, asking your political candidates, and putting pressure on the government. It's not a sexy recommendation, but it's effective. If we could frame these as another type of frivolous lawsuit, the issue may even get some traction.

Personally, though, I'm trying to vote with my wallet. I'd like to not give money to companies that use patents offensively. Incidentally, this is why I'm cautiously optimistic for Valve's Steam Machines: it's much harder for me to not give money to Microsoft, since I play a lot of games on Windows (and Xbox). A Linux box with a good game library and a not-terrible window manager would make my day.

Finally, there's a community site that can help. Ask Patents is a forum set up by the people who run the Stack Overflow help group for programmers. It takes advantage of a law that says regular people can submit "prior art" — previously-existing examples of the patented invention — that invalidate patents. Ask Patents has successfully blocked bad software patents from being granted in the first place, which means that they can't be used for infringement claims. Over time, success from finding prior art makes patents more expensive to file (because they're not automatically granted), which means fewer stupid ones will be filed and companies will need to compete in the market, not the courtroom.

October 31, 2013

Filed under: fiction»litcrit

Endgame

It is never a bad time to remember that Orson Scott Card is a terrible person. But this week, as millions of people will go to theaters to see a movie based on his most famed work (sorry, Lost Boys), it is good to also remind ourselves: Ender's Game is not a good book. It's barely even a bad one. Consider the following three essays, ranked in descending order of plausibility:

  • Creating the Innocent Killer, by John Kessel, is a comprehensive debunking of the book's "morality." Kessel details how Card stacks the deck, again and again, so that Ender can do the most incredibly awful things but still be a "good" person.
  • Kessel mentions Sympathy for the Superman, by Elaine Radford, which notes a wide range of similarities between Ender and Hitler, and theorizes that the original trilogy was meant to be a "gotcha" on his audience.
  • Finally, Roger Williams writes a conspiracy theory of his own: that Card didn't even write the original books.

Williams' story is unlikely, I think, but it's too much fun not to mention (and for a long time, his account was the only place you could read about the Nazi connection). Radford makes a stronger case, but chances are much of Ender's similarity to Hitler is just coincidence: Ender ends up on a planet of Brazilians because Card is a hack who went on a Mormon mission to Brazil as a young adult, he's a misogynist because his author is one, and he justifies his genocide with a lot of blather about "intention" because Card chickened out on the clear implication of the first book: that his protagonist really was a psychopath who wiped out an entire civilization based on an elaborate self-deception.

It's Kessel's essay that's been the most quoted over the years, and for good reason. It's a brutal deconstruction of the tropes used to build Ender's Game, and ends in a deft examination of why the book remains so popular:

It offers revenge without guilt. If you ever as a child felt unloved, if you ever feared that at some level you might deserve any abuse you suffered, Ender’s story tells you that you do not. In your soul, you are good. You are specially gifted, and better than anyone else. Your mistreatment is the evidence of your gifts. You are morally superior. Your turn will come, and then you may severely punish others, yet remain blameless. You are the hero.

Ender never loses a single battle, even when every circumstance is stacked against him. And in the end, as he wanders the lonely universe dispensing compassion for the undeserving who think him evil, he can feel sorry for himself at the same time he knows he is twice over a savior of the entire human race.

God, how I would have loved this book in seventh grade! It’s almost as good as having a nuclear device.

Like a lot of people, I did have this book in seventh grade (or earlier — I'm pretty sure I read it while attending junior high in Indiana). And I did love it as a kid, for most of the reasons that Kessel states: I was a bright kid who didn't have a lot of friends, felt persecuted and misunderstood, and struggled to find a way to express those feelings. Eventually, I grew up. Looking back on it, Ender's Game didn't really do any harm — like a lot of kids, I wasn't actually reading that critically. It's just kind of embarrassing now, and I definitely don't want to go to a theater and relive it.

Feeling embarrassed by your childhood reading material is a common rite of passage for many people, and science fiction readers probably more than others. Jo Walton refers to this as the Suck Fairy. It's tempting, when this happens, to wish we could go back in time and take these books off the shelves — or stop readers now from encountering them in the first place — but it's probably a better idea to foster discussion (a happy side effect of an active adult readership for "young adult" titles) or have alternatives ready on hand.

Recently I re-read another beloved book from my childhood: The Westing Game by Ellen Raskin. If you haven't taken a look at it lately, you really should. Apart from the titles, the two books have aged in radically different ways — in fact, The Westing Game is probably better now than it was then. I remember reading it mostly as a puzzle: first to solve it, and then again to appreciate the little clues that Raskin works in. But as for the warmth, the sympathetic characterization, and most of all the humor (seriously, it's an uproariously funny book): I missed out on all of these things when I was a precocious youngster identifying with Turtle and her shin-kicking ways, just like I missed Ender's fascist tendencies.

And so ultimately, I'm not worried about young people reading Ender's Game and being influenced for the worse, because I suspect that what they take from it is not what Card actually wants them to take. It's sometimes difficult — but also crucial — to remember that the reader creates the story while reading, almost as much as the author does. Should we speak out against hateful works, and try not to give money to hatemongers? Sure. Will I be going to see Ender's Game at the local cinema? Definitely not. But I'll always understand people who have a soft spot for it anyway. Despite my bravado, despite the fact that I dislike everything it has come to stand for, I'm one of them, and I'm not going to let Card make me feel bad about that.

October 21, 2013

Filed under: tech»web

Caret 1+

In the time since I last wrote about Caret, it's jumped up to 1.0 (and then 1.1). I've added tab memory, lots of palette search options, file modification watches, and all kinds of other features — making it legitimately comparable with Sublime. I've been developing the application using Caret itself since version 0.0.16, and I haven't really missed anything from native editors. Other people seem to agree: it's one of the top dev tools in the Chrome web store, with more than 1,500 users (and growing rapidly) and a 4.5/5 star rating. I'm beating out some of Google's own apps, at this point.

Belle's reaction: "Now you charge twenty bucks for it and make millions!" It's good to know one of us has some solid business sense.

The next big milestones for Caret will focus on making it better at workflow, especially through plugin integration. Most editors go from "useful" to "essential" when they get a way for users to extend their functionality. Unfortunately, the Chrome security model makes that much more difficult than it is with compiled binaries: packaged apps aren't allowed to expose their internals to external JavaScript, either through script tags or eval(). The only real extension mechanism available is message-passing, the same as web workers.

Caret is already designed around message-passing for its internal APIs (as is Ace, the editing component I use), so it won't be too difficult to add external hooks, but it'll never have the same power as something like Sublime, which embeds its own Python interpreter. I can understand why Google made the security decisions they did, but I wish there was a way to relax them in this case.
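The shape of that mechanism is easy to sketch. This is a hypothetical command bus, not Caret's actual API: the host registers named commands, and a plugin can only send serializable messages and receive replies, mirroring chrome.runtime-style messaging rather than direct function calls:

```javascript
// Hypothetical message-passing extension point (not Caret's real API):
// plugins interact only through messages, never by touching internals.
function createCommandBus() {
  const handlers = new Map();
  return {
    // The host application registers named commands.
    register(command, fn) {
      handlers.set(command, fn);
    },
    // A plugin "sends a message" and receives a reply via callback.
    send(message, reply) {
      const fn = handlers.get(message.command);
      if (!fn) {
        reply({ error: "unknown command: " + message.command });
        return;
      }
      reply({ result: fn(message.args) });
    }
  };
}
```

The tradeoff is visible even at this scale: everything crossing the boundary has to be a plain message, so a plugin can never reach into the editor's state the way a Sublime Python script can.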

I figure I have roughly six months to a year before Caret has any serious competition on Chrome OS. Most of the other editors aren't interested in offline editing, or are poorly positioned to add it for architectural reasons. The closest thing to Caret from the established players would be Brackets, which still relies on NodeJS for its back-end and can't yet run the front-end in a regular browser. They're working on the latter, and the former will be shimmable, but the delay gives me a pretty good head start. Google also has an app in the works, but theirs looks markedly less general-purpose (i.e., it seems aimed specifically at people building Chrome extensions). Mostly, though, it's just hard to believe that no one had jumped in with something before I got there.

Between Caret, Weir, and my textbook, this has been a pretty productive year for me. I'm actually thinking that for my next project I may write another short book — one on writing Chrome apps using what I've learned. The documentation from Google's end is terrible, and I hate to think of other people having to stumble through the APIs the way I did. It might also be a good way to get some more benefit out of the small developer community that's forming around Caret, and to find out if there's actually a healthy market there or not. I'm hoping there is: writing Caret has been fun, and I'd like to have the chance to do more of this kind of development in the future.

October 15, 2013

Filed under: culture»internet»excession

Hacker No

A few years ago, I started blacklisting web sites that I didn't think were healthy: gadget sites, some of the more strident political sites, game blogs that just churned crappy content all day long. If it didn't leave me better informed, or I felt like my traffic there was supporting bad content, or if the only reason I visited was for the rush of outrage, I tried to cut it out (or at least cut it down). All in all, I think it was a good decision. I felt better about myself, at least.

This week, I added Hacker News to the list of sites I don't visit. HN is the current hotspot for tech community news--kind of a modern-day Slashdot. Unfortunately (possibly by virtue of being run by venture capital firm Y Combinator), it's also equally targeted at A) terrible startup company brogrammers and B) libertarian bitcoin enthusiasts. Browsing the links submitted by the community there is kind of like eating at dive restaurants in a new city: you'll find some winners, but the price is a fair amount of food poisoning.

For a while, I've been running a Greasemonkey script that tries to filter out the worst of the listings (sample search terms: lisp, techcrunch, hustle). This is not as easy as it sounds, because HN is written using the cutting-edge technologies of 1995: a bunch of nested tables with inline styles, served via a Lisp variant that causes constant timeouts on anything other than the front page. But even though I had a workable filter from a technical perspective, at some point, it's time to hang up the scripts and admit that the HN community is toxic. There's only so long you can not read the comments, especially on any thread involving sexism, racism, and other real problems that Silicon Valley would like to pretend don't exist.
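The filtering logic itself is the trivial half of that script — something like this sketch (the names and story shape are invented for illustration; the painful part, as noted, is scraping HN's nested-table markup, which is omitted here):

```javascript
// Sketch of a Greasemonkey-style keyword filter: drop any story
// whose title contains a blacklisted term, case-insensitively.
const BLACKLIST = ["lisp", "techcrunch", "hustle"];

function filterStories(stories, blacklist = BLACKLIST) {
  return stories.filter(story => {
    const title = story.title.toLowerCase();
    return !blacklist.some(term => title.includes(term));
  });
}
```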

For example, here's some of the things I've been trying to ignore:

  • Women just don't want to be in tech, any more than they wanted to be doctors, lawyers, or astronauts.
  • Literally blaming women for being sexually assaulted at tech conferences
  • It's great to be homeless, and by "homeless" we mean rich.
  • My favorite: Paul Graham, founder of Y Combinator, admits that he won't fund people with an accent. Here's the fun part: Graham is one of the "major contributors" to FWD.us, Mark Zuckerberg's self-serving immigration reform lobby group. In other words, foreign workers are welcome, unless they want to start a company.

The tech bubble isn't just financial: these are signs of a community that's isolated from difference — of gender, of opinion, and of class. The venture capital system even protects them from consequences: how much money will Twitter lose this year? The fog of arrested development that hangs over Hacker News is its own argument for increased diversity in the tech industry. And it affects more than just a few comment threads, unless you also think the best use of smart people's time is the development of a $130 smoke detector that talks to your iPad.

Leaving a well-known watering hole like this is a little scary — HN is how I've stayed current on a lot of new developments in the field. It's frustrating, feeling like good information is being held hostage by a bunch of creeps. But given a choice between reading an article a couple of days after everyone else or feeling like I constantly need a shower, I'm happy to work on my patience.

October 4, 2013

Filed under: politics»national»congress

The Crazification Factor: Part n in an Infinite Series

John: Hey, Bush is now at 37% approval. I feel much less like Kevin McCarthy screaming in traffic. But I wonder what his base is --

Tyrone: 27%.

John: ... you said that immediately, and with some authority.

Tyrone: Obama vs. Alan Keyes. Keyes was from out of state, so you can eliminate any established political base; both candidates were black, so you can factor out racism; and Keyes was plainly, obviously, completely crazy. Batshit crazy. Head-trauma crazy. But 27% of the population of Illinois voted for him. They put party identification, personal prejudice, whatever ahead of rational judgement. Hell, even like 5% of Democrats voted for him. That's crazy behaviour. I think you have to assume a 27% Crazification Factor in any population.

John: Objectively crazy or crazy vis-a-vis my own inertial reference frame for rational behaviour? I mean, are you creating the Theory of Special Crazification or General Crazification?

Tyrone: Hadn't thought about it. Let's split the difference. Half just have worldviews which lead them to disagree with what you consider rationality even though they arrive at their positions through rational means, and the other half are the core of the Crazification -- either genuinely crazy; or so woefully misinformed about how the world works, the bases for their decision making is so flawed they may as well be crazy.

John: You realize this leads to there being over 30 million crazy people in the US?

Tyrone: Does that seem wrong?

John: ... a bit low, actually.

I saw a CBS poll this morning stating that 25% of the public favors the shutdown of the federal government. 80 representatives (18.4% of the House, one third of the Republican caucus, representing roughly 18% of the total population) signed the original manifesto leading to the shutdown. Even if the numbers are a little low, is there any remaining doubt that John Rogers' Crazification Factor remains more accurate and revealing than most of Politico on any given day?

This is what you get when you elect people who don't believe in government to political office. You cannot deal with the Suicide Caucus, because they don't recognize the legitimacy of the rules that the Congress is supposed to operate under (thus the endless parade of funding delays and filibusters over the last seven years). Besides, they don't want to negotiate. They've gotten what they wanted: the government is basically closed for business, and they couldn't be more thrilled about it.

Future - Present - Past