
Some Thoughts on Glass Onion



It occurred to me as I was starting to draft this post, my first of 2023, that my first post of 2022 was about the Netflix film Don’t Look Up. Well, not about that movie so much as mentioning it in passing, but it still struck me as serendipitous—both because my first post this year is also about a Netflix property, and, more significantly, because it’s about another film featuring a satire on the figure of the genius tech billionaire.

Don’t Look Up, for those who don’t recall or never watched it, is a broad and profoundly unsubtle parable of climate change in which an asteroid’s imminent collision with Earth will be an extinction-level event. Rather than mobilize the globe behind a concerted effort to destroy or divert the asteroid, the American president and the media treat it, well, like they’ve treated climate change. Which is to say, with denial, deflection, and a minimization of the threat until it’s almost too late. And when they finally launch a salvo of ICBMs, the president aborts minutes after launch because of the intervention of Peter Isherwell—a genius tech billionaire played by Mark Rylance—whose engineers have discovered that the asteroid is chock full of extremely valuable minerals. Long spoiler short, Isherwell’s brilliant plan to fragment the asteroid into non-lethal but very harvestable bits fails and everyone on Earth dies.

Let’s stick a pin in that for a moment and turn to Glass Onion, Rian Johnson’s second film featuring master detective Benoit Blanc, who is played once again with glorious aplomb by Daniel Craig. Two films in what I hope will become a prolific series aren’t enough to discern a thematic pattern just yet, but both Glass Onion and its predecessor Knives Out share the common premise of wealthy, entitled people being brought low by a young working-class woman—Ana de Armas in Knives Out, Janelle Monae in Glass Onion—with, of course, the assistance of Blanc and his gleeful scenery-chewing. If Knives Out was more generally a class warfare parable, Glass Onion is a broadside against the ostensible meritocracy of the “disruption” economy. The second film somehow manages to be at once less subtle and more nuanced in its critique: less subtle because billionaire Miles Bron (Edward Norton) and his “Disruptors” are basically archetypes of the social media era1 and because Bron’s downfall at the film’s end is nakedly cathartic schadenfreude; more nuanced because of the film’s critical implications, which are what I want to tease out in this post.

The conceit at the center of the film, which is also its big reveal, is that genius tech billionaire Miles Bron not only isn’t a genius, he’s actively a moron.

As alluded to above, this is a cathartic reveal. It is also a serendipitous one in the present moment, coming as it does in the midst of Elon Musk’s heel turn.2 One of the slowest, hardest lessons that we still haven’t entirely learned over the past several years (as this film dramatizes) is not to assume complexity of purpose and motive where there isn’t any; not to assume arcane subtlety of thought when stupidity, cupidity, and/or simple incompetence makes just as much sense. This indeed was the central fallacy of the conspiracy theories imagining Trump as a Russian asset, with Putin manipulating him in a byzantine web of intrigue; it is also the delusion at the heart of the 2020 election denial with its enormous ensemble of plotters that somehow linked the late Hugo Chavez to Italian military satellites and Dominion voting machines.

It is also the conceit at work in theories positing that what we see of Elon Musk’s actions at Twitter are merely glimpses of an otherwise invisible and massively complex game of four-dimensional chess, which we non-geniuses perceive as hapless floundering. I’ve now seen numerous such fantasies explicated in varyingly granular detail—some framed in awe of his brilliance, some as warnings of his Bond-villain plotting. But really, I have to think it’s all like the glass onion of Johnson’s film: a symbol of complexity that is, in actuality, transparent.

Trump was only a Russian asset insofar as he wanted to emulate Putin, which Putin knew would be bad for America. Accordingly, the Russians did their best to fuck with the election, but that was blunt-force action, not a labyrinth of intrigue. The 2020 election was, very simply, an election that voted out an historically unpopular president. And Elon Musk thought he could treat an irreducibly complex cultural and political morass as an engineering problem. In each case, people needed to see complexity instead of looking, as Daniel Craig’s detective Benoit Blanc suggests, “into the clear centre of this glass onion.”

A good murder mystery benefits from a great twist, a revelation that shocks the reader/audience all the more because it seems so obvious in hindsight. Agatha Christie was a master of this: she gave us mysteries in which, by turns, every suspect was guilty, none of them were, and in one instance—a twist that made my students genuinely angry on the one occasion I taught the novel in question—the narrator did it. What’s revelatory in Glass Onion isn’t the identity of the culprit, but his idiocy. By the time Benoit Blanc is doing the classic gather-the-suspects-in-the-library bit, his ensuing speech isn’t meant to reveal Miles Bron as the murderer, it’s to stall for time so Helen (Janelle Monae) can find the proof in his office they need. As Blanc rambles into increasing incoherence, drawing out his monologue as much as he can, he inadvertently finds his way to an epiphany,3 one that offends his sensibility as “the world’s greatest detective”:

His dock doesn’t float. His wonder fuel is a disaster. His grasp of disruption theory is remedial at best. He didn’t design the puzzle boxes. He didn’t write the mystery. Et voila! It all adds up. The key to this entire case. And it was staring me right in the face. Like everyone in the world, I assumed Miles Bron was a complicated genius. But why? Look into the clear centre of this glass onion. Miles Bron is an idiot!

I’m going to go out on a limb and guess that the most prominent tech billionaires aren’t nearly as moronic as Miles Bron is portrayed. After all, in the film Bron’s supposed genius is revealed to be entirely fabricated, the product of theft and systemic mendacity, to the point where it almost strains credulity that he could have successfully managed a multibillion dollar corporation. Almost! The point made over the course of the film is just how much of Bron’s success is predicated on other people’s vested interest in maintaining his mythos. A key scene at the beginning features Lionel (Leslie Odom Jr.), Bron’s chief engineer, in the midst of trying to salve the concerns of the board of directors. The board seems to be getting fidgety about their erratic CEO. Lionel makes a case we’ve heard made a lot in the past two decades: sure, Bron’s ideas seem out there, but that’s just his genius, his capacity for blue-sky thinking!

(I should pause to note how good Odom Jr.’s performance is: he communicates quite deftly, through his tone and facial expressions, how he’s papering over his own misgivings and making an argument he desperately needs to believe).

All of the people invited to his private island in a classic murder mystery setup—his “Disruptors,” as he fondly calls them (more on that momentarily)—are people who are beholden to Bron, and whom he needs to maintain the fiction of his genius. The point of contention at the heart of the mystery is that Bron stole the work of his former partner Cassandra (Monae). When she sued, all the others perjured themselves in support of Bron because they needed to stay in his good graces, in part because he had become the principal patron in all their endeavours, but also because their own success had traded on Bron’s name and reputation for being a genius.

Ten years ago or so, Chris Hayes (of MSNBC fame) wrote Twilight of the Elites, a trenchant critique of the cultural tendency to understand meritocracy uncritically, as an invariably positive thing. As a general principle, Hayes observes, there isn’t much to quibble with in meritocracy: who seriously thinks that the best ideas, the strongest performances, the most talented people, shouldn’t be rewarded? The problem however is less with the principle itself than the conviction that it can be a self-sustaining system. Left unregulated, the libertarian ideal asserts, the best people, products, and ideas will inevitably excel and failure will be relegated, deservedly, to the dustbin of history. Interference in the pursuit of pure excellence—especially by the government—is a recipe for mediocrity and turgidity.

The unavoidable problem with this premise, Hayes points out, is that if your sole criterion is rewarding success—with minimal oversight—you inevitably reward cheating. He cites numerous examples, but the one I found most striking wasn’t in the book itself, but one that unfolded right at the time the book was published, and which Hayes cited in numerous interviews he gave. In late 2012, longstanding rumours about Lance Armstrong’s cheating came to a head. In January 2013, he gave the now-notorious interview with Oprah Winfrey in which he admitted to systematically doping. But as Hayes notes, it wasn’t just that Armstrong cheated pharmaceutically over a long period, but that he created circumstances in which he obliged everyone around him to be complicit—either by actively aiding him, by remaining silent, or, in the case of other cyclists on the American team, also doping in order to keep the team as a whole competitive. Once complicit, of course, it was in everyone’s self-interest in Armstrong’s orbit to perpetuate and maintain the fictions that sustained their livelihoods. What’s more, this cohort of self-interested insiders colluded to silence or suppress anyone seeking to expose the truth.

Hence, to my mind the most significant aspect of Glass Onion is not the broad parody of a billionaire who’s not nearly as smart as he thinks he is or pretends to be; while the spectacle of Helen destroying the symbols of Bron’s wealth and excess that ends the film is profoundly cathartic, it perhaps obscures to some extent the culpability of Bron’s enablers. Helen’s rage, after all, is elicited not just by Bron’s successful destruction of the evidence that would have exposed him, but by the complicity—again!—of his “Disruptors,” who shamefacedly accede to continue lying for him, in spite of the fact that they now know he has murdered two of their number.

In the end, Glass Onion is about the fallacy of disruption. Bron’s Disruptors are as much disruptors as Bron is a genius: he flatters them outrageously in turn to Blanc at one point, enumerating the ways each supposedly acts as a productive chaos agent in their respective fields, but when push comes to shove disruption is the last thing they want—having built wealth and power, their interest is in consolidating and expanding it. This indeed is Silicon Valley writ small: Mark Zuckerberg’s motto might be “Move fast and break things,” but even an indifferent observer will note that it has been a very long time since he has broken anything or moved very fast or very far. It is perhaps ironic that in the current tech landscape, dominated by billionaires and corporate giants, the true disruptors are governmental figures like Elizabeth Warren—those who are quite vocal in their determination to disrupt such tech monopolies as Amazon and Facebook (sorry—Meta) and break them up into smaller, more competitive chunks.

It is heartening, however, to consider the difference a year makes when comparing Miles Bron to Peter Isherwell of Don’t Look Up. Even just a year ago, the central conceit of an idiot billionaire wouldn’t have felt quite as on the nose. Indeed, Mark Rylance’s performance in Don’t Look Up is more consonant with the general assumptions of the past two decades: his portrayal of Isherwell is that of a genius, self-absorbed and arrogant to the point of sociopathy, but a genius nonetheless. Miles Bron, by contrast, is uncannily apposite to a moment when a critical mass of tech industry fuckery has (finally) called into question unexamined assumptions of tech genius: Mark Zuckerberg’s metaverse obsession and Facebook’s massive value loss; Sam Bankman-Fried’s detonation of cryptocurrency futures through the simple expedient of colossal financial incompetence; Elizabeth Holmes’ guilty verdict; Peter Thiel’s real-time transformation into comic-book villainy; and of course Elon Musk’s ongoing Twitter meltdown. Cumulatively, these and similar misadventures cannot help but make plain something that never should have been far from the collective understanding: that talent, brilliance, and genius in one area not only don’t necessarily indicate capability in others, but that they hardly ever do. Pair this slowly creeping realization with the obvious, observable ways social media and digital culture polarize and factionalize people, and it’s hard to take seriously the persistent techno-utopianism of the Silicon Valley set.

It will however be difficult for many to let go of that mythos, not least because the entire industry is deeply invested in propagating it. It is telling that one of the funniest and most cited moments of the film is when Birdie (Kate Hudson) still struggles to see genius in Bron’s idiocy. “It’s so dumb,” says Blanc, disgustedly. “So dumb it’s … brilliant!” Birdie breathes. “NO!” thunders Blanc. “It’s just dumb!”


1. There’s Birdie (Kate Hudson), a fashion icon turned influencer whose constant inadvertently racist gaffes on social media have become part of her brand; Claire (Kathryn Hahn), a liberal-ish politician whose progressive bona fides coexist uneasily with her indebtedness to Bron; Lionel (Leslie Odom Jr.), the engineer largely tasked with realizing Bron’s ideas, which come in varying shades of whackadoodle; Duke (Dave Bautista), a masculinity guru and men’s rights influencer; and Whiskey (Madelyn Cline), Duke’s girlfriend and arm candy who is herself an influencer with political ambitions, and who might be the smartest of the group. Rounding out the group but not part of it is Peg (Jessica Henwick), Birdie’s long-suffering assistant who has possibly the funniest moment in the entire film.

2. In numerous interviews, Rian Johnson has pushed back against the assumption that Miles Bron is a one-to-one analogue for Elon Musk, pointing out that he wrote the screenplay in 2020 and shooting wrapped well before Musk broke cover from Tesla and SpaceX and fully displayed his inner twelve-year-old in his fraught takeover of Twitter. Miles Bron, Johnson maintains, was written as an amalgam; it was just happy (or awkward) coincidence that the film’s release coincided with the escalation of the Twitter saga.

3. As Blanc fumbled his way to his epiphany, I could not help but remember all the times I’ve under-prepared for a lecture and felt myself start to go off the rails, only to inadvertently work out something in the midst of my rambling that reveals something about the topic that hadn’t before occurred to me. I wasn’t sure whether this part made me feel attacked or seen.


Filed under film, what I'm watching

Thoughts on expanded universes, part two: Solo and the whole Star Wars thing

I’ve been working on part two of my thoughts on expanded universes series, and it keeps getting away from me—which is perhaps only appropriate. “It grew in the telling,” Tolkien said of The Lord of the Rings, a sentiment echoed by George R.R. Martin. Which is not to compare my modest blogging project here to their work, but to observe that even writing about world-building is, well, an ever-expanding project, never mind actually engaging in the process.

I have a lot written, and the problem is that the subject wants to run off madly in all directions. And given that this problem was exacerbated when, a few days ago, I went to see Solo: A Star Wars Story, I figured that perhaps a few words about Star Wars in general, and Solo in particular, might help things out.

So just to be clear: SPOILERS AHEAD.



First, a Review



What gravity the Star Wars films possess—which is to say, the establishment of high stakes (the fate of the galaxy, e.g.) and a certain amount of dramatic tension—is pretty much absent here for reasons more pithily summed up by Joshua Rothman in The New Yorker: “We already know what will happen—Han will meet Chewbacca, make the Kessel Run in twelve parsecs, win the Millenium [sic] Falcon in a card game, and end up a rakish bachelor—and this puts any genuine suspense out of reach.” I’d say this was the inherent problem with prequels—knowing where the story ends up—if it weren’t also an inherent aspect of most genre fiction. The more important question is how we get there. Solo lacks the aforementioned narrative stakes we find in, say, Rogue One, and this film telegraphs its end point(s) even more obviously. There’s little in the way of character-based tension, and nothing in the way of difficult or problematic choices that lead us to where we know Han ends up. It would have made for a more nuanced evolution if Han were even a little bit morally compromised. But no: he begins with altruistic intent (keen to escape his bondage with the girl he loves), and ends with an altruistic act (giving up a fortune for a revolutionary cause), and at every point in the story he makes the choices he does in the name of the former. His betrayal at the end by both Beckett and Qi’Ra is such an obvious plot twist that it doesn’t deserve the name. (Emilia Clarke’s Qi’Ra is, indeed, the most interesting character in the film. Beckett not so much, for reasons I’ll get into below).


The point is that this Han Solo is so very … Disney. When we meet Han in A New Hope, he is a familiar generic character: the Byronic gunslinger modeled on every such character in a western not played by Gary Cooper. Such characters are fascinating because we know, without knowing the particulars, that they have tortured, morally compromised pasts (which is why Lucas’ much-vilified change to the Greedo scene is as much a sin against genre as against Han’s character)—but that they will ultimately use their talent for lawless violence in the service of capital-G Good, and thus find redemption.

But the Han Solo at the end of Solo needs no redemption, for he has not transgressed, except against a totalitarian regime and organized crime bosses. I suppose it remains to be seen whether we’ll get a bunch of other films in which we see Han develop his more cynical mien, but as an “origin” story, we don’t hit the closing credits with much more than the knowledge that Han Solo has always had a snarky and roguish sense of humour, but no sense that these aspects run deep. As I said, this is the Disney Han, with more in common—unlike Harrison Ford’s version—with such handsome rogues as Aladdin than with any of his western genre precursors.

In fact, the contrast I found myself making was between Han Solo and Malcolm Reynolds from Firefly—first, because Nathan Fillion as Mal did cynical self-interest even better than Ford did (though to be fair, he had twice as many hours as a principal character to develop it, and a much better team of writers). But also because the brief glimpses we get of Mal’s origin story do an exceptional job of explicating just why he’s now such a cynical bastard with occasional gleams of altruism. We really don’t need that with Han Solo, because he comes to us in A New Hope as a fully formed trope, but if you’re going to rip off Firefly, you might as well take a lesson from its narrative nuances.

Second, the whole train heist sequence, to say nothing of Beckett’s original crew, felt totally lifted from Joss Whedon’s space western. The second episode of Firefly, which was aired first (because, as we know, Fox execs have the critical acumen God gave walnuts), featured a train robbery by spaceship that goes badly. In and of itself, this isn’t cause to suspect plagiarism; what is, is the parallel between crews: the gunslinging cynical wisecracker in a long coat, the no-nonsense Black woman as first officer, and the glib, cheerful pilot (respectively, Mal, Zoe, and Wash in Firefly, Beckett, Val, and Rio in Solo); and once Chewbacca is on board, we have a very tall dude to act as muscle, Solo’s analogue to Firefly’s Jayne Cobb.



Philosophers talk about epistemic closure, but this feels a lot like generic closure: Firefly owes its existence to Star Wars generally, and the character of Malcolm Reynolds to Han Solo specifically. Which isn’t to say that what Whedon did with the series wasn’t new and interesting, but that it wore its homage (ironic and otherwise) on its sleeve. That we’ve come full circle here—in which an entry in the new Star Wars canon apes tropes from a television series that was aping tropes from the original films—is perhaps unsurprising in a mass entertainment context in which recycling and rebooting is a much safer financial bet than creating new material.

Unfortunately, the whole “expanded universe” trend of the moment does seem to be at least as motivated by an aversion to novelty as any genuine interest in the exercise of world-building. And on that note …


Donald Glover is Lando Calrissian in SOLO: A STAR WARS STORY.

Yup, Lando was one of the best parts of the film, which is why he doesn’t come in for any kvetching in my comments.

Solo and the (Revamped) Star Wars Expanded Universe

In Solo’s penultimate scene, we discover who the Big Bad behind the Crimson Dawn and all the other cartels is. And the threatening hooded figure in the hologram is … Darth Maul! Whom we last saw falling down an airshaft in The Phantom Menace, bisected by Obi-Wan Kenobi’s lightsaber (well, technically by Qui-Gon Jinn’s, but … well, never mind).

Because I tend to follow links down the nerd-hole, my response to this was less “WTF?” than “Really … you’re going there?”

See, in Rogue One, we met a character played by Forest Whitaker named Saw Gerrera—a former member of the Rebellion who had been ousted because he was considered an extremist. Because I read a bunch of online reviews of the film and clicked on the aforementioned links, I learned that Saw was a character from the animated series Star Wars: The Clone Wars, and that his appearance in Rogue One evoked something resembling geekgasms among the most dedicated Star Wars fandom.

Why this is interesting to me beyond simple trivia is a question that brings us back to something I alluded to in my Infinity War post—namely, that we (so far as I’ve been able to figure it) get the phrase “expanded universe” from Star Wars, specifically from all of the stories (novels, comics, video games) embroidering the narrative of the original trilogy and the prequels. When Disney purchased Lucasfilm and commissioned J.J. Abrams to launch the new Star Wars franchise with The Force Awakens, a decision was made to basically invalidate the entirety of the “expanded universe.” Which, all things being equal, was perhaps unsurprising, considering that doing otherwise meant Disney and Abrams would have been obliged to adapt a rather involved series of novels (starting with Timothy Zahn’s Heir to the Empire) that detailed the lives of Luke, Han, Leia, et al after Return of the Jedi.

Instead, they chose to ignore them, and in the process render them “non-canon.” Though I’ve never been a follower of the expanded universe, it was quite obvious that this decision—to coin an expression—caused a great disturbance in the Force.

I’m less concerned by this disturbance (not least because I was not personally disturbed) than by the debates that emerged about what, then, was considered “canon” in the Star Wars expanded universe. And again, I’m less interested in the details of this debate than the significance of the word canon.

In my next post I’ll get into the ways in which terms such as “mythology” and “universe” have come to be used in relation to popular culture phenomena, but here “canon” seems as good a place as any to start. For myself, as a professor of English, the most significant definition of canon is what we might otherwise designate as “the great works”—that is to say, works of literature that define a given tradition. The English Canon traditionally starts in the Middle Ages with Chaucer (though might also include earlier works like Beowulf), and includes Spenser, Shakespeare, Donne, Milton, Dryden, Pope, the Romantics, up through such modernists as Joyce, Eliot, and Woolf. The problem with the idea of literary canons is that, like the list I just gave, they are by definition exclusionary and tend to privilege certain voices—every single author on that list is white, and only a single one is a woman.

I don’t want to get into an argument on this particular topic (I’d say that’s a whole other series of blog posts, but really it’s a library in and of itself), but rather bring up this definition of “canon” as one that is (or has been for about three decades or more) constantly under negotiation (when not, as is more common of late, being challenged outright). In its religious definition, “canon” denotes something transcendent or immutable, as in the Catholic church’s canon law. It also, and this is its relation to the literary understanding, designates those works of scripture which are accepted and considered the proper word of God (as opposed to the Apocrypha).

So on one hand we have the absolutism of religious doctrine, and on the other the more nebulous and negotiable conception of what works define a tradition. The analogues here to the Star Wars expanded universe should perhaps be obvious, though no doubt irksome to theologians and literary snobs. (I’m sure there’s a downside to it all, though).

What I’m interested in, here and in the next few posts, is the ways we engage with such fictional worlds, and the way they’re created and delineated. Last post, I talked about paratext as something that circumscribed and defined texts proper. The whole question of what we call “canon” is a large-scale example, whether in terms of what counts as biblical scripture or what narrative elements define the Star Wars universe, versus what we count as “apocryphal.” (When Disney and Abrams eliminated the extant expanded universe at a stroke, they made the glib suggestion that fans could consider those stories as “legends” in the context of the new canon—something that undoubtedly infuriated many, but raised the interesting prospect of seeing an expanded universe within an expanded universe).


But to return to the question of what is “canon”: the appearance of Darth Maul at the end of Solo, replete with robotic legs to replace those removed by Obi-Wan, would seem to confirm what many fans speculated after Rogue One—namely, that the animated series The Clone Wars and Rebels were included in the new canon. Apparently, The Clone Wars resurrected Darth Maul (who managed to stay alive through the sheer force of his hate and, presumably, the sheer force of the Force), and made him the head of a criminal underworld conglomerate. Eventually, he had showdowns with both Obi-Wan Kenobi and Emperor Palpatine himself; which leads one to surmise that we’ll be seeing more of him in the Star Wars spinoffs to come. And given that there’s been the suggestion of more “Star Wars stories” dedicated to Boba Fett and Obi-Wan Kenobi, it would seem that these “stories” will not quite be the one-offs that were originally hinted at, but something more resembling Marvel’s universe-building: it becomes easy to see how Han Solo, Lando Calrissian, Qi’Ra, as well as Boba Fett and Obi-Wan and Darth Maul may all find themselves crossing paths in a series of underworld vs. tyranny vs. rebellion films. Which, after all my kvetching about Solo, might redeem that film if in hindsight it proves just to be some elaborate throat-clearing to get the necessities of parsecs and rakish bachelorhood out of the way.


OK, I had some more thoughts on canonicity, but I will save them for what is now part three of my thoughts on expanded universes series. Until then, work hard and be good, my friends.


Filed under film, Magic Wor(l)ds, nerd stuff


So it’s been three days of Slayage, with one day left to go, and the experience has been amazing. There’s something pretty singular about attending an academic conference where everyone is intimately familiar with the core texts. Normally, the conference experience, while often rewarding, tends to have a lot of papers and presentations that are quite simply mind-numbingly boring. Not because they’re banal or poorly written/presented (though there are those), but because the balance of what people are writing on is pretty far out of your wheelhouse, or so extremely specialized that it simply has no relevance to you. Which is not to say such papers cannot be valuable–I have learned a great deal from papers on topics I never would have read were they not on a panel I was attending–just that in many cases, you find yourself playing catch-up, trying to grasp the substance of a topic with which you are unfamiliar.

The flip side of this unfamiliarity is the need, in writing your own conference paper, to include a certain amount of exposition: you need to be cognizant of the people in the room who know nothing about your topic.

But at a conference like Slayage, everyone knows everything. This is so incredibly liberating: while writing an early draft of my paper, I suddenly realized “Wait … I don’t need to outline the main plot points of The Cabin in the Woods … everyone there will have seen it!” And in some cases, know it far better than me, in spite of the fact that I watched it at least ten times through in preparing my paper.

As an aside: the building in which many of the presentations have been scheduled has an elevator that dings when its doors close … and that ding is pretty much identical to the elevators in Cabin just before they unleash ALL THE MONSTERS. I swear to you, after multiple viewings of this film, when I heard that ding the first time I nearly wet myself.

Ahem. Anyway, the point is that it’s a pretty remarkable experience to be in the company of many, many very intelligent people who are all nerding out about the same set of texts in an extremely intelligent manner. It’s what I imagine conferences must be like for James Joyce or Milton scholars, only less antagonistic. The closest I’ve come to an argument with anyone here was politely disagreeing with someone who thought that Lovecraft was just a throwaway gesture in The Cabin in the Woods.

Speaking of … I will post again tomorrow with pictures and a fuller discussion of the conference, but for now, as promised, here is my conference paper in full. Replete with many slides, because I went to the Linda Hutcheon School of Conference Presentations, which dictates that you must distract your audience from your paper’s flaws with pretty pictures.


My paper today emerges from a broader research project that looks at a handful of contemporary fantasists who employ this genre rooted in magic and the supernatural—and which in such defining texts as The Lord of the Rings and the Narnia Chronicles is overtly religious—to articulate a specifically secular and humanist world-view. I am looking at, among others, George R.R. Martin, Neil Gaiman, Terry Pratchett, Richard K. Morgan, and Lev Grossman … and to this lineup, Joss Whedon is an obvious addition. What I’m arguing today is that, in The Cabin in the Woods, Whedon proceeds from an identifiably Lovecraftian mythos, rewriting it to stage a confrontation between the absolute unreason of Lovecraftian gods and the instrumental rationality of technocratic conspiracy—and in that confrontation critiquing instrumentality of both hues and asserting a humanist argument that is consonant with almost all his previous and subsequent work.

Before I start, three quick caveats: first, I don’t mention Drew Goddard. Whedon and Goddard collaborated on The Cabin in the Woods, but I just talk about Whedon here—so when you hear me say “Whedon” in relation to Cabin, please imagine “and Goddard” following in parentheses. Second, I’m using the terms science, technology, empiricism, reason, and rationality more or less interchangeably to mean “instrumental reason.” I just didn’t want to have to say “instrumental reason” repeatedly. And finally, my definition of humanism here is, by design, very loose; one of the blue-sky goals of this research is to reclaim the concept of humanism from the arid positivism of the New Atheists, and recuperate it from its savaging during the ascendancy of poststructuralism. It’s early days yet, however, so my conception of the humanism I want to champion is still evolving.

There is a video on YouTube of Joss Whedon delivering a speech upon accepting the Harvard Humanist Society’s Outstanding Lifetime Achievement Award in Cultural Humanism. His speech is classic Whedon: a mix of disarmingly irreverent humour and passionate advocacy, culminating in his assertion that “Faith in God means believing absolutely in something with no proof whatsoever. Faith in humanity means believing absolutely in something with a huge amount of proof to the contrary. We are the true believers”; true believers who, he continues, are perfectly able to “codify our moral structure, without the sky bully looking down on us telling us what to do.” What stood out for me when I first watched this speech was the apparent contradiction between Whedon’s avowed atheism and the fact that the television series that made his reputation and career is not only lousy with sky bullies, but effectively predicated on the existence of a supernatural order that includes the Christian god (and, as we learn in season six, heaven). Far from being a contradiction, however, this tension is exemplary of how in Buffy the Vampire Slayer—and just about everything else he has created—Whedon consistently pits human and humanist agency against a seemingly omniscient, omnipresent collective, both of the supernatural variety (Wolfram and Hart, the First Evil) and the technocratic (the Alliance in Firefly, the Rossum Corporation in Dollhouse).

The Cabin in the Woods is thus an interesting example, insofar as it juxtaposes the malevolent mystical collective and the conspiratorial, technocratic one. At first blush the film appears to be a retread of elements from season four of Buffy—the massive underground military operation with a paddock full of supernatural monsters, which ultimately escape with dire consequences—except that where the Initiative attempts to weaponize the supernatural, the Technicians of Cabin are in abject submission to it, employing their hyper-advanced technology in the name of carrying out a primeval blood sacrifice. To frame it more abstractly, the film merges the genre of Lovecraftian horror with that of late-twentieth-century conspiracy theory.


Top: The Initiative, Buffy season four. Bottom: The Technicians’ bunker in The Cabin in the Woods

To address the Lovecraftian dimension first: Stephen King once famously characterized H.P. Lovecraft as twentieth-century horror’s “dark and baroque prince.” China Mieville, while granting the spirit of this praise, amends it slightly to account for “the canonical nature of Lovecraft’s texts, the awed scholasticism with which his followers discuss his cosmology, and the endless recursion of his ideas and his aesthetics by the faithful” (xi). Rather than being horror’s prince, Mieville says, Lovecraft is “horror’s pope.”


Considering Lovecraft’s vehement and vitriolic atheism, it is doubtful whether he would have appreciated that moniker; on the other hand, despite his atheism, Lovecraft’s fiction articulates a mythos that is heir to the American religious visionary tradition. As Edward Ingebretsen observes, “Lovecraft writes in the traditional cadences of religious discourses” (133) that are particularly reminiscent of such fire-and-brimstone sermons as Jonathan Edwards’ “Sinners in the Hands of an Angry God,” which notoriously centers on the image of God dangling Man by the ankles over the fires of Hell. Repeatedly, Lovecraft posits a vast and ineffable cosmos populated by godlike beings beyond the ken of humanity. He opens his story “The Call of Cthulhu,” arguably the defining text of his mythos, with the following cheerful observation:

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.

A key element I’ll be returning to here is the assertion of science’s absolute limitation, its helplessness in the face of those black seas of infinity. All it can serve to do is reveal to us the truly horrifying nature of existence, at which point our choices are madness or the rejection of the empiricism that brought us to this traumatizing revelation. Lovecraft’s fiction stages accidental encounters between individuals and these “terrifying vistas,” which are not the abyss of the infinite itself, but its symbolic manifestation in such Old Gods as Dagon or Cthulhu. Human existence in Lovecraft’s work is a thin scrim of ignorance in time and space, microbially insignificant next to the Old Gods.


This figuration of vast and nigh-infinite power is essentially religious in nature—or would be but for Lovecraft’s nihilistic inversion, which situates humanity not as the focus, product, or creation of the divine, but rather as utterly incidental to it: fire and brimstone without the chance of personal salvation. Indeed, in his book The Roots of Horror in the Fiction of H.P. Lovecraft, Barton Levi St. Armand observes that Lovecraft articulates a bone-deep Calvinism, with its “close-reasoning logic and unyielding determinism” but without Calvinism’s “metaphysical superstructure”—or in other words, the suffering and torment of the sinner’s life without the ultimate meaning attached to either salvation or damnation. “What we are left with in Lovecraft,” he asserts, “is thus a full-fledged cosmic consciousness, without any overt religious dimension … It is, in turn, the breaking of these natural laws of time and space that produces the sublime emotions of cosmic terror that characterizes his tales” (31-32). And whatever congress his characters do have with Cthulhu or any of the other Old Gods, the result is madness unto death—or, as in the case of the story “The Shadow over Innsmouth,” a monstrous transformation that itself comprises metaphorical madness. As Ingebretsen observes, Lovecraft adopts but distorts the American visionary tradition as represented by the Mathers or Jonathan Edwards, for if “Edwards implied that cosmic terror resulted from the too-attentive love of deity, Lovecraft situates terror in the indifference of [the] malignity of the cosmos” (118). China Mieville makes a similar argument, stating that Lovecraft does in fact see “the awesome as immanent in the quotidian” just as any religiously devout individual might, but for him and his characters “there is little ecstasy there: his is a bad numinous” (xiii).


It is not difficult to discern a distinctly Lovecraftian mythos in Buffy the Vampire Slayer: the idea that humanity is adrift among multiple planes of existence, most of which are populated by demonic forces that, if they think of humanity at all, think of it as a tasty snack; and an unbroken slayer lineage that goes back to Neolithic times, which was itself first created to defend humanity from the demons that pre-existed them. This mythos was expanded as the series progressed, explored more fully in later seasons and in Angel (never more pointedly perhaps than in the death of Fred at the hands of the Old God Illyria, whose contempt for humanity and her characterization of them as “the muck at [my] feet” and “the ooze that eats itself” strongly echoes Lovecraft’s assertion of humanity’s infinitesimal insignificance). The Cabin in the Woods alludes to Lovecraft’s mythos even more overtly: humanity is at the mercy of the Ancient Ones, gods who (like Cthulhu) slumber beneath the earth, known only to the small set of secret societies that worship them.



Cabin is no mere Lovecraft knockoff, however: Whedon deploys the Lovecraftian frame in an almost Miltonic manner, which is to say that it functions as a key to all mythologies, with seemingly every single horror movie trope both encompassed within, and indeed the product of, this broader mythos.


“They’re like something from a nightmare,” says new recruit Truman as he watches the Buckners—the zombified pain-worshipping backwoods idiots whom Dana has inadvertently summoned—on the Technicians’ panoptical surveillance screens. “They’re something nightmares are from,” Wendy Lin corrects him gently. “Everything in our stable is a remnant of the old world. Courtesy of … you-know-who.” Wendy’s statement echoes the way in which Lovecraft’s Old Gods—specifically Cthulhu in “The Call of Cthulhu”—inflect and infect the dreams of humanity. In Lovecraft’s mythos, the Old Gods incite madness and ecstatic worship even in their sleep, giving rise to “Cthulhu cults” in the backwaters of the world, whose devotees are described as an “indescribable horde of human abnormality” (152).


In The Cabin in the Woods, Lovecraft shares the stage, however, with the familiar (and, it seems at times, inescapable) trope of conspiracy. The Technicians play the role of the top-secret agency with omniscient surveillance capabilities and the seemingly infallible ability to manipulate and control unwitting victims. Situating them as adjunct to the Ancient Ones provides the film with a dual layer of critique: first and most obviously of the horror genre itself; but in juxtaposing the trope of conspiracy with that of an ancient, malevolent supernatural power, the film at once draws out and negates conspiracy’s own principal symbolic force, which is the suggestion of divine or godlike powers.

To a certain extent, conspiracy as a trope has always functioned as a form of perverse theism, but that dimension became increasingly striking in the second half of the twentieth century. Don DeLillo, America’s veritable godfather of conspiracy and paranoia, calls it “the new faith” (28). Scott Sanders similarly declares, “God is the original conspiracy theory,” and goes on to say that the conspiratorial world is one “governed by shadowy figures whose powers approach omniscience and omnipotence” (177). In Totem and Taboo, Sigmund Freud characterizes religion as essentially conspiracist in origin, comparing the figure of the paranoiac to primitive societies that ascribe to their god-king persecutory powers over weather and plague; he makes an identical argument in Psychopathology of Everyday Life. And philosopher Karl Popper suggests that “the conspiracy theory of society” is simply the displacement of “a belief in gods whose whims and wills rule everything” onto the whims and wills of powerful organizations:


Hence, conspiracy narratives themselves frequently have something of the bad numinous at their center, manifesting symbolically as the suggestion of continuity between the technological present and a magical past—which functions more broadly to rewrite history as conspiracy, with the present-day conspirators heir to their ancient predecessors. Or to quote Fredric Jameson from The Geopolitical Aesthetic, the symbolic force of conspiracy narratives “draws not on the advanced or futuristic technology of the contemporary media so much as from their endowment with an archaic past” (17).


And here is where I want to go back to Lovecraft’s implied characterization of the numinous and science as different in kind rather than degree. Much fantasy, especially urban fantasy, either implicitly or explicitly depicts science and magic as on a continuum: in a variation on the old adage that a sufficiently advanced technology must appear as magic, the implication is that technology has the capacity to explain and replicate magic. Season four of Buffy is essentially an extended meditation on this principle.


Angel approaches it from a slightly different perspective, with our growing knowledge of Wolfram and Hart’s inner workings: the “legalization” (if you like) of the supernatural functions similarly to rationalize and domesticate the numinous. (My favourite depiction of this is the change in season five’s opening credits, in which the musical punctuation changes from the image of badass Angel kicking in a door to harassed Angel snapping shut a legal brief).


It is here that The Cabin in the Woods offers a subtle but substantive shift: the film appears to establish this same continuum between the empirical and supernatural, which would be doubly consonant with the trope of conspiracy as a form of displaced theism. The shift, however, is that the Technicians’ “archaic past” is not continuous with divine power but adjacent to it. The conspiracy evolved as subsidiary: as already mentioned, it is explicitly established as being in the service of an extant (albeit secret) theism. There is no hint of the Initiative here aside from cosmetic similarities. The Technicians do not attempt to domesticate their stable of monsters or weaponize them. The climactic slaughter that unfolds when Marty “purges” the system is superficially similar to what ultimately happens to the Initiative; but while the Initiative’s demise is an obvious allegory for the dangers of hubristically pursuing weapons technology, Marty quite literally unleashes hell.


The significance of The Cabin in the Woods’ shift from portraying science and magic as continuous, to this absolute disjunction between them, is not to allegorize the incommensurability between instrumental reason and the numinous, but to ironically collapse them into the same imaginative space, to show reason’s thralldom to unreason not as unnatural but somehow inevitable. If the film allegorizes anything, it is that dimension of the dialectic of Enlightenment that points to how instrumental reason taken to an extreme—in effect, becoming a religion in and of itself—produces the madness of unreason. The French Revolution devolves into the Terror, exclusionary understandings of humanism facilitate race theory and slavery, the Nazis’ dictatorial technocracy produces the Holocaust, the blind pursuit of nuclear physics gives us Hiroshima. Perhaps it seems odd, even offensive, to discuss a parodic genre film in these terms, but I would argue that among the many, many reasons to love the work of Joss Whedon, one of the most prominent is the fundamental antipathy and suspicion that animates all he does: antipathy to and suspicion of instrumentality, of autocratic intervention, of the collective’s need to impose its will on the village.


The Cabin in the Woods does not end happily, but it ends with an inescapably humanist cri de coeur: Marty and Dana become literally satanic (see, there’s Joss being Miltonic again) as they declare non serviam, and assert what agency they have in the face of the forces arrayed against them. The obvious argument against my claim here is that “um … they kind of killed the whole world with their petulance, there.” But I would suggest that the film’s overarching thesis is that it wasn’t Marty and Dana who brought doom—it was the ossification of instrumental reason in the service of madness. Whedon may employ Lovecraft as a foundational basis for much of his work, but he invariably asserts this elementally humanist defiance in its face, and says to both technocracy and religion “a plague on both your houses.”




DeLillo, Don. “American Blood.” Rolling Stone 8 Dec. 1983: 21-28, 74.

Freud, Sigmund. Totem and Taboo: Resemblances Between the Psychic Lives of Savages and Neurotics. Trans. A. A. Brill. New York: Moffat, Yard & Co., 1918.

—. Psychopathology of Everyday Life. Trans. A. A. Brill. London: Ernest Benn, 1948.

Ingebretsen, Edward. Maps of Heaven, Maps of Hell: Religious Terror as Memory from the Puritans to Stephen King. New York: M.E. Sharpe, 1996.

Jameson, Fredric. The Geopolitical Aesthetic: Cinema and Space in the World System. Bloomington: Indiana UP, 1992.

Lovecraft, H.P. The Call of Cthulhu and Other Weird Stories. Ed. S.T. Joshi. New York: Penguin, 1999.

Mieville, China. “Introduction.” At the Mountains of Madness. New York: Modern Library, 2005. xi-xxv.

Popper, Karl R. Conjectures and Refutations: The Growth of Scientific Knowledge. New York: Routledge, 1963.

Sanders, Scott. “Pynchon’s Paranoid History.” Mindful Pleasures: Essays on Thomas Pynchon. Eds. George Levine and David Leverenz. Boston: Little, Brown & Co., 1976. 139-59.

St. Armand, Barton Levi. The Roots of Horror in the Fiction of H.P. Lovecraft. New York: Dragon Press, 1977.



The Ballad of Joss and Sir Terry, Genre Warriors

More Cabin in the Woods musings. The recap of the Game of Thrones finale will be up soon, promise. (And by “soon,” I mean late tomorrow or early Wednesday. Nikki is currently on the road, and I will be as of tomorrow).

As often happens with my blog posts, this one grew in the telling. For the actual discussion of Cabin and its relation to Terry Pratchett, you want to skip about halfway down.

When I was casting about for a topic to propose for my Slayage paper, I settled on Cabin because it seemed to fit vaguely with the broader research I’ve been doing on fantasy and humanism. As I have watched and re-watched the film and worked through my arguments, it has become clear that it doesn’t fit vaguely so much as it fits perfectly—and it has helped me focus and hone my more general thinking even as I sharpen my argument for this paper. I love serendipity.


One big thing that unfortunately won’t make it into the conference paper is just how reminiscent Cabin is of Terry Pratchett’s writing, most specifically his novel Witches Abroad. It should not perhaps be surprising, as both Joss and Sir Terry are in the business of upending generic expectations and critiquing the ways in which genre tends toward reductive formulae that, while working within the genre’s peculiar logic, ultimately come to defy common sense. In The Political Unconscious, Fredric Jameson likens the evolution of genre to the process of sedimentation: when a genre is new—that is, before it is identifiably generic—it is radical and possibly revolutionary and comprises a fresh and unique form of representation. As the form is repeated in various iterations over years and generations, it creates its own set of expectations, and what was once fresh and new becomes ossified as new layers of sediment are laid down.

Which is not to say that all genre fiction, film, and television is reductive or formulaic, just that much of it is. Joss Whedon has frequently averred that he first conceived of the idea for Buffy the Vampire Slayer when watching a typical slasher film and wishing that the ditzy blonde who always dies first would instead turn around and beat the crap out of the would-be killer. The ossification of genre reinscribes narrative patterns and character behavior to the point where—as we all know from horror films—people make choices that make literally no sense. Run back upstairs from the bad guy? Of course. Go make out in a creepy forest after hearing about an escaped serial killer on the news? Why not! You heard a weird noise? Let’s have sex!

There are a number of reasons why enough academics adore Joss’ work to sustain a conference and a peer-reviewed journal; first and foremost is the application of his irreverent sense of humour to a genre that is not merely regressive but frequently retrograde in its portrayal of women, gender roles, and sexual politics, to say nothing of its deeply conservative moral universe. Indeed, the very title Buffy the Vampire Slayer, which served and still serves to make people dismiss it out of hand, is itself typical of Joss’ approach: it subverts expectations by elevating the character we expect to be comic relief and an early victim to the role of hero.

But integral to Joss’ work is the very humour I mentioned above, which is not (as one might expect from the title) parodic or satirical, but often takes the form of a studied irreverence in the face of the terrible (“terrible” in the truest sense of the word). Call it “strategic snark,” if you like: though both Buffy and Angel manage to be frequently scary and even horrifying, the soul of the shows lies in how the main characters are constantly unimpressed by monsters, demons, and various other supernatural beings who demand terror and awe.



Buffy: So let me get this straight. You’re… Dracula. The guy. The Count.
Dracula: I am.
Buffy: And you’re sure this isn’t just some fanboy thing? Because … I’ve fought more than a couple of pimply overweight vamps that called themselves Lestat.
Dracula: You know who I am. As I would know without question that you are Buffy Summers.
Buffy: You’ve heard of me?
Dracula: Naturally. You’re known throughout the world.
Buffy: Naw. Really?
Dracula: Why else would I come here? For the sun? I came to meet the renowned … killer.
Buffy: Yeah, I prefer the term Slayer. You know, killer just sounds so …
Dracula: Naked?
Buffy: Like I … paint clowns or something.
(Ep. 5.01 “Buffy vs. Dracula”)

This tendency makes itself felt in just about everything Joss does, and certainly everything he writes. To my mind, the most Whedonesque moment of The Avengers is when the Hulk confronts Loki.

The snark and irreverence of Joss’ work is more than just comedy, as it articulates a very basic human defiance of instrumental and autocratic expectations. It is no coincidence that his work consistently exhibits a deep suspicion of and antipathy to powerful, conspiratorial groups and organizations dedicated to control, manipulation, and surveillance: the Watchers’ Council and the Initiative in Buffy, Wolfram and Hart in Angel, the Alliance in Firefly, the Rossum Corporation in Dollhouse, and SHIELD in The Avengers and Agents of SHIELD. And it doesn’t make much difference when the organizations are ostensibly on the side of the good guys; they are still treated with fundamental ambivalence.

I suppose if this post has a thesis (besides the obvious observation that Joss and Sir Terry are awesome), it is that Joss’ resistance to generic expectations allegorizes this similar resistance to instrumentality. And what makes his work fundamentally humanist is that he does not oppose the heroic individual against the faceless collective—he is no Ayn Rand—but rather the village. Or, well, the symbolic village: the small group of people representing an often ad hoc commingling of strengths and flaws. Whedon heroes are never weaker, never more alienated than when they eschew the village to strike out on their own (as happens with Buffy about twice a season). The Scoobies, Angel Investigations, the crew of Serenity, the Avengers … Malcolm Reynolds is no ubermensch, Buffy no uberfrau, for the simple reason that they flag and fail when flying solo.

All of which makes The Cabin in the Woods such an interesting addition to the Whedon canon. As he and director Drew Goddard have said, they were interested in creating, if not a corrective to recent trends in horror, something that would put their spin on the genre—at once acting in homage by making allusions to literally dozens of classic horror movies, but also critiquing what I characterized above as the ossifying tendencies of genre (OK, I’m putting words in their mouths here a bit—neither of them used the word “ossify”). As mentioned in my previous post, Cabin takes the cliché story of a group of college students encountering murderous monsters on what was supposed to be a party weekend in the wilderness, and frames it as an event entirely contrived by a top-secret, vaguely military, conspiratorial agency that manipulates them into precisely that cliché horror narrative, for the purposes of turning them into a ritual sacrifice to the “Ancient Ones”—old gods now dormant beneath the earth who demand bloodletting in exchange for their quiescence.


The crux of the film is the way in which the framing device—the conspiratorial apparatus—dramatizes the artifice of horror film clichés, the deformation of characters into types. The ritual demands that five types be submitted to the slaughter, as the Director (Sigourney Weaver) explains in the film’s final moments:

There must be at least five. The whore: she’s corrupted. She dies first. The athlete. The scholar. The fool. All suffer at the hands of whatever horror they have raised. Leaving the last: to live or die, as fate decides. The virgin.

The Director enumerates familiar stock characters from many, many horror films (and the films of John Hughes, but that’s another essay). Except that in Cabin, the five characters are artificially forced into those roles by chemical means. Jules (Anna Hutchison), the ostensible whore, is characteristically blonde—though we learn at the very beginning that she only bleached her hair recently, and we further learn that the product had been doctored by the technicians to modify her behavior. Far from being a whore or a slut, she is in what appears to be a stable and loving relationship with Curt (Chris Hemsworth). Curt is on full academic scholarship, and in the opening scenes coaches Dana (Kristen Connolly) on the best books to read in a class she’s taking: “Seriously, Professor Bennett covers this entire book in his lectures. You should read this … now, this is way more interesting. Also, Bennett doesn’t know it by heart, so he’ll think you’re insightful.” But as the film goes on, he behaves increasingly like a testosterone-laden meathead, egging Jules on as she dances provocatively for the group and prompting pothead Marty (Fran Kranz) to protest, “Since when does Curt pull this alpha-male bullshit? I mean, he’s a sociology major.” Marty himself is the film’s Cassandra, the one who ironically fits most naturally into his role as the fool—ironically because he is the sole person unaffected by the technicians’ pharmaceutical interventions (his high-octane pot, we learn, renders him immune), and because he is the one who utters the common-sense protests most often heard from horror-movie audiences. The “scholar” is Holden (Jesse Williams), who is, in contrast to Curt, actually an athlete, a recent and much-desired addition to their college’s football team. And yet later in the film he dons spectacles and translates the Latin in the diary of Patience Buckner.
And Dana, the ostensible virgin, is actually no such thing: we learn at the outset that she has recently emerged from an affair with one of her professors … a fact that does not deter Curt from later obnoxiously urging Holden to “de-virginize” her.

Throughout the film, the characters’ behavior is manipulated by the technicians. When Jules resists Curt’s urging to have sex out in the forest, they raise the temperature, make the lighting in the clearing more romantic, and release pheromones. And when one character makes the totally commonsensical suggestion that everyone stick together? The technicians release a gas, and he promptly reverses himself, announcing that they should split up to cover more ground.

Implied by this square-peg, round-hole approach (“We work with what we’ve got,” the Director shrugs in response to Dana’s protest that she’s not, in fact, a virgin) is that the artifice of the narrative is more critical than any basis in or resemblance to reality. What is most important is story—that everything unfolds the way it is supposed to, which is to say: the way it has always gone. In the end, Cabin is about resistance to narrative.

Which is where it starts to resonate with Sir Terry, and in particular with Witches Abroad. There is a tendency in the Discworld novels toward events unfolding because of a certain narrative inevitability (or, as I like to call it, the “narrative imperative”). In Moving Pictures, in which the Discworld gets the fantasy version of the silver screen, a fifty-foot woman picks up the Librarian of Unseen University—who is a very large orangutan—and feels mysteriously compelled to carry him to the top of a tall tower. The blurb on the back of the recent Snuff, which is about Watch Commander Samuel Vimes taking a long-overdue vacation, reads: “It is a truth universally acknowledged that a policeman taking a holiday would barely have had time to open his suitcase before he finds his first corpse.”

And so on. But it is in Witches Abroad that Sir Terry really addresses this theme of narrative inevitability. The three witches of Lancre—Granny Weatherwax, Nanny Ogg, and Magrat Garlick—travel to the city of Genua (the Discworld New Orleans) to confront a fairy godmother determined to play out a Cinderella story no matter what the costs. As they travel, they encounter a trail of stories left in the fairy godmother’s wake, the most poignant of which is a version of Little Red Riding Hood, in which the big bad wolf welcomes death because he has been driven mad by the need to play the part ordained for him by the story. And in a more comic encounter, Nanny Ogg dons her brightly coloured striped tights only to have a farmhouse come crashing down on her head, followed by a bunch of confused dwarfs wondering why they feel compelled to sing.

Stories’ “very existence,” Pratchett writes at the start of the novel,

overlays a faint but insistent pattern on the chaos that is history. Stories etch grooves deep enough for people to follow in the same way that water follows certain paths down the mountainside. And every time fresh actors tread the path of the story, the groove runs deeper.

This, he continues, “is called the theory of narrative causality.” What it means is that stories, once told, take a shape, which is why they keep repeating themselves:

This is why history keeps on repeating all the time … So a thousand heroes have stolen fire from the gods. A thousand wolves have eaten grandmother, a thousand princesses have been kissed. A million unknowing actors have moved, unknowing, through the pathways of story.

This is a typical Pratchettian gesture: the Discworld novels started as parodies of fantasy fiction’s more egregious tendencies, but have evolved into a consistently trenchant humanist critique of absolutism and authoritarianism, and valorize pragmatism as both a simple virtue and philosophical system. His theory of “narrative causality” allegorizes the way in which custom can calcify and in the process come to be understood as inevitable. In this respect, Pratchett offers a useful rubric for reading Whedon’s central trope in Cabin. The narrative determinism as described in Witches Abroad is fundamentally similar to the ritualistic repetition of generic horror plots. In both cases there lies at the heart of the texts a resistance to transcendental logic. In Witches Abroad, one of the objects of Pratchett’s critique is fantasy’s tendency to rely on the dual crutches of prophecy and destiny; the stories are not preordained or divinely guided, but establish patterns through retelling, until “It is now impossible for the third and youngest son of any king, if he should embark on a quest which has so far claimed his brothers, not to succeed.” Pratchett’s theory of “narrative causality” is an inversion of the transcendental conception of destiny, fate, and predestination. It is also a far more complex and contingent one: not abandoning the notion of destiny altogether, but figuring it rather as inevitability wrought of repetition and iteration, and however deeply entrenched, ultimately disruptable. Destiny then, in Pratchett’s hands, becomes practically synonymous with genre. That is to say, the narrative expectations Pratchett describes in Witches Abroad—and their inevitability—effectively reflect the way generic expectations govern the telling and retelling of certain kinds of story. Hence destiny in Pratchett’s figuration is not an absolute, externally imposed by a transcendent power, but patterns of behaviour and custom wrought of our own making.


Phew. OK, that’s as far as I want to go with this one. If you made it this far, I feel as though I owe you a beer …


Filed under film, what I'm working on

How Many Children did Lady Macbeth Have in the Cabin in the Woods?

To the long list of reasons why I love my job, you can add this one: in a week I’m going to Sacramento, CA to present a paper at the biennial Whedon Studies Conference. Yep, that’s a thing. And as much as it might sound vaguely comic-con-ish, it’s actually a serious academic conference that has been happening for twelve years. It should not be such a surprise, really, considering the fondness (and by “fondness,” I mean “obsession”) that a great many academics have for Buffy the Vampire Slayer and Joss Whedon’s other creations.

My good friend Nikki Stafford, whom you may remember from such roles as my partner in crime in our Game of Thrones posts, has been bugging me to go for years. She has been to most, if not in fact all, of the conferences so far, at least once as a keynote speaker. And given that I am on the cusp of my very first sabbatical (yet another reason to love my job), I thought what the hell … it’s time.

And it’s also in Sacramento, which is just down the road from San Francisco. So, double score.

I’m presenting a paper on The Cabin in the Woods, a film Joss wrote and Drew Goddard directed, which was filmed in 2009 but didn’t get released until 2012 because of distribution issues. If you haven’t seen the film, go watch it right now. Or at least stop reading this blog post, because SPOILERS.

For those who haven’t seen it but laugh in the face of spoilers, the premise of the film is that a group of five college co-eds go to a remote cabin, where they encounter in the basement a variety of creepy tchotchkes. Each of these objects, handled in a certain way, will summon a specific monster (or cluster of monsters) that will kill the five in succession. As it happens, Dana (Kristen Connolly) inadvertently summons the Buckner clan, a family of “zombified pain-worshipping backwoods idiots,” by reading a Latin incantation out of the journal of Patience Buckner.


The conceit of the film, and what makes it a brilliant inversion of the horror genre, is that the five main characters are being manipulated by a clandestine group of technicians in a hi-tech facility underneath the cabin. The whole point of having the five oblivious co-eds play out a cliché horror movie narrative is to make them a sacrifice to the “Old Gods”—ancient, powerful beings who pre-existed humans and who demand ritual sacrifice, without which they will rise from their slumber and destroy the world. And so a sort of global conspiracy has arisen, with different countries performing their own sacrifices but all essentially working together as a fail-safe, so if one fails others will succeed and keep the stroppy Old Ones quiescent.

My paper, which is still in the process of being written (hey, I started on it a week and a half before flying to the conference—this is me being on the ball) is a consideration of the Lovecraftian influences on Cabin, and the ways in which Whedon rewrites H.P. Lovecraft’s “Old Ones” mythos. I’ll post the text of the paper after I’ve presented it … for now, I’m just using this blog to ruminate over elements of the film, and speculations about it, that won’t be making it into the paper itself.

Why won’t they make it into the paper? Because some of the notes I was writing today were veering dangerously close to fan fiction. Cabin leaves a lot of unanswered questions, especially in terms of history, and I found myself today maundering over possible origin stories for the film’s present-day military-industrial conspiratorial scheming. The implicit suggestion is that this ritual sacrifice has been going on for time out of mind—since the dawn of human civilization and before. And because I have one of those minds that can’t help puzzling over such questions, I find myself wondering: how did people get by before the advent of such omniscient technology as is on display in the film? The lead “technicians” Sitterson and Hadley (Richard Jenkins and Bradley Whitford, the latter essentially playing the role as if Josh Lyman had gone into covert ops rather than D.C. politics) nudge their five victims into classic stupid horror movie behavior by releasing chemicals and pheromones, changing the lighting and temperature, and just generally making use of the impressive technology at their fingertips to better facilitate the ritual slaughter. And yet they ultimately fail, as do all the other stations around the world (spoiler). So if this all-powerful technological juggernaut fails, how on earth did previous sacrifices succeed?


Nate Fischer Sr., Fred Burkle, and Josh Lyman.

Of course, once we move into speculation on such an issue, for which there is little or no exposition in the film, we’re engaging in a version of what a former professor of mine called the Lady Macbeth’s children question. Apparently (I’ll have to take his word for it, as I’ve never encountered it myself), once upon a time Shakespeare enthusiasts speculated at length about how many children Lady Macbeth had. The only reference to her (possibly) having had children is when she coldly declares she would kill her own child to win Macbeth the throne: “I have given suck, and know / How tender ’tis to love the babe that milks me: / I would, while it was smiling in my face, / Have pluck’d my nipple from his boneless gums, / And dash’d the brains out.” But from this utterance apparently sprang interpretations of the play based on whether she had actually had children, and how many. Or perhaps it just functions as an exemplar of the fatuity of speculating too far outside the text. Either way, I can’t help doing it with The Cabin in the Woods—in part because I think I have a reasonable case to make. Not in front of a scholarly audience, mind you, but what use is a blog if I can’t use it for dorky speculation?

So: how did humanity fare with the sacrifice before it could build elaborate underground complexes dedicated to carrying it out? One possibility, which I’ll call the Lovecraft scenario, is that secret societies have existed since the dawn of civilization and before, passing the secret of the Ancient Ones down through the generations; in premodern days it was easier to carry out human sacrifices without ruffling the sensibilities of the larger population. And perhaps there were even volunteers to do the ritual dying. Such a scenario is not, after all, too far from the mythology Whedon created for Buffy, in which the Slayer and the Watchers Council stretch in an unbroken line back to Neolithic times.


The other possibility, which I’ll call the Gaiman scenario, suggests that the contemporary technological apparatus facilitating the sacrifice, and its concomitant conspiratorial secrecy—and the intricate process of the sacrifice itself—are products of modernity. Perhaps in ancient times the ritual did not need to be nearly so elaborate, as the larger portion of humanity engaged in worship and sacrifice, and the copia of blood offerings satiated the Ancient Ones. In the final moments of the film, as it slowly dawns on survivors Dana and Marty (Fran Kranz) that they are the subjects of ritual sacrifice, Marty plaintively wonders at the hoops they’ve been made to jump through: “A ritual sacrifice?” he asks. “Great. You tie someone to a stone, get a fancy dagger and a bunch of robes. It’s not that complicated.” In this bewildered comment, I’d argue, lies a key to the film as a whole. Why is this ritual so complicated? In the Gaiman scenario, the Ancient Ones were not necessarily secret but worshipped in various guises, from Marduk to Anubis to Quetzalcoatl to Zeus to Mithras to Elohim; but as humanity emerged from its dark ages, worship became less primal and more formalized, and hence less satisfying to the gods. Here emerged the conspiratorial cabals dedicated to placating them, and as modernity took humanity away from not just primal worship but religion generally, the ritual sacrifice became necessarily more elaborate and sophisticated, to compensate for its infrequency.

All of which is fun to speculate on, but as I mentioned, it veers dangerously close to fan fiction. Yet I would argue that the Gaiman scenario is consonant with the film’s broader themes, and in working it out in my head I think I’ve arrived at the core of my paper’s argument. What is brilliant about Cabin is the way it stages a confrontation between Enlightenment rationality (manifested in the technicians’ military-industrial technology) and what China Miéville has called H.P. Lovecraft’s “bad numinous”: Cabin is rooted in an identifiably Lovecraftian mythos, in which humanity inhabits a thin scrim of ignorance in time and space, insignificant grubs compared to the Old Gods. Lovecraft’s vision is religious in nature, but without the meaningfulness humans glean from a relationship to the divine. The motifs of madness and unreason run throughout his work.

H.P. Lovecraft’s old god Cthulhu: bad, bad numinous!

The conspiratorial nature of the technicians’ ministrations is also key, for the film plays on classic, even cliché tropes of conspiracy and paranoia: the massive yet invisible omniscient organization (as Fredric Jameson notes, the “minimum basic components” of a conspiracy narrative are “a potentially infinite network [and] a plausible explanation of its invisibility”); the fetishization of the technology of surveillance and control; the one paranoid Cassandra (Marty) whose warnings are ignored as the rantings of a lunatic (or in Marty’s case, chronic pothead); but most importantly, the way in which the conspiracy comes to function as a supplemental or substitute religion.


Seeing as how I wrote my dissertation on conspiracy and paranoia, I might as well quote myself making this very point:

[C]onspiracy sometimes seems to have something of the divine to it: “Conspiracy,” writes Don DeLillo, “is the new faith.” Scott Sanders similarly declares, “God is the original conspiracy theory,” and goes on to say that the conspiratorial world is one “governed by shadowy figures whose powers approach omniscience and omnipotence.” In Totem and Taboo, Sigmund Freud relates the character of the paranoiac to primitive societies (“savages”) who ascribe to their god-king persecutory powers of weather and plague; he makes an identical argument in The Psychopathology of Everyday Life. And philosopher Karl Popper suggests that “the conspiracy theory of society” is simply a form of perverse theism, of “a belief in gods whose whims and wills rule everything.”

Hence, conspiracy narratives frequently have something of the bad numinous at their center, manifesting symbolically as the suggestion of a continuity with a conspiratorial past—or more broadly, with the positing of history as conspiracy. Or to again quote Jameson, the symbolic force of conspiracy narrative “draws not on the advanced or futuristic technology of the contemporary media so much as from their endowment with an archaic past.” He’s speaking specifically here about The Crying of Lot 49, but the point holds for a surprising number of conspiracy narratives—as indeed it obviously does for The Cabin in the Woods.

Wait, I think I’ve seen this movie before … where’s Ash when you need him?

Cabin’s unique twist, however, is that while typical conspiracy narratives constitute a substitute theism and draw symbolic force from the suggestion of continuity with an archaic past, Cabin’s conspiratorial apparatus is explicitly established as being in the service of an extant (albeit secret) theism; and while I can speculate on its continuity with an archaic past, as I did above, the film itself sets up the conspiratorial organization in symbolic opposition to that past as manifested in the Ancient Ones. And yes, I do mean in opposition, for while the technicians’ conspiratorial network—which is ostensibly global—is in the service of the Ancient Ones, that service is explicitly a matter of abject submission. The climactic sequence when Marty and Dana release all of the monsters in the technicians’ bestiary—which of course then proceed to horrifically kill all of the people in the underground lair—drives this point home with literal vengeance (and is, indeed, a characteristically Whedon gesture: the weaponization of the supernatural and its tendency to backfire feature prominently in Buffy season four, in Firefly with River’s “modifications,” and in the Heroes-esque attempts to contain emergent superpowers in Agents of SHIELD).


Yup. It gets a bit messy when you unleash hundreds of bloodthirsty monsters all at once.

To return to why I think the Gaiman scenario holds water in all this: the contemporary moment of the film allegorizes the divorce of instrumental reason and the numinous (bad or otherwise), even as rationality in the form of technology not only submits to unreason (the Lovecraftian Ancient Ones) but produces it (the murder of innocents). If we consider the evolution of Cabin’s ritual as the gradual distanciation of science and reason from the numinous, the film becomes a potent allegory for the ossification of science and religion into incommensurability, with neither providing a rational, humanist moral center.


At any rate, that seems to be where my paper’s argument is going … now if I can just get there without having to ask how many children Lady Macbeth had in the cabin in the woods, it might just work.


Filed under film, what I'm working on

Much Ado About Joss Whedon


I finally saw Joss Whedon’s rather charming and whimsical Much Ado About Nothing last night, which he filmed in secret entirely at his house and sprang on an unsuspecting world a little more than a year ago. The secrecy and the casual, house-party way in which it was filmed—over a scant two-week period, and casting a group of Whedon regulars—mesh well with the general feel of Shakespeare’s play. Much Ado is an idle play set in a moment of respite and recuperation, with Prince Don Pedro of Aragon taking up temporary residence in the home of Leonato, the governor of Messina, in the aftermath of a war just ended. And of course, hijinks ensue: young Count Claudio is in love with Hero, Leonato’s daughter; the Prince’s disgraced bastard brother Don John seeks only to revenge himself on everyone; and Leonato’s niece Beatrice engages in a “merry war” with the Prince’s adjutant, the witty but misogynistic Benedick. In these idle days of relaxation, as everyone awaits the upcoming wedding of Hero and Claudio, two plots unfold: Don John’s scheme to disgrace Hero and ruin the nuptials, and the Prince’s to make Beatrice and Benedick fall in love.

The 1993 film version by Kenneth Branagh, starring himself and his then-wife Emma Thompson as Beatrice, remains one of my favourite pieces of comfort-food cinema. Filmed in Tuscany, the scenery alone is worth the price of admission; but even more spectacular is Shakespeare’s dialogue realized by two of the greatest Shakespearean actors of this or any generation.

The film isn’t universally brilliant: Robert Sean Leonard’s performance as Claudio is so wooden they might as well have given the role to a table, and the less said about Keanu Reeves’ Don John the better. But one of my favourite performances is Denzel Washington as Don Pedro.

All this is by way of saying that Joss Whedon had his work cut out for him, at least as far as this viewer was concerned—not because I expect him or his actors to live up to the RSC standard, but because it is difficult for me to listen to the language of Much Ado without hearing the 1993 version in my head.

Which is why Whedon’s intimate, understated film is so good. There is a distressing tendency to overplay Shakespeare, to shout lines from the rooftops that are best treated with subtlety (I’m looking at you, Stratford Festival) … Kenneth Branagh has at times been guilty of this himself, but he mostly knows when to rein things in (or perhaps that was Emma’s influence—anyone else notice how his filmmaking did a nosedive after they split?).


Alexis Denisof as Bennedick and Amy Acker as Beatrice

Joss Whedon, however—as just about any fan of Buffy the Vampire Slayer or Firefly or, really, any of his work can attest—has an exceptionally good ear for dialogue, for its rhythms and nuances. That he would do well with Shakespeare’s, especially in a play loaded with so many double entendres and ironic quips, should be no surprise. His Much Ado feels pretty much exactly like what it is: a weekend house party of old friends, which is ideal for the tone of the play. Given that the entire film takes place in a single house, it has the feel of a bottle episode of a TV show (albeit with a much larger ensemble), and this creates a sort of intimacy, wanted and unwanted: characters constantly have tête-à-têtes in the various rooms of the labyrinthine house, but are also frequently observed by other characters—at times by design (such as when Beatrice and Benedick “overhear” the others talking about them), at others covertly, as when Don John and his lackeys spy on the proceedings. As Shakespearean scholars will tell you, there is a pun in the play’s title: “nothing” in Elizabethan England was pronounced “noting,” and this is indeed very much a play about people noting things about other people. Nothing much happens that is not observed.


Things being noted.

I confess I was worried that some of the actors would have difficulty with the language; and indeed, the performances were uneven overall, which I have to assume was due at least in part to the short filming time and not much opportunity to really rehearse. No one was abysmal, however, and the cast was a smorgasbord for devoted fans of Whedon, as all of the lead characters are played by actors who have appeared in his television shows and films. Alexis Denisof (Buffy, Angel) was perhaps the most disappointing: he never quite fell into the rhythm of the dialogue, and his timing was less than perfect. It is perhaps telling that his best moments were broad comedy, such as when he is showing off for Beatrice, doing pushups and crunches. In contrast, Amy Acker (Angel, Dollhouse, The Cabin in the Woods) is a lovely surprise as Beatrice, more at home with the Shakespearean language than anyone else in the cast. She grasps both Beatrice’s acerbic wit and her tamped-down loneliness, and communicates with wonderful subtlety the deep and smoldering attraction she harbours for Benedick (something not, alas, mirrored in Denisof’s stilted performance). Whedon chooses to start the film with a wordless prologue, in which Benedick sneaks out of Beatrice’s bedroom, hesitating a moment at the door while she pretends to sleep. Establishing that the two of them have this prior relationship provides a welcome subtext for the action that follows: the assumption being that Benedick steals away to go off to war, not, perhaps, expecting to return—which gives more depth to his later line “When I said I would die a bachelor, I did not think I should live till I were married,” and a double meaning to Beatrice’s comment “I know you of old.”


Amy Acker as Beatrice, Jillian Morgese as Hero.

The other vague disappointment (though only vague) was newcomer Jillian Morgese as Hero. She wasn’t bad, precisely; she’s just more of a nonentity, which I suppose we have to partially blame Shakespeare for, as Hero is more ornamental a character than anything else (when even Kate Beckinsale suffers this problem in the 1993 version, one has to look less at the performance and more at the text). Sean Maher (Firefly) is good in the somewhat thankless role of Don John, suitably villainous without resorting to overt mustache-twirling; Reed Diamond (Dollhouse) is stolid and noble as Don Pedro; Fran Kranz (Dollhouse, The Cabin in the Woods) is vaguely adorable as the gormless Claudio, especially considering how incongruous it feels to have the stoner Marty from Cabin well-groomed and well-dressed.


Clark Gregg as Leonato

I was particularly happy to see Agent Coulson himself, Clark Gregg (The Avengers, Agents of SHIELD) in the role of Leonato—I’ve been fond of him as an actor for a long while, mostly for his Aaron Sorkin roles as the eleventh-hour savior in Sports Night and the recurring spot on The West Wing as FBI special agent Casper. Gregg is an actor with a talent for conveying quiet authority and mischievousness simultaneously, and Whedon uses him to great effect this way in Much Ado.


Tom Lenk as Verges, Nathan Fillion as Dogberry.

Perhaps unsurprisingly, however, one of the highlights was Captain Tightpants (Nathan Fillion, for those of you unfamiliar with Firefly) in the role of the constable Dogberry. Perennial Whedon geek Tom Lenk (Buffy, Angel, The Cabin in the Woods) is his deputy Verges. In this modern setting, Dogberry and Verges and their hapless underlings are hired security, ensconced in a basement room with closed-circuit monitors. What I said above about how wonderfully understated Whedon has made this film is particularly striking with the rude mechanicals: Fillion resists the urge that has claimed so many other actors in this role, namely to play it with slapstick bombast. His Dogberry is measured, serious, and precise—all of which makes his self-important malapropisms that much funnier. He and Lenk play their “watchmen” as rent-a-cops who desperately want to be cops, right down to some CSI: Miami-worthy sunglass acting in the final scenes.

On the whole? Don’t expect RSC-level performances, and you’ll be fine. Joss Whedon always impresses me with his shrewd intellect and his subtlety (even when blowing up New York), and Much Ado is no exception.


Filed under film

Ender’s Game and Empathy


Warning: the following contains spoilers for Ender’s Game.

If I had to sum up my reaction to the film adaptation of Ender’s Game in a single, Simpsons-inspired word? Meh.

I suppose it shouldn’t come as much of a surprise that the film was underwhelming, any more than I should be surprised at how un-disappointed I was by this fact. I know there are many who, repulsed by Orson Scott Card’s homophobic rhetoric, want the film to bomb; I also suspect that there are many devoted fans of the novel who hoped (whatever their thoughts on Card’s politics) that the film would be a triumph. I myself, however, am quite satisfied that the film will likely pass from the public consciousness with nary a ripple, which is to my mind a more potent rebuke to Card’s anti-gay vitriol.

It is tempting to think that the novel is simply not amenable to adaptation, but there were enough high points in the film to suggest otherwise. The overarching problem is that, while generally well made and at points visually stunning, the whole exercise proved rather affectless. I can’t in all honesty lay that at the feet of the actors, either—there weren’t many weak points, and Asa Butterfield did as good a job as Ender as the limited range of his material allowed. In a cast composed largely of child (or young adult) actors, this is no mean feat.


One of the novel’s crucial themes lies in the consideration of what makes a brilliant battle commander and, concomitantly, how to make a brilliant battle commander. Ender Wiggin, we learn at the outset, possesses the ideal genetic balance: intelligence, audacity, ruthlessness, charisma, imagination, and empathy. It is this last element that the novel teases out so brilliantly and the film completely botches—mainly because the novel demonstrates how Ender’s capacity for empathy develops, how it makes him understand his enemies, and how it brings him close to the brink of madness, in a series of well-developed sequences and encounters. The film, by contrast, chooses to elide most of those experiences in favour of having his adult keepers say repeatedly that his empathy makes him brilliant. The moment from the novel—replicated faithfully in the film—in which Ender makes this point explicit, confessing to his sister that his ability to love his enemy is what makes him so effective at destroying him, comes after a long series of protracted war games at the orbital Battle School. The film chooses to truncate those war games into a handful of scenes that are woefully inadequate in communicating Ender’s development as a commander.

ender battle room

It goes without saying that when adapting literature to film, a significant amount of the source material has to get left out. The secondary storyline in the novel about Ender’s siblings Peter and Valentine is entirely excised: Peter, so powerful a character in the novel, is reduced in the film to a few moments of screen time, enough to establish his violent and abusive tendencies. Valentine plays a larger role, though not by much; representative in the novel of Ender’s violent and empathic tendencies respectively, the siblings become little more than ciphers in the film. That being said, it was not this elision that marred the film, but the lack of any sense at all of Ender’s development as a commander: his adaptive abilities, his overcoming of the various stigmas attached to him, his earning of the trust and loyalty of his soldiers, and above all the increasing isolation he suffers as his reputation and authority grow.

And honestly, it would not have taken much to correct this lack. I don’t often argue for films to be longer, but they could have easily added fifteen or twenty minutes to Ender’s Game—especially considering that that extra time would have mostly been devoted to the Battle Room, a zero-G environment in which the recruits learn and practice tactics in the fighting of mock battles. Few people who have read the novel would dispute that the Battle Room is Orson Scott Card’s greatest imaginative creation—to the point where the novel has been put on a variety of military school or officer training curricula in the U.S. It is here that Ender proves his mettle, as Colonel Graff, the commandant of Battle School (played by a satisfyingly gruff Harrison Ford in the film), sends him into battle against increasingly ridiculous odds.

Rendering the Battle Room cinematically was always going to be the most challenging dimension of adapting the novel, and the filmmakers did a superb job—which makes it doubly frustrating that so little of the action unfolds there. The cinematography captured the dizziness and vertigo that I have to imagine would afflict anyone in zero-G.

But again, we don’t get enough of Ender’s development, which I would argue is crucial to the story. Everything that follows after he leaves Battle School for Command School is rooted in the lessons he learns there, and in the profound ambivalence he develops toward his own talent. What makes Ender’s Game such a good novel is its refusal to glorify violence and warfare, which is not to say the battle sequences aren’t thrilling to read. Even (or perhaps especially) the mock warfare of the Battle Room, however, takes a constant toll on Ender physically, emotionally, and psychologically. At the same time, we see the machinations of the military as they manipulate Ender and his fellow recruits, changing the rules of the game on the fly, moving the goalposts, selectively isolating or tormenting (or lauding) their charges, like a huge, elaborate psychological experiment. While some of the officers have scruples, others acknowledge the monstrosity of their project—and, if they are to defeat the alien threat, its necessity. In the novel, Colonel Graff resigns himself to facing charges of war crimes … if they win the war. In the film he phrases it more bluntly, barking at his adjutant that destroying Ender psychologically will matter not at all if they’re all exterminated by the aliens.

Ender’s Game, in this respect, functions as an extended ethical debate: what cost survival? The final bait-and-switch, in which Ender destroys the aliens’ home world while under the illusion that he is playing just one more training simulation, carries so much more force in the novel precisely because we have been witness to his ambivalence and anxiety about his own monstrosity. In the end he becomes a monster in spite of his ambivalence—that he has that decision taken away from him (or that he’s encouraged to make it under false pretenses) is the final, most egregious injury dealt him by those in command. In the film however, in spite of some powerful acting by Asa Butterfield, the emotional impact of the “reveal” (to say nothing of its surprise) is fundamentally subverted.


There have been a number of reviews of the film, and commentaries on the novel apropos of the film’s release, puzzling over this central question of empathy. Most crucially, a lot of reviewers and critics have remarked on a frustrating cognitive dissonance between a novel that celebrates Ender’s capacity for empathy with an alien species and its author, who seems incapable of similar empathy for certain fellow humans. What do we make of that? I raised this question last winter when I taught Ender’s Game in a third-year SF class, and I also asked how we deal with a novel whose author expresses a hateful worldview and advocates criminalizing a sexual lifestyle practised between consenting adults. I’m not really any closer to answering that question now than I was then. An obvious approach would be, as a significant number of people have advocated, boycotting his works (and also this film, which obviously I did not do). It poses a knotty question: do I teach Ender’s Game again? (I had put it on my SF course’s reading list before discovering the full extent of Card’s political activism.) I’m loath to put more money in his pockets, which requiring thirty-odd students to buy his novel would do. But as I hope this review has made clear, Ender’s Game is a fantastic book to teach, precisely because of its ethical dimensions.

A recent article by Jonathan Rauch made what I thought was an excellent point: screeds like Orson Scott Card’s various fulminations against gays and the “gay agenda” might rouse the homophobic passions of a few, but more and more—as LGBT individuals become increasingly visible and vocal, and heterosexual anxiety is thus increasingly allayed—such intemperate assertions are recognized for what they are: paranoid and ludicrous hate. As Rauch writes:

Some of the things [Card] has said are execrable. He wrote in 2004 that when gay marriage is allowed, “society will bend all its efforts to seize upon any hint of homosexuality in our young people and encourage it.” That was not quite a flat reiteration of the ancient lie that homosexuals seduce and recruit children—the homophobic equivalent of the anti-Semitic blood libel—but it is about as close as anyone dares to come today.

Fortunately, Card’s claim is false. Better still, it is preposterous. Most fair-minded people who read his screeds will see that they are not proper arguments at all, but merely ill-tempered reflexes. When Card puts his stuff out there, he makes us look good by comparison. The more he talks, and the more we talk, the better we sound.

Considering Rauch’s words, I have to wonder if the controversy over Ender’s Game isn’t perhaps something of a gift—both for the cause of forwarding gay rights, and for those of us who teach these sorts of things in the classroom. The thought of putting money in Card’s pocket makes me vaguely ill, but I also cannot deny that whatever his politics, the man wrote a damn good novel … and that however much money I make for him by putting it on a course (anyone know the percentage the average author gets in royalties? Times about $12 for the paperback, times, say, thirty-five students?), I have to hope that raising these issues in the classroom more than outweighs the mischief he can do with the $65 or so my class would earn him.
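For the curious, that parenthetical arithmetic can be sketched out; the 15% royalty rate below is purely an assumption on my part (I genuinely don’t know the real figure), as are the round numbers for price and class size:

```python
# Back-of-envelope estimate of the author's royalty from one course adoption.
# The royalty rate is an assumed placeholder, not a known figure.
royalty_rate = 0.15       # assumed fraction of list price paid to the author
paperback_price = 12.00   # approximate paperback list price, in dollars
students = 35             # approximate class size

author_take = royalty_rate * paperback_price * students
print(f"${author_take:.2f}")  # prints $63.00 -- in the ballpark of "$65 or so"
```

Nudge the assumed rate or class size and the figure shifts accordingly, but it stays in the tens of dollars either way.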

Perhaps I’ll even show clips from the film.

Leave a comment

Filed under film, wingnuttery

Two Discussions about Orson Scott Card

Apologies for this blog’s inadvertent hiatus. I actually have an awful lot of things in the hopper, and once classes start I’ll be posting more frequently, with regard to what we’re reading. I’ve got a Breaking Bad post in the works, as well as the long-promised follow-ups to my initial post on fantasy and cruelty. What can I say? It’s summer.

But for today, it’s all about everyone’s favourite SF homophobe, Orson Scott Card.

Why I’ll Go See Ender’s Game

This past winter I taught Orson Scott Card’s novel Ender’s Game for the first time in my science fiction class (which I was also teaching for the first time). I put it on the course list without thinking, by which I mean its inclusion was something of a no-brainer for me. I’d first read the novel about twelve years before and reread it several times since, and I looked forward to the chance to discuss it in a classroom setting. I knew, vaguely, that Orson Scott Card (OSC from here on in) was something of a religious conservative, but as there was no suggestion of that in the novel it never bothered me.

The true scope of OSC’s political and religious conservatism came glaringly to light after I’d put in my book orders for the term, when a number of articles he’d written advocating, among other things, armed revolt against the “gay agenda” and for the recriminalization of homosexuality, received a storm of publicity. Between the buzz about the film adaptation of Ender’s Game in progress and the series of court decisions in favor of gay marriage, OSC’s anti-gay opinions became impossible to ignore, as did his political crusading.

It raised an interesting but fraught problem, one which we addressed at length in class: how do we approach a novel that, in itself, has a great deal of merit, when its author not only holds opinions we find vile and reprehensible, but actively uses his not-inconsiderable wealth and fame to try and marginalize and disenfranchise a certain segment of the population? The opinions themselves are not so much the issue—if we eliminated from our reading and viewing all the work of artists we thought were assholes, we wouldn’t be left with much. But the inescapable fact about OSC is that purchasing his books contributes to his bottom line, both rewarding him financially and augmenting his influence.

There has been a great deal of discussion and argument about this question online—Alyssa Rosenberg, as usual, has some excellent thoughts here and here—with some people advocating for a boycott of the film of Ender’s Game. Though I’ve gone back and forth on the question, I know I will myself go see the film. I’m reluctant to put money in OSC’s pocket, but the past few months have convinced me that all of the publicity isn’t actually doing OSC any favours. Gay marriage, as the expression goes, is an idea whose time has come—and OSC’s very vocal opposition has raised his profile in a way that is starting to impact him negatively. While SF and fantasy fandom is hardly a hotbed of pro-gay activism, it does possess a significant and vocal constituency in that respect, which managed to scuttle a Superman story arc that DC Comics had hired him to write. And Summit Entertainment is being very conspicuous in keeping OSC inconspicuous in the lead-up to the release of Ender’s Game, leaving him off the publicity slate. He has actually become quite toxic, a fact he can’t be unaware of, especially in light of the current popular disgust with Russia’s anti-gay laws and the IOC’s timidity. It makes me wonder: how does an unreconstructed American religious conservative feel, knowing he’s making common cause with Vladimir Putin?

(He recently resigned from the board of directors of the National Organization of Marriage. It’s nice to imagine that this storm of bad publicity led to this resignation, and that he’s retreating from being so vocal in his homophobia, but I suspect not.)

One of the arguments for boycotting Ender’s Game, besides the fact that it will enrich a bigot, is that if the film is a great success, it will validate OSC. I’m far more sympathetic to the first position; far from validating OSC and his opinions, the potential popularity of Ender’s Game will, I suspect, create a cognitive dissonance between that story’s basic humanity and its author’s hateful politics. I say this with a certain amount of confidence, as I know that already happens with the novel—in my SF class, many of my students expressed shock that the person who created Ender Wiggin and crafted such a compelling story could also be so paranoid and irrational. There is always the possibility that some people are or will be so taken with Ender’s Game that they’ll give his anti-gay rants (and his particular species of paranoid batshit generally—see below) some credence. But I have hope that OSC’s raised profile, coupled with an idea whose time has come, will do him and his opinions more harm than good.

Unlikely Events that will Totally Happen

As Dave Weigel observed recently in Slate, OSC’s attempt to keep a lower profile has led the furor over his anti-gay writings to subside somewhat. But, Weigel maintains, this is good, for “the gay marriage foofarah was a distraction from Card’s much more fascinating political paranoia.” He points to a blog post OSC wrote in May titled “Unlikely Events” in which he promises in the first sentence to predict “how American democracy ends.”

Except not really. “No, no,” he protests in his next sentence, “it’s just a silly thought experiment! I’m not serious about this! Nobody can predict the future! It’s just a game. The game of Unlikely Events.” What follows is a lengthy prevarication about the differences between fiction and history. Fiction, he says, depends on plausibility, and the task of the fiction writer is to make a causal series of events not just likely but inevitable. Historians, conversely, require evidence, and the reason prognostication almost invariably ends up being wrong is because history does not have fiction’s convenient form of causation.

Fair enough, I suppose. He goes on to point out that historical lies have a great persistence, because they almost always reinforce the story some people tell themselves about history. Also fair, though the trio of examples he offers are somewhat head-scratching:

Historical lies have great persistence. There are still people who think that Winston Churchill “failed” at Gallipoli; who believe that Richard III murdered his nephews, though the only person with a motive to kill them was Henry Tudor; who believe that George W. Bush lied about WMDs in Iraq.

Oh … where to begin? Right here OSC demonstrates, inadvertently, that the distinction he wants to make between history and fiction is far more nebulous than he allows. Gallipoli was an unmitigated disaster, and it was Churchill’s brainchild. I have yet to read anything claiming that the operation was actually a success, but I’m sure such arguments are out there; and while a lot has been done to recuperate the reputation of Richard III, the question of whether he murdered his nephews is far from settled—it is, indeed, the object of much debate still. (Ironically for OSC’s blithe assertion, the single most influential argument for RIII’s innocence was a novel—Josephine Tey’s wonderful The Daughter of Time). In both of these cases, the “lies” OSC cites have been, and continue to be, matters of debate and discussion.

And the less said about the WMD claim, the better. Moving on.

All of this is in the service of a rather disingenuous throat-clearing—fiction is about causation, history about evidence, and anyone who predicts the future is doomed to get it wrong. BUT … that being said, of course, there is a dire end to American democracy, and OSC, SF writer extraordinaire, has seen it. Or, to quote his post,

Yet this doesn’t mean prediction is useless or meaningless. There were plenty of people who foretold the disaster that Hitler would bring to the world if he came to power in Germany, and those predictions were exactly fulfilled … The only reason people were taken by surprise was that they simply refused to believe (a) what Hitler himself said he would do, and (b) the previous related examples from history.

Hmm. Interesting example to use. Never mind the fact that Obama’s most vociferous opponents love comparing him to Hitler—what I want to lock in here is the idea of people doing what they promise to do. I wish Obama had done what he promised in the 2008 campaign—or, well, more of it. But he hasn’t. And there is a huge, delusional wing of the American right—including our friend OSC—who want to find him guilty of a host of things he hasn’t done, and never promised to do. But keep the thought of promises and avowed intentions in mind, because I’ll be coming back to it.

For now, I just want to laugh with the mirth of the damned at OSC’s dystopian scenario. To quote The Princess Bride, let me explain … no, there is too much—let me sum up. Basically, Michelle Obama will be the president after Barack, and he will continue to reign through her.


Michelle Obama is going to be Barack’s Lurleen Wallace. Remember how George Wallace got around Alabama’s ban on governors serving two terms in a row? He ran his wife for the office. Everyone knew Wallace would actually be pulling the strings, even though they denied it.

Michelle Obama will be Obama’s designated “successor,” and any Democrat who seriously opposes her will be destroyed in the media the way everyone who contested Obama’s run for the Democratic nomination in 2008 was destroyed.

Of course, this is an unlikely scenario—even with the willing and slavish assistance of the mainstream media, which OSC maintains have always been in Obama’s camp—so Obama will need assistance in seeing his dictatorial vision through.

As OSC admits, unlikely. But plausible! Plausible, if you buy the canard that the mainstream media is entirely in the pocket of the Obama Administration, and that their unthinking acquiescence to his every whim translates into similar acquiescence on the part of every member of the Democratic Party (including, presumably, Hillary Clinton—but OSC seems to think that Obama completely destroyed her chances by hanging her out to dry on Benghazi). Of course, this nefarious plan runs up against the fact that there are many right-thinking Americans like OSC. However will Obama overcome their opposition?

By mobilizing inner-city (i.e. black) gangs into a national police force. Seriously:

Where will he get his “national police”? The NaPo will be recruited from “young out-of-work urban men” and it will be hailed as a cure for the economic malaise of the inner cities.

In other words, Obama will put a thin veneer of training and military structure on urban gangs, and send them out to channel their violence against Obama’s enemies.

Instead of doing drive-by shootings in their own neighborhoods, these young thugs will do beatings and murders of people “trying to escape”—people who all seem to be leaders and members of groups that oppose Obama.

Already the thugs who serve the far left agenda of Obama’s team do systematic character assassination as a means of intimidating their opponents into silence. But physical beatings and “legal” disappearances will be even more effective—as Hitler and Putin and many other dictators have demonstrated over and over.

And thus does the Republic die. I read these lines over and get weary at the thought of pointing out the basic flaws in OSC’s scenario, so fortunately I can just link to Dave Weigel’s succinct and searing demolition of it in Slate. I’m less interested here in how absurd it all is than with just how disingenuous OSC is in setting it up. He titles the post “Unlikely Events,” and is careful to point out the fact that prognostication is almost always wrong. BUT … as a fiction writer, etc. etc., and as a student of history—again, etc. etc.—he is peculiarly situated to offer a plausible scenario. Or to put it more succinctly: this will never happen, except that it totally will.

I wouldn’t have thought twice about this piece of paranoid scribbling had it not been for the fact that I’d recently read the new novel Christian Nation by Frederic Rich. The premise is alternative history, positing what might have happened if John McCain had won the 2008 election and, mere months into his presidency, died of an aneurism. Under President Palin (shudder), the United States finds itself on the road to Christian theocracy, culminating in civil war in 2020 and a totalitarian evangelical government.

As a novel, Christian Nation is a miserable failure—principally because it is poorly written, with one-dimensional characters, and a hackneyed and shaky narrative. The premise is intriguing, but requires too much exposition: Rich gives us as one of his principal characters a preternaturally serene and intelligent gay Indian man named Sanjay, who plays the Cassandra role in the years leading up to and immediately following the rise of Sarah Palin to the presidency (again: shudder). I have to assume that the vast majority of people who read this novel, like me, will do so because of the premise and because they bear great antipathy to militant evangelicals. But I promise you that, however much you agree with Sanjay and however much his warnings alarm you, you will be so pissed off with him … because for the first third of the novel, everything he says starts with “But did you know …” and proceeds to enumerate yet another little-known fact about Christian fundamentalist political ambition.

At the same time, as annoying as he gets, Sanjay’s screeds are why you should read the novel. Rich has done his research: the best thing I can say about Christian Nation is that it doesn’t unfold as a liberal fabulation about how we fear evangelical theocracy might happen so much as a point-by-point explication of what they want to do. Sanjay’s irritating conversational tic is the author’s way (clumsily) of communicating the fact that nothing he depicts after Palin’s ascendancy (third time: shudder) is actually out of step with what numerous Christianists from the 1960s onward have called for in books or from the pulpit.

Which brings us back to the OSC statement I highlighted, about how people were only surprised by Hitler because they hadn’t expected him to actually do the things he promised he’d do. This distinction is important, for it emphasizes (ironically) everything wrong with OSC’s post and everything right about Rich’s novel. I confess, when I read Christian Nation, I kept thinking “this is like Left Behind, just for liberals” (and, I’m sad to say, not much better written). The frustrating thing about Christian Nation, in hindsight, is that it would have worked much better as non-fiction … or as a two-part endeavour, which outlined the background of evangelical political desires, and then proceeded to say “let’s imagine …” At least that way, we could have avoided the inane characters and Sanjay’s irritating conversational gambits.

By which I mean to say, at least Rich has based his “unlikely events” in the voluminous series of marching orders evangelicals have been giving the faithful for years. OSC’s fantasy, for all his prevarications about knowing history, is just fantasy. Oh, he gives one piece of “evidence” for his prognostication, vis à vis his urban enforcers:

Obama called for a “national police force” in 2008, though he never gave a clue about why such a thing would be necessary. We have the National Guard. We have the armed forces. The FBI. The Secret Service. And all the local and state police forces.

The trouble is that all of these groups have long independent histories and none of them is reliably under Barack Obama’s personal control. He needs Brown Shirts—thugs who will do his bidding without any reference to law.

I think I’ll let Dave Weigel repudiate this:

This is a revealing bit of craziness, and one you occasionally hear from members of Congress. Obama never called for a “national police force.” In a July 2008 speech he used the words “civilian national security force” to describe how he’d “expand AmeriCorps to 250,000 slots,” “double the size of the Peace Corps,” and “grow our foreign service.” That was five years ago, and he actually failed to do it.

Not to be a snob about it, but anyone looking logically at the Obama record from then to now might notice that he hasn’t actually created a civilian strike force answerable only to him. (How its budget would exist outside of congressional appropriations I do not know).

You know what? Now that I get to the end of this discussion of OSC’s batshit wingnuttery, I’m seriously rethinking paying ten bucks to see Ender’s Game.


Leave a comment

Filed under film, maunderings, wingnuttery

World War Z, or A Spectacular Vindication of my Zombie Thesis


I’ve been making some notes toward a future post that will ask the question “how do you know when a genre reaches saturation point?” If I write it as I’m currently conceiving it, I will look specifically at vampires and zombies and their current glut in popular culture … mainly because that way I get to title the post “vampires vs. zombies!”, but also because, well, it’s hard to think of better examples of saturation. How do we know when we reach that point?

Well, to offer a preview on the zombie end of the equation: beyond simply seeing a new zombie film every week, you’re probably reaching critical mass when there are almost as many quasi-parodies relying on audiences’ familiarity with the principal tropes as there are straight-up examples (Shaun of the Dead, Zombieland); the genre begins to acquire critical respectability by, say, migrating to prestige television (The Walking Dead); the figure of the monster becomes domesticated into a romantic hero (Warm Bodies).

Or … Brad Pitt produces and stars in a high-end zombie film. Yup. When the A-Listers get involved, we’ve reached some sort of tipping point.

Like many, many people, I have been waiting for the release of World War Z with both excitement and trepidation. I read Max Brooks’ novel about four years ago, and loved it. I was actually surprised at how good it was—completely serious and thoughtful, and also remarkably well-written. It is also—for reasons I’ll get into below—a significant revision of the standard zombie/walking dead narrative. It developed, very quickly and deservedly, a devoted following, and probably did more than any other text (including Max Brooks’ earlier book The Zombie Survival Guide) to spur discussion and argument in the multitudinous online zombie forums.

So when there was word that a film was in the works, that of course inspired all kinds of excitement … and then dread, as rumours of struggles with the scripts, with filming, and internecine studio fights accumulated. When the first trailer was aired, a large proportion of Z fans collectively lost their shit, predicting that the film would be absolutely nothing like the novel.

To which I said: well, obviously.

I went to see it this weekend, and though I doubt there are many among those fans who have not yet gone themselves, I will say two things. First, if you’re hoping for an utterly faithful adaptation, do not go see this film. Second, if you’re hoping (or were hoping) for an utterly faithful adaptation, you’re delusional and probably need to seek help.

This review, such as it is, will be in three parts. In part one, I’ll be talking about the novel, and why there is no way under heaven a film adaptation could work. In part two, I’ll talk about the film … and, well, how it kinda works. (Yes, I was surprised too). And in part three, I will talk about my thoughts on how this all fits, thematically and otherwise, into the ongoing cultural phenomenon that is the zombie genre.

I didn’t mean for this post to get so long. So, anyone who has read the novel and/or doesn’t care about my thoughts on it might want to just read part two.

1. Zombies Go Global

As anyone even vaguely familiar with the zombie genre knows, the drama is almost invariably localized—by which I mean, the world of the embattled survivors shrinks down to their immediate environment fairly early on. There are often flashes of a broader crisis caught from sporadic television or radio broadcasts, but before long the wider world goes dark and the scope of the action is reduced to the island of illumination cast by the protagonists. Yet underneath it all is the almost-invariably-unanswered-question of is there anyone else, and if so, where? In many cases, part of the plot hinges on finding the way to safe haven; but even when haven (safe or not) is found, the plight of the broader world remains unknown.

World War Z is a deeply impressive novel for the simple reason that Max Brooks sat down and—systematically and exhaustively—thought through the twinned questions how would a zombie apocalypse happen? and how would the world respond? What he then produced was a series of “testimonials” from around the globe. The conceit of the novel is that an investigator for the U.N., tasked with compiling an “after-action report” some ten years after the end of the zombie war, finds half of what he compiled deleted from the official document as being too influenced by the “human factor,” i.e. too personal and subjective. The “present book” is then a compilation of those deleted elements—personal stories from people around the world telling of their experiences of the zombie war.

As I say, the novel is impressive for its scope of extrapolation: proceeding from present-day political and technological realities and producing a fairly convincing portrait of how a zombie apocalypse might fall out. But it is doubly impressive for how well written it is, for Brooks commits himself to telling no fewer than forty stories—each with a distinct narrator. He doesn’t go all As I Lay Dying on us or anything (which is a blessing), but makes each testimonial subtly different in tone and narration—enough to distinguish between the characters, not so much as to distract from their stories. What emerges is a convincing patchwork of human survival stories, at the heart of which is the (mostly) common theme of community and civic responsibility.

This theme is at once subtle and strikingly at odds with the genre at large. More often than not, the post-apocalyptic scenarios depicted in zombie narratives present harsh ethical questions about survival and sacrifice: who is worthy of inclusion in the survivors’ group, what kind of behaviour becomes dangerous and threatening, to what lengths are we willing to go, as a societal microcosm, to survive? These questions are familiar to anyone who watches The Walking Dead, and they are by no means absent from World War Z—a significant number of early testimonials outline measures taken by governments and agencies the world over to contain and isolate the threat, and then contain and isolate the survivors, including the sacrifice of entire populations and cold-eyed calculations about who is valuable and who is not.

(As an aside, I have little illusion about my own value to society, post-apocalyptically. It occurred to me a long time ago that those best suited to survive are the antithesis of liberal academic types—if there ever is a zombie apocalypse, those who make it through will likely be anti-government paramilitaries and end-timer fundamentalists who all have walled compounds in remote areas well-stocked with canned goods. Evolution weeps).

But while such cold calculations are present in World War Z, the novel tends to concern itself more with the massive reorientation of society and economy necessary to combat the undead threat. A few characters become central voices in this respect, key among them Arthur Sinclair, the director of the United States’ Department of Strategic Resources—formed specifically for this purpose. He says:

[“Tools and talent”] … A term my son had heard once in a movie. I found it described our reconstruction efforts rather well. “Talent” describes the potential workforce, its level of skilled labor, and how that labor could be utilized effectively. To be perfectly candid, our supply of talent was at a critical low. Ours was a postindustrial or service-based economy, so complex and highly specialized that each individual could only function within the confines of its narrow, compartmentalized structure. You should have seen some of the “careers” listed on our first employment census; everyone was some version of an “executive,” a “representative,” an “analyst,” or a “consultant,” all perfectly suited to the prewar world, but all totally inadequate for the present crisis. We needed carpenters, masons, machinists, gunsmiths. We had those people, to be sure, but not nearly as many as were necessary. The first labor survey clearly stated that over 65 percent of the present civilian workforce were classified F-6, possessing no valid vocation. We required a massive job retraining program. In short, we needed to get a lot of white collars dirty.

Sinclair, it is worth noting, is described as the son of an inveterate New Dealer; and though he had rejected his father’s lessons and run “as far away as Wall Street to shut them out,” he found himself using them to harvest “the right tools and talent.” (As an aside: it is beautifully serendipitous that, in the audiobook of World War Z, Arthur Sinclair is voiced by Alan Alda). Sinclair has a recurring voice in the novel, and is reflective of Brooks’ larger communitarian preoccupations.

This hopeful and indeed vaguely utopian dimension to the novel (spoiler: humanity wins) is effectively unique in the genre; while some narratives end on a note of hope (28 Days Later), in many such endings are deeply ambiguous (the original Dawn of the Dead) or at times ironic (Shaun of the Dead). World War Z is, in its very framework, an account of victory and lessons learned.

But it doesn’t get to that point without a significant number of harrowing and thrilling stories told alongside its entirely pragmatic and methodical formulae for fighting the undead hordes. There is a huge amount of dramatic fodder here … but the earliest misgivings about a prospective film echoed my own, specifically: how do you recreate the scale and scope of this global narrative in a two-hour movie?

Well, the bottom line is you can’t. Not to sound like a broken record or anything, but cinema is the wrong medium for anything resembling a faithful adaptation of this novel. The big screen doesn’t work, because everything is necessarily accelerated—sacrificing the meditative, reflective quality of the novel, to say nothing of its reams of different stories, voices, and interlaced narratives. The better vehicle for Z—and I know you see where I’m going with this—would be television. Ideally, HBO or one of the other prestige cable stations … though I suppose a major network might not make too big a hash of it (theoretically). Brooks’ novel, given its shifting voices and narratives, would be much more amenable to episodic, long-form storytelling. One can easily imagine (or I can, anyway) a television series in which each episode features a different testimonial, intercut with the interviewer’s difficulties in traveling around a depleted, post-zombie world.

Of course, such a format would not be amenable to Brad Pitt—not if he was determined to star in it, at any rate.

2. But if you must cram it all into two hours …

Warning: spoilers ahead.

All that being said, World War Z was surprisingly good. As I said above, it will almost certainly offend anyone who demands fidelity to the novel … but as far as it went, it did a remarkable job of splitting the difference.

How does it accomplish this feat? Well, as already mentioned, cinema tends to accelerate things—and a ticking clock is one of the best ways to ratchet up the tension. Rather than have Gerry Lane, Brad Pitt’s doughty and rugged U.N. inspector, travel the world when all has been won to collect stories, you have him racing against the pandemic to collect stories—in the hopes of finding “patient zero” and figuring out how all this began. All things being equal, it’s not that bad a device (and if I can be smug for a moment, I’d more or less figured that much out from the trailer). So chasing what few thin leads he has, he flies with a team of Navy Seals and a brilliant young virologist to South Korea, and then Israel, and then … well, I won’t spoil everything. Not yet. Suffice to say he loses his team, including the virologist, by increments along the way.

We begin with the comfortingly domestic images of Gerry Lane’s suburban life. (The opening scene is actually a nice little nod to the beginning of the 2004 Dawn of the Dead, with an eerily silent entrance into the parental bedroom—except that the children are obnoxiously energetic rather than undead). Traveling later in gridlocked downtown Philadelphia, the family is overtaken by the vanguard of the zombie apocalypse; but fortunately Gerry, only recently retired from the U.N.—and, as it turns out, wanted by them to seek out the origins of the infection—has connections enough to get extracted by a navy helicopter and brought to an aircraft carrier off the east coast.


From there begins his sort-of-global quest to seek out the grail of patient zero. Some of the novel’s flavour of international crisis is retained, though not much—after all, Lane only travels to South Korea, Israel, and finally Wales. The only true fidelity to the novel comes when he visits Israel and interviews a senior Mossad agent, who accounts for Israel’s seemingly-too-convenient zombie preparedness (among other things, completing their wall) by outlining a philosophy of intelligence-gathering I won’t bother to repeat. Indeed, Israel seems like an oasis of peace in the midst of a world gone mad—protected by a high wall, but allowing all uninfected in. The spectre of a common enemy would appear to have obviated hatreds, as we see Arabs and Muslims in significant numbers among the refugees, and one group is happy enough with their saviours to burst into song over a sound system while waving Israeli flags.

Alas, as Gerry Lane learned in South Korea, the undead are attracted by noise. (Lessons he has picked up by this point: the time between infection and zombification is about twelve seconds; infection is spread by bites but not, oddly, by transference of bodily fluids; and the best way to avoid the undead is by keeping quiet). As the grateful refugees’ song swells in volume, the undead swarm the other side of the wall, antlike, creating that ladder of bodies that we saw in the trailers, and which excited such harrumphing among devotees.


The breaching of the Israeli sanctum is at once utterly at odds with the novel, and the most remarkable sequence of the film. Anyone who is a firm advocate for Romero-esque “slow zombies” will probably want to give this film a pass. Not only do the undead sprint as fast or faster than the infected in 28 Days Later and the zombies in the 2004 Dawn of the Dead, they are veritable acrobats, leaping through the air to take down fleeing humans and turning at times into literal tidal waves of undead. (This, too, is at odds with the novel: Max Brooks’ zombies are the classic shuffle-and-moan types).

So, purists may wince, but the Z-film zombies’ speed and tendency toward insectile swarming makes for some truly thrilling cinema. In the film’s third act we revert to a more standard zombie-evasion sequence; if World War Z contributes anything new to the genre, however, it is this imagery of antlike swarming that takes the concept of the “undead horde” to a new level. But I will speak more about that in my third section.

At this stage Gerry Lane has already lost his brilliant young virologist, but that proves to not really matter. For one thing, he discovers that the hunt for patient zero is pretty much a futile endeavour—things have progressed too far to discern the pandemic’s epicenter, and much of the world is, his Mossad contact informs him, “a black hole.” However, the virologist did not die in vain, for he managed to impart some wisdom about the nature of viruses in an elaborate metaphor about how Mother Nature is a serial killer who, “like all serial killers, wants to get caught.” Gerry Lane notices that the zombie hordes puzzlingly ignore some individuals, passing them by like rocks in a stream while attacking others. In a handful of rather overwrought flashbacks, he makes the intuitive leap that zombies will ignore diseased prey … so if people can be infected with dire but not fatal illnesses, they will be safe.

Of course, he has to test this theory, and so (using his U.N. connections) manages to convince the crew of his flight out of Jerusalem to change course and head for a World Health Organization lab in Cardiff. But nothing can be that easy—as anyone who has seen the trailer knows, at some point in the film Pitt is on a plane that becomes overrun by zombies, and which experiences explosive decompression when part of the fuselage blows out. Fortunately, the plane goes down close enough to Cardiff for Gerry Lane and a young female Israeli soldier he saved from infection by lopping off her hand (shades of The Walking Dead, but whatever) to walk from the plane crash to the nearby World Health Organization facility, in spite of his injuries and, y’know, her massive blood loss. They make it to the WHO facility, are treated for their injuries, and manage to make contact again with Gerry Lane’s U.N. overlords. Gerry convinces the resident doctors of the viability of his theory … and of course, being a WHO facility, they have tons of terribly infectious bacteria and viruses squirreled away in a secure lab.

It is here that the film takes its turn into zombie movie cliché … because of course, guess where the lab they need to get to is? If you guessed “in the bowels of a labyrinthine, zombie-infested wing of the facility,” you have just leveled up as a cinema nerd. Congratulations!

Yes, the “B” wing of the WHO facility, which has been cordoned off by the survivors, contains about eighty zombified medical personnel, all of whom are visible on the facility’s closed-circuit cameras. (It was at this point that watching The Walking Dead interfered with the movie experience. Eighty walkers? Send in Rick Grimes and Daryl. They’ll clear those dead sumbitches in a jiff). While I give credit to the film that the sequence that follows—in which Gerry Lane, his new Israeli friend, and one of the WHO doctors (heh) make their way as silently as possible into B wing—is pretty tense and scary, it highlights for me one of the main failings of trying to bring this novel to the big screen: namely, that the larger scope of the novel, its great complexity and nuance, is necessarily lost in the name of making the film, recognizably, a zombie film. Though the sequence is well done, it is not just reminiscent of every other zombie movie, but (perhaps more significantly) of all the video games that have invaded the genre as well. That it all comes down to the protagonist evading the undead in a series of Resident Evil-esque antiseptic, institutional hallways is unsurprising, but something of a letdown after the genuinely innovative Israel sequence.

A few last thoughts before I move on:

  • Mireille Enos is criminally underused in this film. Criminally. Whatever your thoughts on the series The Killing, or on her damaged character therein, she’s a pretty remarkable actress. She really only gets to do anything interesting in the first act as she and Brad Pitt flee with their daughters. But after that, all she does is languish on a naval vessel, looking longingly at the satellite phone he gave her, waiting for it to ring (and at one point nearly killing him when she calls and his phone rings at a really, really bad time).
  • Attention Lost fans: some of the film’s publicity mentions that Matthew Fox, aka Dr. Jack Shephard, is in the film. Which he is. For all of about twenty seconds.
  • Gerry Lane has almost superhuman powers of perception. Besides making the intuitive leap about the camouflaging qualities of disease, he also figures out the twelve-second infection rule when he watches someone get reanimated in the middle of the mayhem in Philadelphia.
  • There is almost no blood in this film. Some zombie devotees have complained about this fact, but I frankly don’t miss it. Gore certainly has its place in this genre, but the most frightening parts of this film had less to do with the prospect of violent disemboweling than with the specter of the horde itself.

3. The Masses as Weapon of Destruction

I’ve been cultivating a theory about the zombie genre and its massive popularity for several years now … which is a little bit like saying I have an idea about why people like ice cream. Zombies are an infinitely adaptable movie monster, and a theory that asserts that the genre is popular because it depicts the nightmare of conformity is no more or less correct than one asserting that people love zombie films because they offer survivalist fantasies. So as long as we can agree that zombies can be anything to anyone (depending on the selection of texts), let me make my case.

Simply put, zombies are a manifestation of our ambivalent and fraught relationship to mass culture itself.

While Zack Snyder’s 2004 remake of George A. Romero’s Dawn of the Dead (1978) bears only a passing resemblance to its source material, it does maintain one crucial element largely absent from the genre at large: boredom. In both versions, survivors of the zombie plague manage to barricade themselves in suburban shopping malls securely enough that they have the leisure to get bored. With all of the malls’ bounty available to them, they want for nothing, and indeed take advantage of it to indulge in Rabelaisian carnival. In both cases these interludes are bookended by crisis and terror, but are for that reason even more gratifying as depictions of unchecked consumption.

In Snyder’s version, after a gratuitous montage of the survivors at play in the mall, the boredom asserts itself as we watch them lounging on the roof, playing a game with the owner of a gun shop down the road. Too far apart for spoken communication, they trade messages via text scrawled on white boards. Having played chess in this manner, they now play a different game in which the mall survivors scan the undead horde milling about below with binoculars, seeking out celebrity lookalikes. Andy, the gun shop owner, then attempts to find the doppelganger and kill it with his rifle. After Jay Leno and Burt Reynolds have been dispatched in this manner, Ana (Sarah Polley) admonishes the loathsome Steve (Ty Burrell):

ANA: You guys had really rough childhoods, didn’t you? A little bit rocky?

STEVE: Hey, sweetheart … let me tell you something. You have my permission, I ever turn into one of those things? Do me a favour. Blow my fuckin’ head off.

ANA: Oh yeah, you can count on that.

Of course, because of the immutable laws of dramatic narrative (the gun on the wall in the first act, as it were), we know that Steve will in fact turn into “one of those things,” and Ana will in fact be the one to blow his fucking head off. But there is a more serendipitous dimension to the sequence now: actor Ty Burrell, a relative unknown when he made Dawn, has since seen great success with the sitcom Modern Family, and has in fact been transformed—not into a zombie, but a celebrity.

While that observation may seem somewhat glib, the celebrity-shooting sequence is deeply suggestive in the context of zombie-saturation in pop culture. One way to read the sequence is as revenge against mass culture, a symbolic expression of the hatred and resentment of celebrity that is the flip side of our fascination with it. The rise of the internet and social media has created a culture of celebrity-shaming that the victims of tabloid rags even twenty years ago could not have imagined, an airing of the collective id that is as pernicious as it is pervasive. That one of the consistent tropes of zombie films is the necessary eschewal of contemporary connectivity and the technology that makes it possible suggests this very hostility to it—but in a rather spectacular form of the return of the repressed, mass culture won’t stay dead, but mobs us and seeks to consume us anew.

This is why the true nightmare of zombie films is not the prospect of a lone ghoul lurking around the corner, but the critical mass of the dead surrounding and swarming the living. Dark corridors and blind corners are a necessary trope in zombie films (case in point, the third act of World War Z as discussed above), but in and of themselves a zombie or two have little to distinguish them from any other movie monster lurking in the shadows ahead. Their threat, ever since George A. Romero transformed them from the golem-like automatons of voodoo legend into the flesh-eating hordes of Night of the Living Dead (1968), rests in their weight of numbers.


Which is why they share a symbolic lineage with other dystopic figurations of the masses, from Shakespeare’s Roman mobs to such philosophical warnings as found in Matthew Arnold’s Culture and Anarchy or Ortega y Gasset’s The Revolt of the Masses. Granted, none of them were overly concerned with zombies per se, but all characterized the mob mentality as mindless, voracious, and dangerous. It is thus not surprising that a consistent trope in the zombie genre is the common apocalyptic gesture of culling, the purging of society that leaves only a handful of survivors. Apocalyptic narratives provide the space for the spectacular individual to emerge, which is a big reason why (I would argue) so many people love to go on at length about how they would survive and what they would do, and to share their plans with like-minded fantasists: everyone wants to imagine that he or she is just that person who would survive and kick some serious zombie ass in the process.


At the heart of the zombie genre is an individualist ethos, one that plays out in tension with the necessity of living within an ad hoc community of survivors—who, if they are in fact to survive, tend to need to keep to themselves. There is little to be gained in the zombie genre by actually finding one’s way to the remnants of military or civic authority—in 28 Days Later it proved disastrous; in The Walking Dead, everyone nearly died when they found safe haven at a CDC facility at the end of season one, and the most recent season pitted the ragged Grimes band against “the Governor,” whose facade of civic peace in the community of Woodbury proved to be a despot’s kindly mask; an encounter with the police in the British series Dead Set ended badly (for the police, fortunately). Wondering whether there is actually a zombie film out there in which encounters with military or political authority don’t end badly, I posed the question to friends on Facebook (hey, I don’t use research shortcuts for this blog), and (1) apparently, I must immediately watch Cemetery Man and Carnival of Souls; (2) the other three suggestions were Shaun of the Dead and I Am Legend—the first of which is a fair point, as the army shows up at the end to save the day (oops, spoiler); and the second of which has a more ambivalent relationship to the structures of official power, but does end with some of the survivors making it to safe haven (um, spoiler again)—and World War Z itself.

Which brings me back to the ostensible subject of this post. What set World War Z apart to start with, and what it does manage to retain to a small degree in the film, is the rejection of the overtly individualist ethos pervading the genre. The novel is all about a certain global collectivity. Tellingly, at one point Max Brooks has one of his characters allude to his book The Zombie Survival Guide as useful, but too focused on American contexts. And as he makes clear, while sacrifices are made in the name of saving civilization, the point is that civilization is saved. And perhaps more importantly, civilization is shown as being worth saving.

I will defend the film against those who say (to quote one random commenter I read just today) that the film “shits all over the source material.” Is it faithful? Of course not. How can it be? But it does a pretty good job of keeping the key themes in place. That being said, the most disturbing aspect of the film is the one I’ve already said is the best—the breaching of the Israeli perimeter. It is in this sequence that, as the title of this post suggests, I found a spectacular vindication of my zombie thesis. In the film’s opening credits, we see a montage of television clips, some from news programs and some from what look like nature shows. Again, I find an echo of the 2004 Dawn of the Dead, which manages to give a glimpse of the global crisis before everything goes dark and the perspective is limited to survivors:

Present in the World War Z opening credits are images of swarming ants, which obviously become significant later in the film. (Does anyone else remember that short story “Leiningen Versus the Ants”?) The undead swarm, and become an unstoppable horde; but what is pernicious is the consonance of these scenes with other such films in which well-equipped, technologically advanced soldiers find themselves fending off the mindless masses of an undifferentiated mob. The Israel scenes in World War Z, effectively, are visually of a piece with films like Black Hawk Down, Aliens, and Zulu.


The Middle Eastern setting visually cues the (white, North American) viewer to the stereotypes of third world cityscapes, and the sequence—consciously or not—cites images of embattled first world warriors fending off (in this case literally) a rabid horde. As Raymond Williams astutely observed way back when he first published Culture and Society (1958), once people started to mass in urban spaces, “mass” very quickly became synonymous with “mob” and the best way to dehumanize people in the modern world was to associate them indelibly with the masses. In the novel, the Israeli solution is a victory for humanity; in the film, as Hendrik Hertzberg points out, the Israeli largess is repaid with what we assume is annihilation—“The scrambling West Bank zombies just keep coming,” he says, and “we are left to infer that everything probably would have still been O.K. if only the gates had been kept shut.”


The political overtones of this sequence aside, the swarming hordes of the undead comprise the other side of the mass culture coin. If Dawn of the Dead revenges itself on celebrity culture, that revenge is short-lived, as the survivors don’t actually, you know, survive (oops, spoiler). And if the hordes of zombies in Dawn, or Shaun of the Dead, or 28 Days Later, or Zombieland all represent the stultified lives of first world consumers, the swarm in World War Z is of a piece with the denizens of Mogadishu in Black Hawk Down—an undifferentiated mass of racial, geographical, and cultural others.

So if I may begin to conclude what is certainly the longest blog post I have ever written … the film adaptation of World War Z retains some key elements of the novel to the point where it does not in fact “shit all over the source text” … but it loses much of Brooks’ innovation. His novel is not quite sui generis—Mira Grant’s “Newsflesh” trilogy (Feed, Deadline, and Blackout) depicts a (more or less) functional post-zombie society—but it remains, as far as I have read and watched thus far, the sole zombie narrative that does not devolve into anti-establishment, absolute individualism. And while the film tries very hard, it does slip unfortunately into zombie cliché.
