I’ve been blogging since summer of 2005, meaning I’ve been at this almost twenty years. My online logorrhoea has spanned two blogs—initially, An Ontarian in Newfoundland (inactive but still on the web), and this one—as well as a handful of posts on Medium. And recently I started a Substack titled The Magical Humanist, which is (or will be) one-half of a collaborative project between me and my wife Stephanie. The other half, still in the early phases, will be a series of video essays on topics pertaining to science fiction, fantasy, and genre fiction (and film and TV) more generally. I’ll be writing and narrating the videos, and Steph will be doing the production and editing, as well as contributing some original art (sample below).
A handful of images from Steph’s slowly growing portfolio: Tolkien with his pipe, a wee cute Aragorn, MAGA Cthulhu, and a somewhat confused orc.
On the Substack page I’ll be posting the transcripts of the videos along with bibliographical references; if the original draft of a given script is too long for the video (which I intend to keep to a rigid fifteen-minute maximum), I’ll post the unedited version. I’ll also be posting occasional essays too long and ponderous for video treatment, as well as shorter pieces detailing my leisure reading and viewing (“Extracurriculars”) and the stuff I’m currently teaching or otherwise working on academically (“Curriculars”).
The Magical Humanist in both its video and textual formats is, properly speaking, a project—by which I mean, it will build, iteratively, on my understanding of what I call “magical humanism,” a philosophy and fictional mode that falls at the intersection of philosophical pragmatism (as embodied by such thinkers as John Dewey and Richard Rorty) and certain precincts of contemporary fantasy (most specifically epitomized by Terry Pratchett).
It’s early days, but I’ve managed to post four pieces so far—two longer essays, one instalment of “Extracurriculars,” and a shorter essay about the virulent response from the MAGAsphere to Bishop Mariann Budde’s plea in her National Prayer Service sermon (with Trump in the audience) for compassion and empathy. Before posting the piece, I wondered: is this appropriate to my page’s subject, and the larger project it entails?
I’ve gotten accustomed to posting occasional rants on this blog and my previous one, even though both had an ostensibly different purpose. I started “Ontarian” on moving to Newfoundland as a way of communicating thoughts on my new life as a university professor in a new province without resorting to the annoyance of mass emails. As the years passed, however (I retired the blog in 2011), and the novelty of my new life wore off, it increasingly became a space in which to variously rant, vent, and ruminate on whatever topics, subjects, or obsessions drove me to the keyboard.
So, in 2011 I switched platforms and started “It’s All Narrative,” which I originally designed to be a more professional space in which to talk about anything narrative-related, which provided a pretty massive rubric. In hindsight, I should have called it “Thinking Out Loud.” (For reasons I’ll get into momentarily, that name change is now on the table.)
I was hesitant about posting the most recent piece to The Magical Humanist because I don’t want it to devolve into a place where I rant about the latest political enormity making me hypertensive again. In the end, I went ahead with it—because ultimately the sentiments spoke to the themes preoccupying The Magical Humanist’s project. And given that magical humanism as a philosophy is largely an attempt on my part to articulate my own moral and ethical objections to an ascendant revanchism of which Trumpism is a signal symptom, it’s unsurprising that three out of my four posts engage in one way or another with the 2024 election.
But still. Because I do want to keep The Magical Humanist mostly on topic, and because I’m loath to retire this blog, I’ll keep It’s All Narrative active (he says in his first post here since September, but who’s asking?) for those times when I am actually just thinking out loud.
TL;DR: this year I’m being unequivocal about students’ use of AI. I’m banning it.
And yes, this is very much an exercise in spitting into the wind, fighting the tide, etc. Choose your metaphor. AI’s infiltration of every aspect of life seems inevitable at this point, but to my mind that makes an education in the humanities in which students engage with “The best which has been thought and said” (to use Matthew Arnold’s phrasing) all the more crucial, and all the more crucial that they do it without recourse to the banalizing effects of AI.
I’m profoundly sceptical of the claims made for AI’s potential, and not just because twenty-five years of Silicon Valley utopianism has given us the depredations of social media, monopolistic juggernauts like Amazon and Google, and a billionaire class that sees itself as a natural monarchy with democracy an also-ran annoyance. In the present moment, the best descriptor I’ve seen of AI is a “mediocrity machine,” something evidenced every time I go online and find my feeds glutted with bot-generated crap. “But it will get better!” is not to my mind an encouraging thought, not least because AI’s development to this point has often entailed the outright theft of massive amounts of intellectual property for the purposes of educating what is essentially a sophisticated predictive text algorithm.
But that is not why I’m banning use of AI in my classrooms.
To be clear: I say that knowing all too well how impossible it is to definitively ban AI. Unlike plagiarism proper, AI’s presence in student essays is at once glaringly obvious and infinitely slippery. With plagiarism, so long as you can find the source being copied, that evidence is ironclad. But AI is intrinsically protean, an algorithmic creation that potentially never produces the same result twice. And yet it’s always obvious, whenever I get an essay that is grammatically perfect but intellectually vacuous. A ChatGPT-authored essay is basically the uncanny valley in prose form: it looks like a reasoned and structured argument, but it lacks the idiosyncrasies and depth that make it, well, human. I’ll say this much for AI—it has pulled off the feat of making me grateful to find an essay riddled with grammatical errors and unreadable sentences. In the present moment, that’s the sign of a student actually doing the work.
But then, that’s the basic point: it’s doing the work that’s important. I recently read an interview with Texas teacher Chanea Bond on why she’s banning AI in her classroom. It’s a great interview, and she helped clarify some of my thinking on the subject. But what depresses me is that her stance is even remotely controversial—that she received a great deal of pushback on social media, much of it from fellow educators who see AI as a useful tool. To which Bond said: No! in Thunder. Which was not a response emerging from some sort of innate Luddism, but from her own experience of attempting to use AI as the useful tool so many claim it to be. “I envisioned that AI would provide them with the skeleton of a paper based on their own ideas,” she reports. “But they didn’t have the ideas—the analysis component was completely absent. And it makes sense: To analyze your ideas, they must be your ideas in the first place.”
This, she realized, was the root of the problem: “my students don’t have the skills necessary to be able to take something they get from AI and make it into something worth reading.” To develop a substantive essay from an AI-generated outline, one must have the writing skill set and critical acumen to grasp where the AI is deficient, and to see where one’s own knowledge and insights build out the argument. Bond’s students were “not using AI to enhance their work,” she observes, but are rather “using AI instead of using the skills they’re supposed to be practicing.”
And there’s the rub. In my lifetime, and certainly in my professional academic career, there has never been a time when the humanities has not been fighting a rearguard action against one form of instrumentalization or another. Tech and STEM are just the most recent (though they have certainly been the most overwhelming). The problem the humanities invariably faces is that it’s ultimately rooted in intangibles. Oh, there’s a whole host of practical abilities that emerge from taking classes in literature or philosophy or history—critical reading, critical writing, communication, language—and perhaps we could do a better job of advertising these benefits. But I’m also loath to play that game, for reducing humanities education down to a basic skillset not only elides its primary, intangible benefit, but also plays into the hands of administrators seeking to reduce the university to a job creation factory. In such a scenario, it’s all too easy to imagine the humanities reduced to a set of service courses—English transformed to basic composition, literature jettisoned; languages taught solely on the basis of what’s of practical use in industry; philosophy a set of business ethics classes; communication studies entirely instrumentalized, stripped of all inquiry and theory; and entire other departments simply shuttered, starting with classics and history and working down from there.
It may appear I have drifted from my original topic, but this sort of thinking is exactly the kind that believes AI can replace critical, intellectual, and creative work. What is both risible and infuriating about this assumption is that whatever sophistication AI has attained has come by way of scraping every available resource to be found online—every digitized book, every image, every video, a critical mass of which is other people’s intellectual property, all plundered to feed the AI maw. (And still, all it can produce are essays that top out in the C range in first year English classes.) I don’t think AI’s cheerleaders have grasped what a massive Ouroborosian exercise this is: as the snake consumes more and more of its tail there is less and less of it. The more we collectively rely on AI to do our thinking for us, the less creative, new, and genuinely revolutionary thought there will be to feed the beast. And as a result, the more stagnant and banal becomes the output.
Last year I reread 1984 for a course I was teaching and was struck anew by the degree to which Orwell is less concerned with totalitarianism’s mechanisms of brutality than with the curtailment of people’s capacity to articulate resistance and dissent. His notorious Thought Police are just the crude endpoint of power’s expression; the more insidious project lies in removing the ability of people to think thoughts worth policing. Winston Smith labours at the Ministry of Truth revising history to square it with the Party’s ever-changing reality, but it is his lover Julia’s work in the Ministry’s fiction department that struck me this time around. Her job, she tells Winston, is working the “novel-writing machines,” which consists “chiefly in running and servicing a powerful but tricky electric motor.” She off-handedly refers to these machines as “kaleidoscopes.” Asked by Winston what the books she produces are like, she replies, “Oh, ghastly rubbish. They’re boring, really. They only have six plots, but they swap them round a bit.” These novels function as a societal soporific, titillating the masses with stories by turns smutty and thrilling, but which deaden their readership by circumscribing what is possible to imagine.
Julia’s description struck me as uncannily prescient, conceiving as it did a sort of difference engine version of ChatGPT. The point of Julia’s work is the same as that of Newspeak, the language replacing English in Orwell’s dystopian future. Newspeak radically reduces available vocabularies, cutting back the words at one’s disposal to a bare minimum. In doing so it curtails, dramatically, what can be said and therefore what can be thought—eliminating the possibilities for resistance by eliminating the ability to articulate resistance. How does one foment revolution when one lacks a word for revolution?
I’m banning AI, insofar as I can, because its use cranks the law of diminishing returns up to eleven. In the process, it obviates the most basic benefit and indeed the whole point of studying the humanities, which can’t be reduced to some abstracted skill set. The intrinsic value of the humanities lies in the process of doing—reading, watching, thinking, researching, writing; then being evaluated and challenged to dig deeper, think harder, and going through the process again. There is of course content to be learned, whether it’s the history of colonialism, the Copernican Revolution, the aesthetic innovations of the Renaissance, or dystopian novels like Orwell’s; all of that is valuable because all knowledge is valuable, but inert knowledge is little better than trivia. Making connections, puzzling out knotty and difficult texts, and most importantly making sense of it to yourself and engaging in dialogue about it with others—it is this process wherein lies the great intangible benefits of the humanities. The use of AI is kaleidoscopic in the sense Julia alludes to in 1984, providing the illusion of variation and newness, but ultimately just a set of mirrors combining and recombining the same things over and over.
My wife Stephanie and I recently returned from ten much-needed days in Barbados. The last time we went on vacation was four years ago—also to Barbados, in the auspicious month of March 2020. So, yeah … we ended up having to come home early when the Prime Minister got on TV and told Canadians abroad to come home. Which sort of cast a pall over the otherwise wonderful holiday.
So we were more than ready for our return. It had been an unusually busy school year, and I was looking forward to one of my favourite activities while vacationing: reading. In the weeks leading up to our departure, I started three piles of books. Pile one was novels I’ve purchased but not yet read; pile three was unread nonfiction; the middle pile was the ever-shifting lineup of books I would bring with me, which changed day by day as I moved titles back and forth between the piles. Based on the fact that we had ten days, I kept the reading pile to what I thought was a reasonable volume, often swapping out two or three shorter texts for one chunky one, or vice versa.
I should add that what I considered a “reasonable volume” of reading evoked profound scepticism from Steph, who felt I was being excessively optimistic in how much I would be able to read in ten days—which to her mind was an issue because of the space the books would take up in our luggage.
(And don’t get at me about the virtues of e-readers. I will stipulate to all those virtues, while still hewing to my luddite ways with paper books. Reading should be pleasurable, reading on vacation doubly so, and I will always love the feel and look of a physical book infinitely more than a slim slab of silicon, no matter how convenient the latter.)
I ultimately settled on four chunky reads: Infinity Gate by M.R. Carey; Sugar in the Blood by Andrea Stuart; Bomber by Len Deighton; and Babel by R.F. Kuang.
And yes: I read them all.
M.R. Carey, Infinity Gate. Carey has rapidly become one of my favourite contemporary SFF authors. He is most famous for The Girl With All the Gifts, a brilliant zombie apocalypse novel that was adapted into a quite good film starring Glenn Close, Gemma Arterton, and Paddy Considine. He followed that up with The Boy on the Bridge, set in the same world with an intersecting storyline. He then wrote the rather astonishing Rampart Trilogy: The Book of Koli, The Trials of Koli, and The Fall of Koli. Like his zombie novels, the trilogy is post-apocalyptic, set in Britain in a distant future in which a catastrophe has almost wiped out humanity, reducing the survivors to premodern circumstances while all of nature has mutated to be hostile to them.
Infinity Gate had been sitting on my shelf since last September. On seeing it I purchased it right away and started reading. But about twenty pages in I forced myself to stop. The school year was shaping up to be the busiest I’ve had in my academic career, and I couldn’t let myself indulge in a long novel (500 pages) that would suck me in. So it sat, unread, until the day before our departure.
I was not disappointed. Though it departs from Carey’s previous post-apocalyptic preoccupations, it nevertheless hits many of the same thematic sweet spots. It begins in a familiar place: a world on the brink of environmental and civilizational collapse. A Nigerian physicist, recruited to a lavishly funded Hail Mary project to discover a last-ditch solution for the world’s imminent demise, invents technology that allows her to move between different possible Earths—a mode of stepping between alternative realities in an infinite multiverse.
I’ve recently been kicking around a post or two musing about the recent prevalence of multiverse-based stories, from Everything Everywhere All At Once to the Spider-Verse movies, so I’ll have a lot more to say about Infinity Gate soon. But I do want to note that one of the great things about Carey’s world-building—aside from the simple fact that a chunky novel has more depth than your average Marvel property—is that he really engages with the implications of an infinite multiverse. Which is to say: every possible iteration of our planet exists, which includes countless timelines in which no version of humanity evolved, or in which the Earth is unliveable, or indeed where the planet simply never came together. On the other hand, there are millions of Earths with variations on primate-evolved humanity, and millions in which the intelligent species evolved from canines, felines, lizards, herbivorous mammals, and so on.
And as we quickly learn, thousands of these other Earths have developed interdimensional travel, and have formed a massive alliance called the Pandominion—into which the Nigerian physicist accidentally trespasses.
Andrea Stuart, Sugar in the Blood. Barbados has always had a special place in my heart, not least because my maternal grandmother’s family emigrated from there to Canada in the first decades of the twentieth century. We can trace our heritage back to the seventeenth century, a fact that, as I matured and learned more about the history of colonialism and the slave trade, inspired an increasing discomfort and ambivalence. We can, for example, locate our ancestors’ plantation on eighteenth century maps. And so, as much as I love Barbados, I’m also always aware of my family’s fraught history there.
Sugar in the Blood tells a similar history, though from the perspective of a Black author whose enslaved ancestor was the bastard offspring of a wealthy planter. Andrea Stuart is a Barbadian-British historian; she traces her family back to her original white ancestor, an English blacksmith who took a chance on the promise of the colonial “adventure” and arrived on the island in 1630. Stuart then follows her ancestors’ struggles and successes, telling the broader history of Barbados, the Caribbean, and colonialism as she goes.
It is not an especially happy or uplifting history. When her white ancestors finally attain a measure of success and wealth after several generations of precarious farming, it comes from the blood and pain and death of countless enslaved Africans who were brought in great numbers to cultivate the cane fields that yielded “white gold,” the sugar that built obscenely large fortunes and filled the coffers of the British crown. When my mother first read this book and called to tell me about it, she accidentally called it Blood in the Sugar. Catching her error, Mom noted ruefully the serendipity of the misstatement, reflecting that it might almost be a more appropriate title. Stuart is frank and brutal in her recounting of Barbados’ centrality in the burgeoning sugar trade and its human cost, describing with unsparing detail the usually short lives of the enslaved as they worked in searing heat to harvest the knife-like sugar cane, as well as the dangers of operating the mills and presses that processed it.
As you might imagine, there is a not-insignificant cognitive dissonance that occurs when reading this history while on vacation in the place it occurred. That history lives in the present moment: in the food and drink, in the local dialect, in the vestiges of the British colonial presence. The sugar industry, though no longer the primary driver of the island’s economy—tourism now comprises the greater share—is still there: cane fields cover much of the island’s interior, and rum remains a key export. And as a twenty-year resident of Newfoundland, I always find it odd to see salt cod on many menus down there—itself a legacy of the triangular trade, as is the ubiquity of rum in Newfoundland.
As Stuart relates, Barbados is a post-imperial success story, having possessed one of the Caribbean’s most stable democracies since it won independence in 1966. And however fraught its history, Bajans are fiercely proud of their island.
Len Deighton, Bomber. I’d never read any of Deighton’s fiction prior to Bomber. The only book of his I’d read was Blood, Tears, and Folly, his acerbic history of WWII; as a systematic debunking of the gauzy mythos of “the good war,” it is second only to Paul Fussell’s Wartime for its unstinting refusal to sentimentalize any aspect of that conflagration. Bomber is very much of a piece, a polemic in the form of historical fiction that takes aim at the Allies’ strategic bombing campaign against Germany.
I picked up Bomber because I’ve been immersing myself in this particular history for almost two years now—research that started as the basis for an article on Randall Jarrell’s war poetry. In characteristic fashion it has mushroomed into an ever-expanding preoccupation from which several possible writing projects are emerging. I’ve now read somewhere in the neighbourhood of seven or eight histories, with a few more sitting on my shelf waiting to be read. (I also, for my sins, subjected myself to Masters of the Air, the recent mini-series about American B-17 crews. I’ll probably have more to say about it in a future post; suffice for the moment to say that, as the third in a series that started with Band of Brothers and continued with The Pacific, the quality of the storytelling has fallen rather precipitously.)
Bomber is the kind of historical fiction that manages to be compelling as fiction while also functioning as solid history. It is a sprawling narrative that encompasses a huge cast of characters, including a group of British Lancaster crews, German night-fighters and their base personnel, the mission planners at RAF Bomber Command, the operators of a German radar station tracking the bombers, and the citizens of a German town that becomes the accidental target of the RAF raid and suffers a Dresden-like catastrophe. All of which takes place within a twenty-four-hour period.
As a work of fiction, the novel is astonishing: Deighton handles the complexity of the narrative deftly and manages to imbue all the numerous characters with depth and nuance. Early on, it becomes clear what will happen at the story’s climax: the bombers will miss their intended target and instead bomb the town we encounter, a town with no military structures or strategic value. Knowing this does not diminish the building tension as the bomber stream makes its way through the night. Just the opposite: Deighton communicates the textures of the town and its people, the good, bad, and ugly, to the point where what finally befalls them is nothing short of horrifying.
As a work of history, the novel is a trenchant indictment of Arthur “Bomber” Harris’ leadership of the RAF Bomber Command. As I’ve written about on this blog previously, Harris was a bloody-minded sociopath who dismissed the American strategy of daytime targeted bombing of military and industrial targets. Leaving aside the fact that the American claims to accuracy were wildly overblown, Harris pursued the “area bombing” of city centres on the premise that this would (1) kill or de-house thousands of German workers, and (2) break the German spirit so that they rose up and threw off Hitler’s regime (for this reason, the strategy also went by the name “morale bombing”). Spoiler: it did not work. At all. It did however result in a nearly 50% attrition rate for bomber crews, the destruction of countless cities and towns (many of which had no military value), and the deaths of hundreds of thousands of civilians—some in such hellish cataclysms as the firebombing of Hamburg and Dresden. Deighton’s description of the effect of just such a firebombing at the culmination of the novel is genuinely harrowing.
R.F. Kuang, Babel. R.F. Kuang is a writer who’s been on my radar for some time, but I’ve only now managed to read any of her work. She is one of a new generation of SFF authors who bring a much greater, and much-needed, diversity to genre fiction. (Or, from the perspective of genre’s reactionary rump, she’s one of the legions of woke writers ruining SFF with her politically correct scolding. See here, and here, and here to get my thoughts on that particular attitude.) She has a trilogy of fantasy novels starting with The Poppy War that I am now eager to read, and more recently came out with Yellowface, a non-genre novel about an Asian-American writer who has her manuscript stolen by her white friend (also sitting on my shelf, waiting to be read).
But Babel is the one I’ve been keen to read since I first read about it. It takes place in the early 19th century in an alternative-history Great Britain in which silver possesses certain magical qualities and the British Empire—approaching the first apogee of its global power—is determined to use it to advance its imperial interests. It is not silver however that is inherently magical—rather, silver is the medium for magic that is effected through language. More specifically, when you take two words in different languages that can mean the same thing, inscribe them on either side of a silver bar, and then speak them aloud, this produces a magical effect consonant with the words’ meaning. The crux however is that the greater the dissonance between the words, the more powerful the spell; or to put it another way, what remains untranslatable is the source of the power. And so in Kuang’s alternative 1830s, language is an exploitable commodity. As the British Empire expands and its influence spreads, this causes a linguistic convergence that reduces the power of spells employing English and European languages. Hence, the Babel Institute (a tower, naturally) at Oxford University labours to learn increasingly exotic tongues even as British military and commercial interests seek out new sources of silver.
To this end, the Babel Institute comprises an unusually racially and ethnically diverse student body, as it sponsors fledgling scholars from around the world whose fluency in their native languages is invaluable. Of course, the privileges and status afforded the “Babblers” (1) do not inoculate them against the bigotry of the university’s rank and file, nor of the town of Oxford at large; (2) come with a series of insidious strings attached; and (3) inspire in many of the students a profound ambivalence as they realize their linguistic talents are being employed in the service of imperial depredations of their home countries.
I won’t spoil the story by sharing any more—suffice to say I’ll be looking for an excuse to include this novel on future courses.
Well, it’s Labour Day—which to me is the real New Year’s Eve. My calendar has been tied to the cycle of the academic year since kindergarten, so the desperate festivities that happen on December 31 have always seemed to me … well, misguided.
This September is of particular note in my family, as my niece Morgan is starting university. My brother Matt, his wife Michelle, and my nephew Zachary (himself just two years out from that big step) drove Morgan out to Kingston, Ontario to get her settled at Queen’s for the next great adventure of her life. As Matt noted, “It was a silent three-hour drive home.” By all reports, Morgan is already thriving, making new friends and signing up to try out for the women’s baseball team. She asked my mother whether it was OK to be both terrified and excited; my mom told her that combination of emotions always signifies something momentous and memorable.
Morgan starting university has an odd symmetry for me. She was born eighteen years ago. I went to meet my new niece at the hospital and then, three days later, hopped in my car to drive to Newfoundland to start my new job at Memorial University. Which means my university career is now old enough … to go to university.
Eighteen years in this job! You’d think that in all that time, as well as the eight years as a TA and sessional lecturer through my PhD at Western, I would have taught a poetry course. But no—this fall will be the first time, as I gear up for ENGL 3262: American Poetry 1922-1968. I was worried it would be undersubscribed, as poetry courses often have been in years past, but it’s full. There’s even one student on the waitlist.
At any rate, I wrote a post on Medium I’ve titled “The Opposite of Poetry.” What, might you ask, is the opposite of poetry? Well, you’ll just have to read it. It’s basically a variation on the introductory lecture I’ll be delivering this week.
I am really looking forward to this course, but also a bit sad. This summer a former professor of mine passed away. I say “former professor,” but I didn’t actually take any classes with Stephen Adams … Stephen was more of an informal mentor, a kind and generous presence. He was technically a specialist in twentieth-century American poetry, with an emphasis on such modernists as Ezra Pound, but really his remit was just poetry in general. All of it. He had an encyclopedic mind and an unmatched ear for the music of verse—something he put to use in his book Poetic Designs, which is to my mind the best guide to poetic structure and prosody there is. What’s more, it’s an eminently readable book, no mean feat when your subject is rhyme and meter.
After he retired, Stephen continued researching and writing, producing the book The Patriot Poets just a few years ago. I designed my course syllabus cover in homage:
The course, indeed, is dedicated to Stephen. There’s an “In Memoriam” section in the syllabus. He and I corresponded quite frequently over the past few years, often when I wanted to pick his massive brain on various poetic topics. I was looking forward to telling him about the course as it unfolded. Instead, I’ll have to be content with carrying his memory forward into the course—both by assigning Poetic Designs as required reading and, more importantly, by passing on to my students the wisdom he shared with me.
WARNING: In case you’re one of the three people in the world who hasn’t seen the movie yet, this post contains spoilers for Barbie.
Barbie ends with Stereotypical Barbie (Margot Robbie, henceforth just “Barbie”), having become human, being delivered to an appointment of some sort by Gloria (America Ferrera) and Sasha (Ariana Greenblatt), the human mother and daughter who have befriended her. They wish her good luck, and she gets out of the car, entering a nondescript building.
My assumption, at this point, was that she was either starting her first day of a new job or arriving at a job interview. I figured it was decent odds the job would actually be at Mattel™. But after almost two hours of watching Greta Gerwig’s exceptionally smart film, I should have known better; I should have been better prepared for Barbie telling the receptionist that she was there to see her gynecologist.
A simple job or job interview would have been pedestrian; a job at Mattel™ would have been too neat, too much of an obvious endorsement of the naïve utopianism embodied by the Barbie brand. Perhaps more significantly, such an ending would provide a sense of continuity undermining the film’s more shrewdly subversive elements. By contrast, meeting with an OBGYN is a rather more startling note on which to end, though of course it shouldn’t be—for someone suddenly endowed with a functioning reproductive system where none had been before, it makes perfect sense.
And while gynecologists cover a wide range of care not necessarily specifically related to childbirth or conception, the obvious implication is that Barbie is either pregnant or seriously considering it.
[EDIT: Got a lot of pushback on this assertion, which is fair: whatever my qualifier, it’s not necessarily “the” obvious implication that Barbie’s in baby-making mode. I should say rather that it’s a possible implication, or even just go with the indefinite article and say it’s an obvious implication. But yeah, my own lack of any necessary interaction with a gynecologist makes me a wee bit myopic here. Unfortunately, I can’t think of any Ken-based puns about male obtuseness about obstetric care.]
There are a few ways to read this ending. A less charitable reading might see it as a betrayal of the feminist messages otherwise powerfully conveyed in the previous two hours of cinema—seeing it, in other words, as a normative gesture suggesting that, once human, Barbie is compelled to submit to a biologically determined destiny of motherhood.
That is one way to read it. I would disagree, however. Quite emphatically. (To be clear, I have not seen this interpretation anywhere. So far as I know, it’s a straw man of my own creation.) Another tack would be to see Barbie’s prospective procreation as consonant with the way the film explores the theme of mothers and daughters: the characters of Gloria and Sasha comprise the story’s moral and emotional core, with Barbie and her various other iterations providing a comic but poignant symbolic counterpoint to the fraught and complex question(s) of how to be a woman in the world. Barbie’s transformation into a human, after all, happens when she meets her creator and symbolic mother Ruth Handler (Rhea Perlman), whose last name she assumes.
There’s a lot going on in this movie, and I’ll be getting to a fraction of it in this post. I would never have imagined that a film about Barbie, made under the auspices of Mattel™, would or could contain multitudes. But here we are. A friend and colleague of mine posted to Facebook that he’d never have imagined that he’d emerge from the theatre thinking about all the ways he’s now planning to reference Barbie in his literary theory class this fall. I had an identical reaction: given that I’m teaching an introduction to popular culture this fall and a seminar on postmodernism in the winter, there’s going to be a lot of Barbie popping up in my lectures.
I might write posts along those lines in the future, but at the moment it’s my upcoming utopias/dystopias course with which the film is really resonating. (I actually just emailed all my currently enrolled students that their summer homework is to go see Barbie. Given that there’s a reasonably good chance most of them already have, it’s not really much of an ask.)
This will be the second time I’ve taught the course, though it will be the first time I’ve done it in a physical classroom. I last taught it in the winter term of 2021, while we were all still on lockdown and all teaching was being done remotely (and yes, teaching a course on utopias and dystopias in literature during a global pandemic is a bit on the nose—or it would have been if my other class that term hadn’t been a graduate seminar on post-apocalyptic literature. I suppose you could say I was leaning in.) What became immediately apparent then, and what I’m working through again now, is the fundamental asymmetry one finds in utopias vs. dystopias. Which is to say: while we don’t lack for utopian literature stretching back to Plato’s Republic, two things leap out: (1) aside from Plato, Thomas More’s genre-defining Utopia, and such novels as Erewhon and Herland, there aren’t many titles that will be familiar; (2) the volume of utopian literature is dwarfed by orders of magnitude by that of dystopias. In fact, the entirety of the list of utopian works on Wikipedia is matched by the YA dystopian fiction of the past twenty years alone.
Which, to be fair, is hardly surprising. Perfect, idyllic societies don’t tend to be fodder for gripping narratives; by contrast, everything going to shit, where everyone either needs to grab a shotgun to kill zombies or flee from omniscient authoritarians, makes for exciting reading. Hence dystopia’s claim to the vast majority of the market share.
But that asymmetry of market share is balanced conceptually, by which I mean that where dystopia is straightforward and easy to understand, utopia is far more complex and fraught. It is one of those concepts that fractures and expands and grows more elusive the more you examine it. It is both an impulse and a goal, a collective ideal and profoundly idiosyncratic. There comes a point where it can become simply too broad to be a tenable concept, given that it can be applied to any and all endeavours seeking to improve the human condition. Not least of which are works of creative imagination like filmmaking or fiction-writing: Northrop Frye characterized literature as “collective utopian dreaming,” in which even the bleakest and most nihilistic dystopian ideation is, in the act of its creation, a utopian gesture.
Or to put my point more succinctly: utopian thought is a necessary precondition for dystopian dreaming.
But what does this have to do with Barbie? You might well ask.
Or perhaps not: the utopian elements of Barbie are quite straightforward. Or they start out that way, at any rate. Barbie lives in Barbieland with all the other Barbies who have been iterations of the original doll: Barbies of every race and ethnicity, Barbies of every body type, Barbies of every profession. Barbie is the embodiment of capable, competent, accomplished womanhood. This enviable state of being is of course reflective of Barbie’s evolution, as she kept pace with societal change—or, more accurately, as feminist inroads in mainstream culture made it profitable for Mattel™ to create a far more diverse range of Barbies.
Barbieland as presented in the film, it is immediately apparent, is an idealized space of play. It is where Barbie and all the other Barbies exist in the imaginations of the little girls for whom they’re made.[1] It is also, crucially, a space of stasis: Gerwig communicates as much in our introduction to Barbieland, which features Barbie going through her routine of rising, dressing, “eating,” and driving around her sun-drenched pink environs in which all the different Barbies live their similarly perfect Barbie lives.
The discordant note entering this perfect harmony is change, which seeps in from the Real World. Barbie suddenly has thoughts of death, courtesy of her real-world owner—whom we learn isn’t the daughter Sasha but the mother Gloria, who has taken to playing with the Barbie doll out of a sort of sad nostalgia as Sasha grows out of her childhood innocence into the fiercely cynical and sarcastic adolescent intelligence that all adults fear. Thoughts of mortality precipitate change for Barbie: her feet, previously perfectly shaped to her high heels, flatten out; her breakfast “milk” goes sour; and, horror of horrors, she develops a scintilla of cellulite.
On learning that these changes are the result of Real World problems intruding on the Barbieland reality, Barbie is puzzled. We fixed the Real World, she says—the proliferation of Barbies of all races and professions showed that women can be and do anything, and thus resolved all those pesky issues.
This moment, which seems at first glance a comic expression of Barbie’s naivety, is key to the film’s utopian critique. For one thing, it skewers the naïve utopianism—the idea that making the Barbie line inclusive and aspirational substantively effects women’s empowerment—that animates the Barbie brand.
At the same time however it more subtly recognizes the utopian impulse of play and imagination. The point isn’t that an inspirational doll can effect systemic change, but that it opens a conceptual space in which to do so imaginatively. Fredric Jameson refined Northrop Frye’s assertion about art and literature as collective utopian dreaming, stating that it functions to resolve—symbolically and imaginatively—unresolvable real-world contradictions. Gerwig’s film is very aware of its corporate and consumerist framework; no amount of irony or satire changes the fact that Mattel™ signed off on it, and that, for all its subversive tweaks, it still functions as an extended advertisement for a doll that started as an idealization of white, blonde femininity (and no matter how lacerating Sasha’s speech was about Barbie’s pernicious effects on women’s body images, she’s still wearing a pink dress by the end). These, indeed, are the contradictions at the heart of Barbie; Gerwig, to her great credit, makes no attempt to resolve them in a facile manner or otherwise paper over them. As I said at the outset, Barbie ending the movie working for Mattel™ would have been just such a facile resolution.
Instead, Barbie contrasts the brittle fragility of naïve utopianism with the more complex utopian impulse: Gloria’s nostalgic play with the Barbie doll which precipitates the action proceeds from sadness and loss. Perhaps this is why it intrudes on Barbie’s perfect world—not the sort of play that imagines an ideal future, but the sort that mourns the passing of an ephemeral past. One imagines however that for Gloria it is, for lack of a better word, therapeutic. It symbolically resolves the contradiction of what she wishes for her daughter’s future and what she misses from her past.
Nor is there any delusion on Gloria’s part that she can freeze that past in time. When Barbie later tells her that she just didn’t want anything to change, America Ferrera’s expression—sad, fond, wistful—and her intonation of “Oh, honey” articulate the film’s thematic crux. The naïve utopianism of Barbieland is naïve specifically because it imagines its stasis to be the ideal. The pernicious element in utopian thought lies in seeing it as an achievable end and that end as eternal and unchanging. Complicating the distinction between utopia and dystopia is how each contains the seed of the other. How many dystopian narratives begin in what initially seems like a perfect society? How many dystopian narratives are thinly-veiled fantasies about collapsing our arid and trivial modernity and replacing it with something authentic and primal?
It makes sense that Barbieland will continue on, restored from the Kens’ abortive experiment with patriarchy, continuing to be a space of fantasy and utopia and populated with whatever new Barbies Mattel™ creates. That Barbie herself leaves it behind to be human makes narrative sense—she’s not about to be satisfied with her old life—but also thematic sense. The film establishes Barbieland’s utopianism not just as an imagined space, but a space of creation in which fantasy takes physical form. The ending in this respect allegorizes growth and change. Possibly Barbie will have a child of her own, possibly not, but one way or another she will age and die, with each day being different from the previous.
KENDNOTES
1. “But what about Ken?” There’s a whole lot that could be said about the way the film depicts Ken (Ryan Gosling et al), but the principal point is that Ken was created to be ancillary to Barbie, and therefore that is what he is in Barbieland. He was always a concession to heteronormativity, a necessary male partner, but one who could never (a) overshadow Barbie, (b) be anything but bland and asexual, and (c) therefore have anything approaching a real personality. Ken’s existential ennui proceeds from this indeterminate status that doesn’t allow him to have any real purpose and precludes any possibility that he may consummate his “relationship” with Barbie.
As our narrator (Helen Mirren) tells us in the opening sequence, Ken exists solely for Barbie; his only purpose is to be noticed by Barbie. This, of course, along with Ken’s ruinous (and hilarious) importation of patriarchy into Barbieland in the film’s second half, has not unpredictably gotten under the skin of the usual suspects in the culture wars. For all their Sturm und Drang (e.g. Ben Shapiro burning a stack of Barbies, dudebros losing their shit over Justin Trudeau and his son posing in pink at the theatre), I suspect that much of the angst proceeds from indignation over Ken’s relegation, seeing it as the film’s wholesale dismissal of men as useless rather than an astute observation about Ken’s literal role in the Barbie mythos.
Just for starters, I have a new Medium post up. This one was supposed to just be a regular blog post, but, as so frequently happens with me, it grew in the telling. It’s a bunch of thematically linked musings precipitated by finally watching the HBO adaptation of Station Eleven (so good!), the ongoing drama of the WGA/SAG-AFTRA strikes, and starting to prep my Fall classes in earnest. As is usual with my thought processes, I was struck by the serendipities: the relevance of the strikes to the popular culture class I’m teaching, the ways in which Station Eleven plays with the distinctions between high and low culture, the fact that I’m teaching Station Eleven in my Utopias/Dystopias class, and that I’ll be asking both classes a question arising from these musings.
Which is: in the event of a civilization-ending apocalypse that wipes out electrical infrastructure, assuming you survive, what would you do for entertainment in your down time? When not foraging for food or killing zombies, how will you nourish your soul?
Considering the critical mass of post-apocalyptic ideation over the past twenty years, it’s a question that, surprisingly, hasn’t had much play.
Anyway, I get into it in the Medium post.
Prepping my classes is oddly energizing. As I mentioned in my previous post, I didn’t teach in the winter term, so I’m excited about getting back into the classroom. I’m particularly looking forward to my first-ever poetry class. I’ve taught a lot of poetry in the twenty-six years since I was a TA at the start of my PhD … but I’ve never taught a course exclusively about poetry. A number of years ago, my fellow Americanist in the department and I completely revamped all the American literature courses, basically burning it all down and rebuilding. Third year classes are our genre classes: drama, fiction, poetry. My colleague is the American theatre encyclopaedia, so he has taken drama, and I’ve taught the fiction class numerous times. But I didn’t want the poetry class to lie fallow forever, so I put it on my roster this year.
I’m covering the years 1922-1968—basically from the first cannon-shot of modernism (i.e. The Waste Land) to the symbolic end of 60s counterculture, from Pound and Eliot to Ginsberg, with stops along the way at imagism, the Harlem Renaissance, WWII, mid-century women poets, and voices from the Civil Rights era.
I was always trepidatious about teaching a poetry class, for the simple reason that I was concerned about enrolments. Poetry classes have had a tendency to be undersubscribed, and in the university’s current economic straits, the powers that be are quicker to cancel classes than usual. I was marshalling my arguments for keeping the class in the event it garnered students in the single digits (the class is capped at 35), and was prepared to let my other classes be oversubscribed.
Turns out I needn’t have worried.
As of writing this, the class is full. Apparently, students want to read poetry after all. I really have to stop underestimating this generation—they’re always surprising me in heartening ways.
I haven’t taught our Communications Studies course on popular culture in six years; when starting to put it together for this fall, I realized one of my decisions would be which streaming services I would oblige my students to subscribe to. There are other ways to do things, of course, but streaming is now where the lion’s share of content resides. After a certain amount of deliberation, I decided to opt for just one: Disney+. There were several reasons for this. One, Disney is now so all-encompassing that it includes all Star Wars properties, all Marvel, Pixar, as well as a host of stuff that seems weird to find on the Disney service (What We Do in the Shadows, for example, or It’s Always Sunny in Philadelphia are hardly content one would associate with the Mouse). Two, that very all-encompassing quality will pose a very useful meta-question when thinking about the history of the culture industry and its present state. Three, the ongoing feud between Ron DeSantis and Disney just begs to be interrogated in a class such as this. Four, Disney has Hamilton, which will let me revisit the play—when I did our pop culture class the first time, in fall of 2016, we did Hamilton, but were limited to the cast recording. Coinciding with the election of Trump, Hamilton went from being a musical embodiment of the Obama era to a naïve relic of “hope and change” in the space of a day. It will be interesting to consider Lin-Manuel Miranda’s theatrical phenomenon after the past seven years.
Finally, the real seed of this choice came when I watched Andor and thought “Oh, if I ever teach pop culture again, this is totally going to be on the course.”
Finally, Utopias and Dystopias. This course was on the books when I started at Memorial eighteen years ago, the creation of a senior colleague now retired. I coveted it from the start, and finally got a chance to teach it in winter 2021. Teaching a course on utopias and dystopias remotely during a pandemic lockdown was … well, a little on the nose, shall we say? Though I was also teaching a graduate seminar on post-apocalyptic literature at the same time, so I was sort of leaning into the theme I suppose.
That first go-around I did not include Orwell’s 1984—I went in with the vague assumption that all my students would have read it. Turns out none of them had, which I found odd. So I put it on the course this time, and am currently re-reading it for the first time in I don’t know how long. I went on a big Orwell kick last year and read a whole bunch of his essays, inspired by Rebecca Solnit’s Orwell’s Roses. Getting stuck back into 1984 after having done all that reading, and after having not re-read it for so long, is quite lovely. And terrifying.
Oh, and for those wondering, based on my poster image—yes, we’ll be doing an episode of The Last of Us. In fact, we’ll be doing that episode concurrently with Station Eleven under the general theme of “what will survive of us is love.”
That’s it for now. Again, I leave you with a cat: Gloucester, not quite coping with the recent spate of un-Newfoundland hot weather.
I have this blog as my browser home screen, which means that when I go a long time between posts, every time I open Firefox I’m scolded by the months that have passed since I last wrote something here. So … February 9th until today is just shy of five months. This blog is no stranger to long hiatuses, but that’s ridiculous.
Especially considering I haven’t lacked for anything to write about. I have copious notes toward a whole host of things, some of which are now too far gone to be relevant, some of which are more substantive. But then, that’s part of the problem: as anyone who knows me will attest, I tend toward the prolix. And I’ve come to think this blog isn’t the best forum for long thinky pieces; really, what I should do here is something like a weekly check in. Short, fun, friendly.
To that end, I started a Medium account, which will be the place for my thinky pieces. What I’ll do when I post there is post here as well with a TL;DR synopsis and a link.
Which begins today! (Happy Canada Day, BTW). I’ve been working on something for far longer than had been my intention, which I’d hoped to get up early in June in honour of Pride Month. Well … a day late. But here nonetheless. It’s a longish read, but hopefully engaging and thought-provoking. It’s titled “Sir Terry vs. The Gender Auditors.”
TL;DR: A year ago or so a Twitter fight erupted between people seeking to posthumously recruit Terry Pratchett to the anti-trans “gender critical” (GC) fold and those who said, essentially, “Have you READ his books?” As a paid-up member of the latter group, I delve into precisely how Terry Pratchett’s fiction articulates a philosophy (which I call magical humanism) emphatically at odds with the GCs.
I took an unconscionably long time to write this essay for a variety of reasons, but one was a not-infrequent recurrence of self-doubt—I’m leery of being a cishet guy holding forth on LGBTQ issues. At the same time, I have watched the ongoing anti-queer backlash unfolding with fear and heartache, as well as a feeling of helplessness. Silence doesn’t feel like an option. So take that as you will and hopefully I’m not being presumptuous in my allyship.
Anyway … hopefully this will be the first of many. I have a bunch of other things in the hopper, and I’ll be aiming for a monthly posting (though that is certainly overly optimistic). Meanwhile, I’ll have more frequent short blog posts here, especially as classes approach and I get deeper into prep work and I’ll need a nerding-out space.
Also: one of the other reasons I’ve been absent from this blog? I wrote a novel! I finished the initial draft on April 14, and I’m now into the third round of revisions. More on that, and more on the process of writing it, in upcoming posts.
And in the spirit of more and breezier blog posting, here’s a cat. Gloucester the cat, to be precise, in his natural habitat (i.e. a bag).
I’m on strike. My faculty union at Memorial University, after fourteen months of frustrating and fruitless negotiations with an utterly recalcitrant administration, called a strike vote. Ninety-three percent of members voted, of whom ninety percent (myself included) called for a strike. And so now we are in our second week of walking the picket lines.
I’ve never been on strike before, so this is a new and interesting experience for me. I have no idea how it compares to other such job actions, but I can confidently make two observations that may seem to contradict each other: one, everyone is desperate to end this in a satisfactory manner and get back into the classroom; two, everyone is having a blast.
To emphasize the second observation, let’s keep in mind that this strike is happening in February in Newfoundland, which means the weather has ranged from bad to shitty—at best, inoffensive gloom, at worst sleeting rain and blizzards. After our first day I invested in thermal underwear, good mittens, snow pants, and several pairs of thick socks. But once so fortified, walking the picket line for two and a half hours a day has become something I look forward to, because it has meant spending time with my colleagues, who are to a person smart, funny, compassionate, and profoundly, inspiringly dedicated to their teaching and research. Those two and a half hours fly past as we chat, joke, talk about our research and writing projects, and—most importantly, perhaps—have very intensive discussions about the strike itself and the broader issues at stake.
That being said, none of this is a lark. Everyone is concerned for our students; we worry incessantly about the adverse effects this might have on their term; and we worry about the future of the university—both our specific institution and the “university” more generally. What is most heartening and keeps our hope and energy up is that our students are firmly behind us. Every day they come out to the line with signs of their own, often bearing coffee and donuts, shouting and singing their solidarity and voicing the same ire at our current tone-deaf and bafflingly obtuse upper administration. I have little doubt that this strike would sputter and die if we found our students foursquare against us; even just ten years ago, there would likely have been, at the very least, ambivalence and a more pervasive skepticism about tenured professors making a comfortable living demanding more.
To my mind, the signal shift in the present moment is exemplified by our administration’s consistent failure to frame the strike in those very terms. By the old rules of the game, it should be the easiest thing in the world to vilify the striking faculty as a bunch of sheltered, tenured Sunshine List elites making unreasonable demands in a time of economic straits. They have certainly tried, but it seems for once that people—both our students and the public more generally—aren’t buying it. The administration has attempted to make this all about professors asking for more money, but for once the more complex argument is finding a receptive audience. It’s not the salaries of tenured professors that have people’s attention, but the pittance paid to precariously-employed contractual professors on one hand, and the $450,000 salary of the university president on the other.[1] It’s not the putative ivory-towered academics understood to be out of touch, but the university’s managerial class—who for the duration of the strike thus far have issued occasional and increasingly petulant messages making verifiably false claims, refused to accommodate students uncomfortable with crossing picket lines, and forbidden administrative staff and per-course instructors from joining faculty on the picket line on breaks and lunch hours. It’s not so much about professorial compensation as it is about collegial governance and faculty having more of a say in the university’s future.
The fact that these knottier, more complex issues seem to be eclipsing the easier to understand snooty-professors-want-moar caricature is heartening; in my more hopeful moments I think it signals a shift, moving our cultural center of gravity away from the neoliberal dominance of the past few decades to something more humane and empathetic. I’m reasonably convinced that the experience of the pandemic is at the root of this apparent shift: we have a generation of students who endured two years of remote learning, who found their professors to be sympathetic people understanding of their travails and saw them also struggle to do their best in bad circumstances. All of which unfolded in a larger societal context in which prior verities about work and recompense came into question: the category of “essential worker” extended to people working minimum wage jobs in grocery stores; people fortunate enough to be able to work remotely realized they could do the same job in half the time while wearing pyjamas; quality of life became a more pronounced concern, something made plain by the general reluctance of people to return to shitty jobs simply for the sake of having a job; meanwhile, the problem of wealth disparity became ever more glaring as the wealthiest sectors did not share the pain but grew even wealthier.
Much of this is difficult, if not impossible, to quantify. Hence, I should be cautious and note that what I’m describing is less an objective, empirical reality than a vibe. But it is a profoundly powerful vibe that currently thrums through the energy on our pickets. And, well, I’m a humanities professor: qualitatively considering and analyzing vibes is more or less my stock in trade. To put it another way, I work in intangibles.[2] In the context of a corporatized university whose administrative class has become increasingly preoccupied with “outcomes” and “finding efficiencies,” this has meant fighting a protracted rearguard action against a pervasive attitude (epitomized by but not limited to university administrators) to which intangibles are anathema.
I’ve devoted a lot of thought over my career as an academic to the question of how to argue for the value of intangibles. Walking the picket lines with my brilliant colleagues and talking with the many, many students who come out to support us has made one thing clear to me: if I’m looking for a concrete manifestation of this intangible value, it’s here, in the human beings who comprise the university.
The most common refrain among the students articulates this sensibility: professors’ teaching conditions, they say, are our learning conditions. And in the end, it is the classroom that is the most fundamental university space, and the students’ experience that is—or should be—the central focus of the university project. Because if not, then what are we doing? Professors with time and resources to do research bring that depth and breadth of thought to the classroom; perhaps more importantly, contractual and per-course instructors who aren’t run ragged with massive teaching loads, under constant financial stress, and who have a reasonable chance at converting their precarious positions into full-time careers, are going to be far more effective in the classroom (the fact that so many of them are exemplary educators now speaks to an inhuman level of dedication).
As I write this, the faculty union and the university bargaining team are back at the table. I hope they resolve this satisfactorily so we can resume our real work. But since before this started, it has felt as if the administration is speaking a language with no meaning for the rest of us. And if that continues, I and my colleagues are in for the long haul.
#FairDealAtMUN
NOTES
1. Memorial’s current president Vianne Timmons is paid $450,000 as a base salary, along with an $18,000 housing allowance, $1,000 monthly vehicle allowance, a $25,000 research allowance, and the standard travel perks afforded her position (which most recently included travel to Monaco for a conference of—wait for it!—arctic university administrators). The process for “finding” and recruiting Timmons cost the university $150,000. A CBC report about negative responses to Timmons’ lavish compensation quoted someone familiar with her hiring process as saying “Only a handful of people are qualified to lead a university like MUN, and finding that person takes time and money.” I think it’s safe to say, especially given Timmons’ utter lack, so far, of public statements about the strike—given that a university president’s central task is presenting a public face of the institution—that we’re not getting our money’s worth, and that the remarkable solidarity on display is at least partly a backlash against the assumptions quoted above.
It also needs to be emphasized that this state of affairs is pervasive across academe, something usefully discussed by Amir Barnea in the Toronto Star.
2. Again, I am a humanities professor, so my perspective is ineluctably informed and shaped by that context and training. But I should note that the intangible value of a university education—the intrinsic value of the university experience—transcends discipline. Whether your degree is in philosophy, chemistry, or engineering, if your sole metric of value is your ultimate salary, you’ve sadly missed the larger point. In the end, as I conclude above, it is the human dimension that defines the university.
The Rig is a new British series on Amazon Prime that I watched for two reasons: one, it stars Iain Glen; two, the trailer teased it as a Lovecraftian horror set on an oil rig in which something from the deeps makes its presence known in increasingly disturbing and threatening ways.
So OBVIOUSLY this was something I needed to watch, both from personal interest and, as someone who has now taught three classes on H.P. Lovecraft and weird fiction, from professional obligation. And it was … good. It was obviously done on a budget and there were moments of didacticism that made it feel more like a Canadian than a British show,1 as well as some clunky writing just more generally, but it was interesting and definitely worth watching. What makes it worth posting about is that it embodies a number of themes and tropes that are representative of the ongoing evolution of the New Weird, which are what I want to talk about here.
The setup is quite simple: the crew of an oil rig off the Scottish coast, the Kinloch Bravo, are looking forward to heading home as their rotation ends. A crisis on another platform diverts the helicopters, however, so everybody has to sit tight for however long that takes to be resolved. But then a tremor shakes the rig, a thick, spooky fog rolls in, and all communication, from internet to radio, goes down. Tempers, already frayed by the helicopters’ diversion, erode further. A crewmember named Baz goes aloft in an attempt to restore radio transmission; he falls and is badly injured. His injuries, which should have killed him, start to heal on their own. At the end of the first episode he staggers out to where the crew is assembled on the helicopter deck to warn, “It’s coming!” Meanwhile, just before his appearance, the crew notices that the fog is composed partly of ash.
And that’s the first episode. One of the things I quite liked about The Rig is that it settles in for a slow burn over its six episodes. Though punctuated here and there by moments of shock or surprise, it’s mostly about two things: the personalities of the crew and their (often fraught) relationships, and the gradual figuring-out of just what is going on. And if both of these components tend toward overstatement and occasional hamfisted exposition, the story is nevertheless compelling. In short, the drilling of the undersea oil fields has released or awakened an ancient life form, bacterial in nature, which lives in the ash that falls out of the mist. When a human is infected by way of a cut or simple excessive exposure to the falling ash, the organism takes up residence and makes the host more amenable to it by fixing injuries and expelling impurities. For Baz, this means he heals quickly; it also means the gold fillings in his teeth fall out, as the organism treats them as foreign objects. So, on balance, not bad for Baz—but another crewmember doesn’t fare so well: his history of addiction and many tattoos make the process fatal for him. (The scene in which he is about to get into the shower and suddenly finds all the ink in his skin running is a great uncanny moment.)
What’s most interesting in The Rig, and what makes it worth discussing in some detail, is that the “invader” isn’t necessarily monstrous and isn’t overtly malevolent. The agony Baz experiences, we come to understand, isn’t torture or assault but the entity trying to communicate. The ultimate climactic crisis isn’t about whether our heroes can defeat and/or destroy the invading force; it’s whether they should. The problem posed isn’t how to repel an invader, but how two intelligent species can communicate and whether they can coexist. In later episodes, Mark Addy shows up as a company man named Coake2 who has an explicitly instrumentalist and zero-sum response to such questions: as seen in the trailer, he spits “Nature isn’t a balance, it’s a war!” Later, he informs one of the crew that “[the corporation] Pictor’s business is resource management! We find them, we use them up, we move on! And that includes human resources.” He boils it down: “We’re useful or not. We have value or not. They’re coming for me because I have value. Anyone who doesn’t gets left behind.” He delivers this rant as, against the explicit orders of the rig’s captain Magnus MacMillan (Glen), he prepares to put into action a plan to destroy the entity.
This confrontation, coming as it does in the final episode, clarifies lines of conflict that had been usefully muddled and uncertain to this point. Kinloch Bravo is under threat, but that threat has been ambiguous. As the nature of the entity becomes clearer, the nature of the danger it poses becomes less so. If there is a consistent bogeyman throughout, it’s the company itself, a faceless corporate entity regarded with a sort of low-grade antipathy by everyone. Rumours that the platform is set to be decommissioned—rumours later confirmed—circulate, with people’s anxiety about future employment in tension with the understanding of the pernicious impact of their industry on the environment. As one character observes, a long life of gainful employment is meaningless if, in the end, the sky is on fire.
The ambivalence of the crew (or some of them, anyway) to their industry squares with their ambivalence to their company: Coake’s rant is in some senses merely an explicit confirmation of the more diffuse sense pervading the crew of their expendability—that they, like the oil they drill, are resources to be extracted. The character of Hutton (Owen Teale) anticipates Coake’s sentiments with an embittered speech in the final episode: “I used to think we were the steel, holding it all together,” he tells Magnus, “even if the rest of the world didn’t see it. But now I know we’re the well. Because every trip, every person that gets chewed up, every chopper that goes down, it just takes a bit more, and a bit more … till it hollows you out.”
Hence, the emergence of the entity—the “Ancestor,” as Rose (Emily Hampshire) comes to call it—is threatening not because it offers destruction, but because it represents an alternative way of thinking and being. It is a collective organism; the humans it “infects” can communicate with it, after a fashion, and with each other. Its threat to the lives of the rig workers and people more generally is commensurate with the threat it perceives from them. It is not itself malevolent or casually destructive.
Its ancient provenance and apparent immortality—Rose establishes that it measures time on a geological scale—as well as its capacity to infect people, put it very firmly in the Lovecraftian tradition of cosmic horror. However, the wrinkle that it potentially poses no existential threat is a significant departure from the standard conventions of the genre. To be certain, there is a definite Lovecraft vibe in The Rig, in the frequent refrains about how we know less about the ocean depths than the surface of the moon, or in Alwynn’s pithy observation (featured in the trailer) that “If we keep punching holes in the earth, eventually it’s going to punch back.” The implicit sentiment that the madcap drive for oil exploration in the name of profits will take us into dangerous territory is not dissimilar from Lovecraft’s opening paragraph of “The Call of Cthulhu”:
We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.
“Cosmic horror” of the Lovecraftian variety is built around existential dread arising from the realization of humanity’s insignificance in the face of the infinitude of vast, cosmic entities. The “old gods” like Cthulhu and Dagon that populate Lovecraft’s stories find echoes in the “Ancestor” of The Rig, but the way in which it infects certain crewmembers is more in line with “The Shadow Over Innsmouth,” Lovecraft’s story in which he grafts his Cthulhu mythos onto an allegory of his horror of miscegenation. (For those unfamiliar with Lovecraft, an important bit of context is that he was terribly racist). The story’s narrator is touring historically interesting towns in Massachusetts. As part of his itinerary, he arrives in Innsmouth—a run-down old fishing community with some interesting architecture, an odd historical relationship to the reef just off its shores, and people who have a vaguely fish-like aspect about them. TL;DR: having long ago made a Faustian deal with the Old God who inhabits the reef, the people of the town have become fish-human hybrids. The narrator discovers that he is descended from Innsmouth stock and will inevitably be thus transformed.
I cite “The Shadow Over Innsmouth” because it set a standard in weird fiction rooted in the horror of this revolting otherness, usually framed in slimy cephalopodic terms: the tentacle-faced god Cthulhu being the apotheosis.
He seems nice.
What is so fascinating in weird fiction is less the lugubrious racism of Lovecraft than the ways in which his idiom has been adapted and transformed by such contemporary authors as China Miéville, Jeff VanderMeer, Charlie Jane Anders, and Annalee Newitz (among many others). One key shift in the “new weird” is the quasi-erasure of Lovecraft’s instinctive antipathy to the uncanny Other. What surfaces instead (so to speak) is a more complex interrogation of that otherness and the consideration that (a) it isn’t necessarily evil or destructive; (b) it might offer alternative modes of thinking and being; and (c) perhaps it is the next stage of evolution. The Rig’s principal literary allusion is thus not Lovecraft, but an author whose apocalyptic imagination was somewhat more nuanced.
About five minutes into the first episode (four minutes and thirty seconds, to be exact) we see the character Alwynn (Mark Bonnar)3—the calm sage archetype—reading John Wyndham’s novel The Kraken Wakes.
Mark Bonnar as Alwynn and his not at all significant choice of reading.
Wyndham’s novel, published in 1953, is about an invasion by an alien species that can only live under conditions of enormous pressure, and which therefore arrives as a series of mysterious fireballs landing in the deepest parts of the ocean. At first the fireballs are thought to be just some weird cosmic phenomenon, but it slowly becomes apparent that the Earth’s deeps have been colonized by an intelligent species. For the reader’s benefit, Wyndham provides one of the more obvious voice-of-the-author characters you’re likely to encounter: an outspoken scientist named Alistair Bocker, who is more clear-eyed and prescient than anyone else about the newcomers and the threat they pose, though for the better part of the novel he is dismissed by most as a crank. The narrator and his wife, a pair of journalists, befriend Bocker and so have a front row seat to his outlandish theories—which are, of course, entirely correct about what is actually going on.
The Kraken Wakes has had something of a revival in literary study because it squares nicely with recent waves of climate fiction (cli-fi) and ecocritical literary theory. The third phase of the aliens’ war against the landlubbers—after phase one, in which shipping is sunk, and phase two, which involves “sea-tanks” coming ashore and assaulting coastal towns and cities—is the melting of the polar ice caps to raise the sea levels. This last attack precipitates the collapse of civil society, as scarcity takes hold and nations fall into civil strife and factional warfare. Hence, one can see how it resonates now, seventy years after its publication.4
The Rig, for all of its less subtle tendencies, is a thoughtful contribution to the growing body of creative works thinking through the crisis of the Anthropocene, which is, among its other aspects, a philosophical and imaginative crisis. Alwynn comments on The Kraken Wakes only twice. When first asked what it’s about, he replies cryptically, “A collective failure of imagination.” The rising sea levels that drive humanity away from the coasts and topple governments are the culmination of this failure: while Wyndham’s Dr. Bocker does eventually opine about the impossibility of co-existence with another intelligent species, he makes clear that the instinctively aggressive response to the submerged interlopers will end badly. His calls to attempt contact are ignored, dismissed, or simply laughed off as naïve. When the first of a series of nuclear weapons is dropped into the deeps, he effectively throws up his hands in disgusted resignation.
That this dynamic is reiterated in The Rig—replete with allusions to Wyndham—makes for an interesting thematic turn in the contemporary context. The series is by no means a straightforward climate change allegory, and it is sympathetic to the people whose livelihoods depend on the fossil fuel industry. That these people are, as Hutton makes clear in his speech, as much an extractive resource as the oil itself puts the emphasis on the inhumanity of the industry and the collective failure to imagine alternative economies, alternative ways of being. The Rig makes an interesting, if unlikely, companion piece to Kate Beaton’s recent graphic memoir Ducks: Two Years in the Oil Sands, which chronicles her time working in the oil patch as a means of paying off her student loans. The book’s consonance with The Rig lies precisely in this depiction of people as objects of extraction.
The conflict at the heart of The Rig is not between collective and individuated intelligences, but between a collective intelligence and one desperately invested in seeing itself as individuated. The latter state is given to the sort of zero-sum thinking proffered by Coake: you either have value or not, and those without value can be justifiably excised from the equation. Though never stated explicitly (one of the series’ rare moments of such restraint), the collective intelligence of the “Ancestor” is the principal threat to the corporation—itself an ironically amorphous entity, but one reliant on its workers and consumers believing themselves locked in a zero-sum individualistic society for which there is no alternative. Environmental movements have always advocated seeing ourselves as part of a collective entity, since the survival of a species is never down to any single one of its members. Such thinking is anathema to capitalism, which has done its best to convince people that, in Margaret Thatcher’s words, “There is no alternative.” It has indeed become a commonplace observation that it is easier to imagine the end of the world than the end of capitalism.
The second and final time Alwynn comments on The Kraken Wakes is in response to a crewmate’s question, “How’s it working out?” To which he responds, “Not good for us.”
NOTES
1. Weirdly enough, the more earnest didacticism that I always associate with CBC dramas is mostly provided by a Canadian actor. Emily Hampshire plays Rose, a company representative on the platform who seems to operate outside the usual chain of command. At first she appears as though she’ll be one of the bad guys, as she’s a company mouthpiece. But she chooses her side quite emphatically, a choice at least partially rooted in her erstwhile desire to be a paleontologist; this educational background makes her the person best able to grasp what the “Ancestor” is, but it also makes her the source of most of the show’s lecture moments, as she provides lessons in geology.
Hampshire most indelibly played Stevie Budd on Schitt’s Creek, which also makes her a weirdly uncanny presence in this dramatic and vaguely apocalyptic setting.
2. Three thoughts. One, The Rig gives us a troika of Game of Thrones alumni: Iain Glen, aka Ser Jorah Mormont, as Magnus, captain of the rig; Owen Teale, who played Ser Alliser Thorne, Jon Snow’s principal antagonist at the Wall—here, as the malcontent Hutton, he’s playing essentially the same character, albeit with a redemptive moment at the end; and of course King Robert Baratheon himself, Mark Addy. Thought the second: Mark Addy has a peculiar, almost nasal quality of voice that is extremely endearing when he plays a good bloke, as he does in The Full Monty, but which is extremely sinister and grating as the nasty Coake. Three: though we see his name spelled “Coake,” it’s a homophone for “Koch,” and I wondered if the series is deliberately referencing the fossil-fuel billionaire Charles Koch and his late brother David.
3. Mark Bonnar’s Alwynn is the other lecturer-type in the show. He is also so much like an elder version of David Tennant that my partner and I started referring to him as Grandpa Tenth.
4. I’m delighted that John Wyndham is having a revival. I read The Chrysalids in grade nine English and proceeded to tear through his other work—discovering as I did that The Chrysalids is not close to his best. The Kraken Wakes was the one I wanted to read, but it was out of print for years; it was only on finding a copy in a used bookstore that I was able to read it, and it immediately became my favourite. Several years ago, in a second-year SF class, I taught The Day of the Triffids and was struck by its prescience—not the kind of prescience that has made The Kraken Wakes a cli-fi staple, but a kind of genre prescience, given that its basic structure, narratively and thematically, anticipates the zombie apocalypse.
It occurred to me as I was starting to draft this post, my first of 2023, that my first post of 2022 was about the Netflix film Don’t Look Up. Well, not about that movie so much as mentioning it in passing, but it still struck me as serendipitous—both because my first post this year is also about a Netflix property, and, more significantly, because it’s about another film featuring a satire on the figure of the genius tech billionaire.
Don’t Look Up, for those who don’t recall or never watched it, is a broad and profoundly unsubtle parable of climate change in which an asteroid’s imminent collision with Earth will be an extinction-level event. Rather than mobilize the globe behind a concerted effort to destroy or divert the asteroid, the American president and the media treat it, well, like they’ve treated climate change. Which is to say, with denial, deflection, and a minimization of the threat until it’s almost too late. And when they finally launch a salvo of ICBMs, the president aborts minutes after launch because of the intervention of Peter Isherwell—a genius tech billionaire played by Mark Rylance—whose engineers have discovered that the asteroid is chock full of extremely valuable minerals. Long spoiler short, Isherwell’s brilliant plan to fragment the asteroid into non-lethal but very harvestable bits fails and everyone on Earth dies.
Let’s stick a pin in that for a moment and turn to Glass Onion, Rian Johnson’s second film featuring master detective Benoit Blanc, who is played once again with glorious aplomb by Daniel Craig. Two films in what I hope will become a prolific series aren’t enough to discern a thematic pattern just yet, but both Glass Onion and its predecessor Knives Out share the common premise of wealthy, entitled people being brought low by a young working-class woman—Ana de Armas in Knives Out, Janelle Monae in Glass Onion—with, of course, the assistance of Blanc and his gleeful scenery-chewing. If Knives Out was more generally a class warfare parable, Glass Onion is a broadside against the ostensible meritocracy of the “disruption” economy. The second film somehow manages to be at once less subtle and more nuanced in its critique: less subtle because billionaire Miles Bron (Edward Norton) and his “Disruptors” are basically archetypes of the social media era1 and because Bron’s downfall at the film’s end is nakedly cathartic schadenfreude; more nuanced because of the film’s critical implications, which are what I want to tease out in this post.
The conceit at the center of the film, which is also its big reveal, is that genius tech billionaire Miles Bron not only isn’t a genius, he’s actively a moron.
As alluded to above, this is a cathartic reveal. It is also a serendipitous one in the present moment, coming as it does in the midst of Elon Musk’s heel turn.2 One of the slowest, hardest lessons that we still haven’t entirely learned over the past several years (as this film dramatizes) is not to assume complexity of purpose and motive where there is none; not to assume arcane subtlety of thought when stupidity, cupidity, and/or simple incompetence make just as much sense. This indeed was the central fallacy of the conspiracy theories imagining Trump as a Russian asset, with Putin manipulating him in a byzantine web of intrigue; it is also the delusion at the heart of the 2020 election denial, with its enormous ensemble of plotters that somehow linked the late Hugo Chavez to Italian military satellites and Dominion voting machines.
It is also the conceit at work in theories positing that what we see of Elon Musk’s actions at Twitter are merely glimpses of an otherwise invisible and massively complex game of four-dimensional chess, which we non-geniuses perceive as hapless floundering. I’ve now seen numerous such fantasies explicated in varyingly granular detail—some framed in awe of his brilliance, some as warnings of his Bond-villain plotting. But really, I have to think it’s all like the glass onion of Johnson’s film: a symbol of complexity that is, in actuality, transparent.
Trump was only a Russian asset insofar as he wanted to emulate Putin, which Putin knew would be bad for America. Accordingly, the Russians did their best to fuck with the election, but that was blunt-force action, not a labyrinth of intrigue. The 2020 election was, very simply, an election that voted out an historically unpopular president. And Elon Musk thought he could treat an irreducibly complex cultural and political morass as an engineering problem. In each case, people needed to see complexity instead of looking, as Daniel Craig’s detective Benoit Blanc suggests, “into the clear centre of this glass onion.”
A good murder mystery benefits from a great twist, a revelation that shocks the reader/audience all the more because it seems so obvious in hindsight. Agatha Christie was a master of this: she gave us mysteries in which, by turns, every suspect was guilty, none of them were, and in one instance—a twist that made my students genuinely angry on the one occasion I taught the novel in question—the narrator did it. What’s revelatory in Glass Onion isn’t the identity of the culprit, but his idiocy. By the time Benoit Blanc is doing the classic gather-the-suspects-in-the-library bit, his ensuing speech isn’t meant to reveal Miles Bron as the murderer; it’s to stall for time so Helen (Janelle Monae) can find the proof they need in his office. As Blanc rambles into increasing incoherence, drawing out his monologue as much as he can, he inadvertently finds his way to an epiphany,3 one that offends his sensibility as “the world’s greatest detective”:
His dock doesn’t float. His wonder fuel is a disaster. His grasp of disruption theory is remedial at best. He didn’t design the puzzle boxes. He didn’t write the mystery. Et voila! It all adds up. The key to this entire case. And it was staring me right in the face. Like everyone in the world, I assumed Miles Bron was a complicated genius. But why? Look into the clear centre of this glass onion. Miles Bron is an idiot!
I’m going to go out on a limb and guess that the most prominent tech billionaires aren’t nearly as moronic as Miles Bron is portrayed. After all, in the film Bron’s supposed genius is revealed to be entirely fabricated, the product of theft and systemic mendacity, to the point where it almost strains credulity that he could have successfully managed a multibillion dollar corporation. Almost! The point made over the course of the film is just how much of Bron’s success is predicated on other people’s vested interest in maintaining his mythos. A key scene at the beginning features Lionel (Leslie Odom Jr.), Bron’s chief engineer, in the midst of trying to salve the concerns of the board of directors. The board seems to be getting fidgety about their erratic CEO. Lionel makes a case we’ve heard made a lot in the past two decades: sure, Bron’s ideas seem out there, but that’s just his genius, his capacity for blue-sky thinking!
(I should pause to note how good Odom Jr.’s performance is: he communicates quite deftly, through his tone and facial expressions, how he’s papering over his own misgivings and making an argument he desperately needs to believe).
All of the people invited to his private island in a classic murder mystery setup—his “Disruptors,” as he fondly calls them (more on that momentarily)—are beholden to Bron, and he needs them to maintain the fiction of his genius. The point of contention at the heart of the mystery is that Bron stole the work of his former partner Cassandra (Monae). When she sued, all the others perjured themselves in support of Bron because they needed to stay in his good graces—in part because he had become the principal patron of all their endeavours, but also because their own success had traded on Bron’s name and reputation for being a genius.
Ten years ago or so, Chris Hayes (of MSNBC fame) wrote Twilight of the Elites, a trenchant critique of the cultural tendency to understand meritocracy uncritically, as an invariably positive thing. As a general principle, Hayes observes, there isn’t much in meritocracy to quibble with: who seriously thinks that the best ideas, the strongest performances, the most talented people, shouldn’t be rewarded? The problem, however, is less with the principle itself than with the conviction that it can be a self-sustaining system. Left unregulated, the libertarian ideal asserts, the best people, products, and ideas will inevitably excel, and failure will be relegated, deservedly, to the dustbin of history. Interference in the pursuit of pure excellence—especially by the government—is a recipe for mediocrity and turgidity.
The unavoidable problem with this premise, Hayes points out, is that if your sole criterion is rewarding success—with minimal oversight—you inevitably reward cheating. He cites numerous examples, but the one I found most striking wasn’t in the book itself, but one that unfolded right at the time the book was published, and which Hayes cited in numerous interviews he gave. In late 2012, longstanding rumours about Lance Armstrong’s cheating came to a head. In January 2013, he gave the now-notorious interview with Oprah Winfrey in which he admitted to systematically doping. But as Hayes notes, it wasn’t just that Armstrong cheated pharmaceutically over a long period, but that he created circumstances in which he obliged everyone around him to be complicit—either by actively aiding him, by remaining silent, or, in the case of other cyclists on the American team, also doping in order to keep the team as a whole competitive. Once complicit, of course, it was in everyone’s self-interest in Armstrong’s orbit to perpetuate and maintain the fictions that sustained their livelihoods. What’s more, this cohort of self-interested insiders colluded to silence or suppress anyone seeking to expose the truth.
Hence, to my mind the most significant aspect of Glass Onion is not the broad parody of a billionaire who’s not nearly as smart as he thinks he is or pretends to be; while the spectacle of Helen destroying the symbols of Bron’s wealth and excess that ends the film is profoundly cathartic, it perhaps obscures to some extent the culpability of Bron’s enablers. Helen’s rage, after all, is elicited not just by Bron’s successful destruction of the evidence that would have exposed him, but by the complicity—again!—of his “Disruptors,” who shamefacedly accede to continue lying for him, in spite of the fact that they now know he has murdered two of their number.
In the end, Glass Onion is about the fallacy of disruption. Bron’s Disruptors are as much disruptors as Bron is a genius: at one point he flatters each of them outrageously to Blanc, enumerating the ways each supposedly acts as a productive chaos agent in their respective fields, but when push comes to shove disruption is the last thing they want—having built wealth and power, their interest is in consolidating and expanding it. This indeed is Silicon Valley writ small: Mark Zuckerberg’s motto might be “Move fast and break things,” but even an indifferent observer will note that it has been a very long time since he has broken anything or moved very fast or very far. It is perhaps ironic that in the current tech landscape, dominated by billionaires and corporate giants, the true disruptors are governmental figures like Elizabeth Warren—those who are quite vocal in their determination to disrupt such tech monopolies as Amazon and Facebook (sorry—Meta) and break them up into smaller, more competitive chunks.
It is, however, heartening to consider the difference a year makes when comparing Miles Bron to Peter Isherwell of Don’t Look Up. Even just a year ago, the central conceit of an idiot billionaire wouldn’t have felt quite as on the nose. Indeed, Mark Rylance’s performance in Don’t Look Up is more consonant with the general assumptions of the past two decades: his portrayal of Isherwell is that of a genius, self-absorbed and arrogant to the point of sociopathy, but a genius nonetheless. Miles Bron, by contrast, is uncannily apposite to a moment when a critical mass of tech industry fuckery has (finally) called into question unexamined assumptions of tech genius: Mark Zuckerberg’s metaverse obsession and Facebook’s massive loss of value; Sam Bankman-Fried’s detonation of cryptocurrency futures through the simple expedient of colossal financial incompetence; Elizabeth Holmes’ guilty verdict; Peter Thiel’s real-time transformation into comic-book villainy; and of course Elon Musk’s ongoing Twitter meltdown. Cumulatively, these and similar misadventures cannot help but make plain something that never should have been far from the collective understanding: that talent, brilliance, and genius in one area not only don’t necessarily indicate capability in others, but hardly ever do. Pair this slowly creeping realization with the obvious, observable ways social media and digital culture polarize and factionalize people, and it’s hard to take seriously the persistent techno-utopianism of the Silicon Valley set.
It will however be difficult for many to let go of that mythos, not least because the entire industry is deeply invested in propagating it. It is telling that one of the funniest and most cited moments of the film is when Birdie (Kate Hudson) still struggles to see genius in Bron’s idiocy. “It’s so dumb,” says Blanc, disgustedly. “So dumb it’s … brilliant!” Birdie breathes. “NO!” thunders Blanc. “It’s just dumb!”
NOTES
1. There’s Birdie (Kate Hudson), a fashion icon turned influencer whose constant inadvertently racist gaffes on social media have become part of her brand; Claire (Kathryn Hahn), a liberal-ish politician whose progressive bona fides coexist uneasily with her indebtedness to Bron; Lionel (Leslie Odom Jr.), the engineer largely tasked with realizing Bron’s ideas, which come in varying shades of whackadoodle; Duke (Dave Bautista), a masculinity guru and men’s rights influencer; and Whiskey (Madelyn Cline), Duke’s girlfriend and arm candy, who is herself an influencer with political ambitions and who might be the smartest of the group. Rounding out the group but not part of it is Peg (Jessica Henwick), Birdie’s long-suffering assistant, who has possibly the funniest moment in the entire film.
2. In numerous interviews, Rian Johnson has pushed back against the assumption that Miles Bron is a one-to-one analogue for Elon Musk, pointing out that he wrote the screenplay in 2020 and shooting wrapped well before Musk broke cover from Tesla and SpaceX and fully displayed his inner twelve-year-old in his fraught takeover of Twitter. Miles Bron, Johnson maintains, was written as an amalgam; it was just happy (or awkward) coincidence that the film’s release coincided with the escalation of the Twitter saga.
3. As Blanc fumbled his way to his epiphany, I could not help but remember all the times I’ve under-prepared for a lecture and felt myself start to go off the rails, only to inadvertently work out something in the midst of my rambling that reveals something about the topic that hadn’t before occurred to me. I wasn’t sure whether this part made me feel attacked or seen.