
Thoughts on AI as the New School Year Approaches

TL;DR: this year I’m being unequivocal about students’ use of AI. I’m banning it.

And yes, this is very much an exercise in spitting into the wind, fighting the tide, etc. Choose your metaphor. AI’s infiltration of every aspect of life seems inevitable at this point, but to my mind that makes an education in the humanities in which students engage with “The best which has been thought and said” (to use Matthew Arnold’s phrasing) all the more crucial, and all the more crucial that they do it without recourse to the banalizing effects of AI.

I’m profoundly sceptical of the claims made for AI’s potential, and not just because twenty-five years of Silicon Valley utopianism has given us the depredations of social media, monopolistic juggernauts like Amazon and Google, and a billionaire class that sees itself as a natural monarchy with democracy an also-ran annoyance. In the present moment, the best descriptor I’ve seen of AI is a “mediocrity machine,” something evidenced every time I go online and find my feeds glutted with bot-generated crap. “But it will get better!” is not to my mind an encouraging thought, not least because AI’s development to this point has often entailed the outright theft of massive amounts of intellectual property for the purposes of educating what is essentially a sophisticated predictive text algorithm.

But that is not why I’m banning use of AI in my classrooms.

To be clear: I say that knowing all too well how impossible it is to definitively ban AI. Unlike plagiarism proper, AI’s presence in student essays is at once glaringly obvious and infinitely slippery. With plagiarism, so long as you can find the source being copied, that evidence is ironclad. But AI is intrinsically protean, an algorithmic creation that potentially never produces the same result twice. And yet it’s always obvious, whenever I get an essay that is grammatically perfect but intellectually vacuous. A ChatGPT-authored essay is basically the uncanny valley in prose form: it looks like a reasoned and structured argument, but it lacks the idiosyncrasies and depth that make it, well, human. I’ll say this much for AI—it has pulled off the feat of making me grateful to find an essay riddled with grammatical errors and unreadable sentences. In the present moment, that’s the sign of a student actually doing the work.

But then, that’s the basic point: it’s doing the work that’s important. I recently read an interview with Texas teacher Chanea Bond on why she’s banning AI in her classroom. It’s a great interview, and she helped clarify some of my thinking on the subject. But what depresses me is that her stance is even remotely controversial—that she received a great deal of pushback on social media, much of it from fellow educators who see AI as a useful tool. To which Bond said: No, in thunder. Which was not a response emerging from some sort of innate Luddism, but from her own experience of attempting to use AI as the useful tool so many claim it to be. “I envisioned that AI would provide them with the skeleton of a paper based on their own ideas,” she reports. “But they didn’t have the ideas—the analysis component was completely absent. And it makes sense: To analyze your ideas, they must be your ideas in the first place.”

This, she realized, was the root of the problem: “my students don’t have the skills necessary to be able to take something they get from AI and make it into something worth reading.” To develop a substantive essay from an AI-generated outline, one must have the writing skill set and critical acumen to grasp where the AI is deficient, and to see where one’s own knowledge and insights build out the argument. Bond’s students were “not using AI to enhance their work,” she observes, but are rather “using AI instead of using the skills they’re supposed to be practicing.”

And there’s the rub. In my lifetime, and certainly in my professional academic career, there has never been a time when the humanities has not been fighting a rearguard action against one form of instrumentalization or another. Tech and STEM are just the most recent (though they have certainly been the most overwhelming). The problem the humanities invariably faces is that it’s ultimately rooted in intangibles. Oh, there’s a whole host of practical abilities that emerge from taking classes in literature or philosophy or history—critical reading, critical writing, communication, language—and perhaps we could do a better job of advertising these benefits. But I’m also loath to play that game, for reducing humanities education to a basic skill set not only elides its primary, intangible benefit, but also plays into the hands of administrators seeking to reduce the university to a job creation factory. In such a scenario, it’s all too easy to imagine the humanities reduced to a set of service courses—English transformed to basic composition, literature jettisoned; languages taught solely on the basis of what’s of practical use in industry; philosophy a set of business ethics classes; communication studies entirely instrumentalized, stripped of all inquiry and theory; and entire other departments simply shuttered, starting with classics and history and working down from there.

It may appear I have drifted from my original topic, but this sort of thinking is exactly the kind that believes AI can replace critical, intellectual, and creative work. What is both risible and infuriating about this assumption is that whatever sophistication AI has attained has come by way of scraping every available resource to be found online—every digitized book, every image, every video, a critical mass of which is other people’s intellectual property, all plundered to feed the AI maw. (And still, all it can produce are essays that top out in the C range in first year English classes.) I don’t think AI’s cheerleaders have grasped what a massive Ouroborosian exercise this is: as the snake consumes more and more of its tail there is less and less of it. The more we collectively rely on AI to do our thinking for us, the less creative, new, and genuinely revolutionary thought there will be to feed the beast. And as a result, the more stagnant and banal becomes the output.

Last year I reread 1984 for a course I was teaching and was struck anew by the degree to which Orwell is less concerned with totalitarianism’s mechanisms of brutality than with the curtailment of people’s capacity to articulate resistance and dissent. His notorious Thought Police are just the crude endpoint of power’s expression; the more insidious project lies in removing the ability of people to think thoughts worth policing. Winston Smith labours at the Ministry of Truth revising history to square it with the Party’s ever-changing reality, but it is his lover Julia’s work in the Ministry’s fiction department that struck me this time around. Her job, she tells Winston, is working the “novel-writing machines,” which consists “chiefly in running and servicing a powerful but tricky electric motor.” She off-handedly refers to these machines as “kaleidoscopes.” Asked by Winston what the books she produces are like, she replies, “Oh, ghastly rubbish. They’re boring, really. They only have six plots, but they swap them round a bit.” These novels function as a societal soporific, titillating the masses with stories by turns smutty and thrilling, but which deaden their readership by circumscribing what is possible to imagine.

Julia’s description struck me as uncannily prescient, conceiving as it did a sort of difference engine version of ChatGPT. The point of Julia’s work is the same as that of Newspeak, the language replacing English in Orwell’s dystopian future. Newspeak radically reduces available vocabularies, cutting back the words at one’s disposal to a bare minimum. In doing so it curtails, dramatically, what can be said and therefore what can be thought—eliminating the possibilities for resistance by eliminating the ability to articulate resistance. How does one foment revolution when one lacks a word for revolution?

I’m banning AI, insofar as I can, because its use cranks the law of diminishing returns up to eleven. In the process, it obviates the most basic benefit and indeed the whole point of studying the humanities, which can’t be reduced to some abstracted skill set. The intrinsic value of the humanities lies in the process of doing—reading, watching, thinking, researching, writing; then being evaluated and challenged to dig deeper, think harder, and going through the process again. There is of course content to be learned, whether it’s the history of colonialism, the Copernican Revolution, the aesthetic innovations of the Renaissance, or dystopian novels like Orwell’s; all of that is valuable because all knowledge is valuable, but inert knowledge is little better than trivia. Making connections, puzzling out knotty and difficult texts, and most importantly making sense of it to yourself and engaging in dialogue about it with others—it is in this process that the great intangible benefits of the humanities lie. The use of AI is kaleidoscopic in the sense Julia alludes to in 1984, providing the illusion of variation and newness, but ultimately just a set of mirrors combining and recombining the same things over and over.


Filed under teaching, Uncategorized

Barbie and Utopia

WARNING: In case you’re one of the three people in the world who hasn’t seen the movie yet, this post contains spoilers for Barbie.

Barbie ends with Stereotypical Barbie (Margot Robbie, henceforth just “Barbie”), having become human, being delivered to an appointment of some sort by Gloria (America Ferrera) and Sasha (Ariana Greenblatt), the human mother and daughter who have befriended her. They wish her good luck, and she gets out of the car, entering a nondescript building.

My assumption, at this point, was that she was either starting her first day of a new job or arriving at a job interview. I figured it was decent odds the job would actually be at Mattel™. But after almost two hours of watching Greta Gerwig’s exceptionally smart film, I should have known better; I should have been better prepared for Barbie telling the receptionist that she was there to see her gynecologist.

A simple job or job interview would have been pedestrian; a job at Mattel™ would have been too neat, too much of an obvious endorsement of the naïve utopianism embodied by the Barbie brand. Perhaps more significantly, such an ending would provide a sense of continuity undermining the film’s more shrewdly subversive elements. By contrast, meeting with an OBGYN is a rather more startling note on which to end, though of course it shouldn’t be—for someone suddenly endowed with a functioning reproductive system where none had been before, it makes perfect sense.

And while gynecologists cover a wide range of care not necessarily specifically related to childbirth or conception, the obvious implication is that Barbie is either pregnant or seriously considering it.

[EDIT: Got a lot of pushback on this assertion, which is fair: whatever my qualifier, it’s not necessarily “the” obvious implication that Barbie’s in baby-making mode. I should say rather that it’s a possible implication, or even just go with the indefinite article and say it’s an obvious implication. But yeah, my own lack of any necessary interaction with a gynecologist makes me a wee bit myopic here. Unfortunately, I can’t think of any Ken-based puns about male obtuseness about obstetric care.]

There are a few ways to read this ending. A less charitable reading might see it as a betrayal of the feminist messages otherwise powerfully conveyed in the previous two hours of cinema—seeing it, in other words, as a normative gesture suggesting that, once human, Barbie is compelled to submit to a biologically determined destiny of motherhood.

That is one way to read it. I would disagree, however. Quite emphatically. (To be clear, I have not seen this interpretation anywhere. So far as I know, it’s a straw man of my own creation.) Another tack would be to see Barbie’s prospective procreation as consonant with the way the film explores the theme of mothers and daughters: the characters of Gloria and Sasha comprise the story’s moral and emotional core, with Barbie and her various other iterations providing a comic but poignant symbolic counterpoint to the fraught and complex question(s) of how to be a woman in the world. Barbie’s transformation into a human, after all, happens when she meets her creator and symbolic mother Ruth Handler (Rhea Perlman), whose last name she assumes.

There’s a lot going on in this movie, and I’ll be getting to a fraction of it in this post. I would never have imagined a film about Barbie, which was made under the auspices of Mattel™, would or could contain multitudes. But here we are. A friend and colleague of mine posted to Facebook that he’d never have imagined that he’d emerge from the theatre thinking about all the ways he’s now planning to reference Barbie in his literary theory class this fall. I had an identical reaction: given that I’m teaching an introduction to popular culture this fall and a seminar on postmodernism in the winter, there’s going to be a lot of Barbie popping up in my lectures.

I might write posts along those lines in the future, but at the moment it’s my upcoming utopias/dystopias course with which the film is really resonating. (I actually just emailed all my currently enrolled students that their summer homework is to go see Barbie. Given that there’s a reasonably good chance most of them already have, it’s not really much of an ask.)

This will be the second time I’ve taught the course, though it will be the first time I’ve done it in a physical classroom. I last taught it in the winter term of 2021, while we were all still on lockdown and all teaching was being done remotely (and yes, teaching a course on utopias and dystopias in literature during a global pandemic is a bit on the nose—or it would have been if my other class that term hadn’t been a graduate seminar on post-apocalyptic literature. I suppose you could say I was leaning in.) What became immediately apparent then, and what I’m working through again now, is the fundamental asymmetry one finds in utopias vs. dystopias. Which is to say: while we don’t lack for utopian literature stretching back to Plato’s Republic, two things leap out: (1) aside from Plato, Thomas More’s genre-defining Utopia, and such novels as Erewhon and Herland, there aren’t many titles that will be familiar; (2) the volume of utopian literature is dwarfed by an order of magnitude by dystopias. In fact, the entirety of the list of utopian works on Wikipedia is matched by the YA dystopian fiction of the past twenty years alone.

Which, to be fair, is hardly surprising. Perfect, idyllic societies don’t tend to be fodder for gripping narratives; by contrast, everything going to shit, where everyone either needs to grab a shotgun to kill zombies or flee from omniscient authoritarians, makes for exciting reading. Hence dystopia’s claim to the lion’s share of the market.

But that asymmetry of market share is balanced conceptually, by which I mean that where dystopia is straightforward and easy to understand, utopia is far more complex and fraught. It is one of those concepts that fractures and expands and grows more elusive the more you examine it. It is both an impulse and a goal, a collective ideal and profoundly idiosyncratic. There comes a point where it can become simply too broad to be a tenable concept, given that it can be applied to any and all endeavours seeking to improve the human condition. Not least of which are works of creative imagination like filmmaking or fiction-writing: Northrop Frye characterized literature as “collective utopian dreaming,” in which even the bleakest and most nihilistic dystopian ideation is, in the act of its creation, a utopian gesture.

Or to put my point more succinctly: utopian thought is a necessary precondition for dystopian dreaming.

But what does this have to do with Barbie? You might well ask.

Or perhaps not: the utopian elements of Barbie are quite straightforward. Or they start out that way, at any rate. Barbie lives in Barbieland with all the other Barbies who have been iterations of the original doll: Barbies of every race and ethnicity, Barbies of every body type, Barbies of every profession. Barbie is the embodiment of capable, competent, accomplished womanhood. This enviable state of being is of course reflective of Barbie’s evolution, as she kept pace with societal change—or, more accurately, as feminist inroads in mainstream culture made it profitable for Mattel™ to create a far more diverse range of Barbies.

Barbieland as presented in the film, it is immediately apparent, is an idealized space of play. It is where Barbie and all the other Barbies exist in the imaginations of the little girls for whom they’re made.[1] It is also, crucially, a space of stasis: Gerwig communicates as much in our introduction to Barbieland, which features Barbie going through her routine of rising, dressing, “eating,” and driving around her sun-drenched pink environs in which all the different Barbies live their similarly perfect Barbie lives.

The discordant note entering this perfect harmony is change, which seeps in from the Real World. Barbie suddenly has thoughts of death, courtesy of her real-world owner—whom we learn isn’t the daughter Sasha but the mother Gloria, who has taken to playing with the Barbie doll out of a sort of sad nostalgia as Sasha grows out of her childhood innocence into the fiercely cynical and sarcastic adolescent intelligence that all adults fear. Thoughts of mortality precipitate change for Barbie: her feet, previously perfectly shaped to her high heels, flatten out; her breakfast “milk” goes sour; and, horror of horrors, she develops a scintilla of cellulite.

On learning that these changes are the result of Real World problems intruding on the Barbieland reality, Barbie is puzzled. We fixed the Real World, she says—the proliferation of Barbies of all races and professions showed that women can be and do anything, and thus resolved all those pesky issues.

This moment, which seems at first glance a comic expression of Barbie’s naivety, is key to the film’s utopian critique. For one thing, it skewers the naïve utopianism—the idea that making the Barbie line inclusive and aspirational substantively effects women’s empowerment—that animates the Barbie brand.

At the same time however it more subtly recognizes the utopian impulse of play and imagination. The point isn’t that an inspirational doll can effect systemic change, but that it opens a conceptual space in which to do so imaginatively. Fredric Jameson refined Northrop Frye’s assertion about art and literature as collective utopian dreaming, stating that it functions to resolve—symbolically and imaginatively—unresolvable real-world contradictions. Gerwig’s film is very aware of its corporate and consumerist framework; no amount of irony or satire changes the fact that Mattel™ signed off on it, and that, for all its subversive tweaks, it still functions as an extended advertisement for a doll that started as an idealization of white, blonde femininity (and no matter how lacerating Sasha’s speech was about Barbie’s pernicious effects on women’s body images, she’s still wearing a pink dress by the end). These, indeed, are the contradictions at the heart of Barbie; Gerwig, to her great credit, makes no attempt to resolve them in a facile manner or otherwise paper over them. As I said at the outset, Barbie ending the movie working for Mattel™ would have been just such a facile resolution.

Instead, Barbie contrasts the brittle fragility of naïve utopianism with the more complex utopian impulse: Gloria’s nostalgic play with the Barbie doll, which precipitates the action, proceeds from sadness and loss. Perhaps this is why it intrudes on Barbie’s perfect world—it is not the sort of play that imagines an ideal future, but the sort that mourns the passing of an ephemeral past. One imagines however that for Gloria it is, for lack of a better word, therapeutic. It symbolically resolves the contradiction of what she wishes for her daughter’s future and what she misses from her past.

Nor is there any delusion on Gloria’s part that she can freeze that past in time. When Barbie later tells her that she just didn’t want anything to change, America Ferrera’s expression—sad, fond, wistful—and her intonation of “Oh, honey” articulate the film’s thematic crux. The naïve utopianism of Barbieland is naïve specifically because it imagines its stasis to be the ideal. The pernicious element in utopian thought lies in seeing it as an achievable end and that end as eternal and unchanging. Complicating the distinction between utopia and dystopia is how each contains the seed of the other. How many dystopian narratives begin in what initially seems like a perfect society? How many dystopian narratives are thinly-veiled fantasies about collapsing our arid and trivial modernity and replacing it with something authentic and primal?

It makes sense that Barbieland will continue on, restored from the Kens’ abortive experiment with patriarchy, continuing to be a space of fantasy and utopia and populated with whatever new Barbies Mattel™ creates. That Barbie herself leaves it behind to be human makes narrative sense—she’s not about to be satisfied with her old life—but also thematic sense. The film establishes Barbieland’s utopianism not just as an imagined space, but a space of creation in which fantasy takes physical form. The ending in this respect allegorizes growth and change. Possibly Barbie will have a child of her own, possibly not, but one way or another she will age and die, with each day being different from the previous.

KENDNOTES

1. “But what about Ken?” There’s a whole lot that could be said about the way the film depicts Ken (Ryan Gosling et al.), but the principal point is that Ken was created to be ancillary to Barbie, and therefore that is what he is in Barbieland. He was always a concession to heteronormativity, a necessary male partner, but one who could never (a) overshadow Barbie, (b) be anything but bland and asexual, and (c) have anything approaching a real personality. Ken’s existential ennui proceeds from this indeterminate status that doesn’t allow him to have any real purpose and precludes any possibility that he may consummate his “relationship” with Barbie.

As our narrator (Helen Mirren) tells us in the opening sequence, Ken exists solely for Barbie; his only purpose is to be noticed by Barbie. This, of course, along with Ken’s ruinous (and hilarious) importation of patriarchy into Barbieland in the film’s second half, has predictably gotten under the skin of the usual suspects in the culture wars. For all their Sturm und Drang (e.g. Ben Shapiro burning a stack of Barbies, dudebros losing their shit over Justin Trudeau and his son posing in pink at the theatre), I suspect that much of the angst proceeds from indignation over Ken’s relegation, seeing it as the film’s wholesale dismissal of men as useless rather than an astute observation about Ken’s literal role in the Barbie mythos.


Filed under Uncategorized

Humans of the University

I’m on strike. My faculty union at Memorial University, after fourteen months of frustrating and fruitless negotiations with an utterly recalcitrant administration, called a strike vote. Ninety-three percent of members voted, of whom ninety percent (myself included) called for a strike. And so now we are in our second week of walking the picket lines.

I’ve never been on strike before, so this is a new and interesting experience for me. I have no idea how it compares to other such job actions, but I can confidently make two observations that may seem to contradict each other: one, everyone is desperate to end this in a satisfactory manner and get back into the classroom; two, everyone is having a blast.

To emphasize the second observation, let’s keep in mind that this strike is happening in February in Newfoundland, which means the weather has ranged from bad to shitty—at best, inoffensive gloom, at worst sleeting rain and blizzards. After our first day I invested in thermal underwear, good mittens, snow pants, and several pairs of thick socks. But once so fortified, walking the picket line for two and a half hours a day has become something I look forward to, because it has meant spending time with my colleagues, who are to a person smart, funny, compassionate, and profoundly, inspiringly dedicated to their teaching and research. Those two and a half hours fly past as we chat, joke, talk about our research and writing projects, and—most importantly, perhaps—have very intensive discussions about the strike itself and the broader issues at stake.


That being said, none of this is a lark. Everyone is concerned for our students; we worry incessantly about the adverse effects this might have on their term; and we worry about the future of the university—both our specific institution and the “university” more generally. What is most heartening and keeps our hope and energy up is that our students are firmly behind us. Every day they come out to the line with signs of their own, often bearing coffee and donuts, shouting and singing their solidarity and voicing the same ire at our current tone-deaf and bafflingly obtuse upper administration. I have little doubt that this strike would sputter and die if we found our students foursquare against us; even just ten years ago, there would likely have been, at the very least, ambivalence and a more pervasive skepticism about tenured professors making a comfortable living demanding more.

To my mind, the signal shift in the present moment is exemplified by our administration’s consistent failure to frame the strike in those very terms. By the old rules of the game, it should be the easiest thing in the world to vilify the striking faculty as a bunch of sheltered, tenured Sunshine List elites making unreasonable demands in a time of economic straits. They have certainly tried, but it seems for once that people—both our students and the public more generally—aren’t buying it. The administration has attempted to make this all about professors asking for more money, but for once the more complex argument is finding a receptive audience. It’s not the salaries of tenured professors that have people’s attention, but the pittance paid to precariously-employed contractual professors on one hand, and the $450,000 salary of the university president on the other.1 It’s not the putative ivory-towered academics understood to be out of touch, but the university’s managerial class—who for the duration of the strike thus far have issued occasional and increasingly petulant messages making verifiably false claims, refused to accommodate students uncomfortable with crossing picket lines, and forbidden administrative staff and per-course instructors from joining faculty on the picket line on breaks and lunch hours. It’s not so much about professorial compensation as it is about collegial governance and faculty having more of a say in the university’s future.

The fact that these knottier, more complex issues seem to be eclipsing the easier-to-understand snooty-professors-want-moar caricature is heartening; in my more hopeful moments I think it signals a shift, moving our cultural center of gravity away from the neoliberal dominance of the past few decades to something more humane and empathetic. I’m reasonably convinced that the experience of the pandemic is at the root of this apparent shift: we have a generation of students who endured two years of remote learning, who found their professors to be sympathetic people understanding of their travails and saw them also struggle to do their best in bad circumstances. All of which unfolded in a larger societal context in which prior verities about work and recompense came into question: the category of “essential worker” extended to people working minimum wage jobs in grocery stores; people fortunate enough to be able to work remotely realized they could do the same job in half the time while wearing pyjamas; quality of life became a more pronounced concern, something made plain by the general reluctance of people to return to shitty jobs simply for the sake of having a job; meanwhile, the problem of wealth disparity became ever more glaring as the wealthiest sectors did not share the pain but grew even wealthier.

Much of this is difficult, if not impossible, to quantify. Hence, I should be cautious and note that what I’m describing is less an objective, empirical reality than a vibe. But it is a profoundly powerful vibe that currently thrums through the energy on our pickets. And, well, I’m a humanities professor: qualitatively considering and analyzing vibes is more or less my stock in trade. To put it another way, I work in intangibles.2 In the context of a corporatized university whose administrative class has become increasingly preoccupied with “outcomes” and “finding efficiencies,” this has meant fighting a protracted rearguard action against a pervasive attitude (epitomized by but not limited to university administrators) to which intangibles are anathema.

I’ve devoted a lot of thought over my career as an academic to the question of how to argue for the value of intangibles. Walking the picket lines with my brilliant colleagues and talking with the many, many students who come out to support us has made one thing clear to me: if I’m looking for a concrete manifestation of this intangible value, it’s here, in the human beings who comprise the university.


The most common refrain among the students articulates this sensibility: professors’ teaching conditions, they say, are our learning conditions. And in the end, it is the classroom that is the most fundamental university space, and the students’ experience that is—or should be—the central focus of the university project. Because if not, then what are we doing? Professors with time and resources to do research bring that depth and breadth of thought to the classroom; perhaps more importantly, contractual and per-course instructors who aren’t run ragged with massive teaching loads, under constant financial stress, and who have a reasonable chance at converting their precarious positions into full-time careers, are going to be far more effective in the classroom (the fact that so many of them are exemplary educators now speaks to an inhuman level of dedication).

As I write this, the faculty union and the university bargaining team are back at the table. I hope they resolve this satisfactorily so we can resume our real work. But since before this started, it has felt as if the administration is speaking a language with no meaning for the rest of us. And if that continues, I and my colleagues are in for the long haul.

#FairDealAtMUN

NOTES

1. Memorial’s current president Vianne Timmons is paid $450,000 as a base salary, along with an $18,000 housing allowance, $1,000 monthly vehicle allowance, a $25,000 research allowance, and the standard travel perks afforded her position (which most recently included travel to Monaco for a conference of—wait for it!—Arctic university administrators). The process for “finding” and recruiting Timmons cost the university $150,000. A CBC report about negative responses to Timmons’ lavish compensation quoted someone familiar with her hiring process as saying “Only a handful of people are qualified to lead a university like MUN, and finding that person takes time and money.” I think it’s safe to say, especially given Timmons’ utter lack of public statements about the strike so far—given that a university president’s central task is presenting a public face of the institution—that we’re not getting our money’s worth, and that the remarkable solidarity on display is at least partly a backlash against the assumptions quoted above.

It also needs to be emphasized that this state of affairs is pervasive across academe, something usefully discussed by Amir Barnea in the Toronto Star.

2. Again, I am a humanities professor, so my perspective is ineluctably informed and shaped by that context and training. But I should note that the intangible value of a university education—the intrinsic value of the university experience—transcends discipline. Whether your degree is in philosophy, chemistry, or engineering, if your sole metric of value is your ultimate salary, you’ve sadly missed the larger point. In the end, as I conclude above, it is the human dimension that defines the university.

1 Comment

Filed under Uncategorized

The Rig, Lovecraft, The Kraken Wakes, and Our “collective failure of imagination”

Iain Glen as Captain Magnus MacMillan


WARNING: SPOILERS AHEAD

The Rig is a new British series on Amazon Prime that I watched for two reasons: one, it stars Iain Glen; two, the trailer teased it as a Lovecraftian horror set on an oil rig in which something from the deeps makes its presence known in increasingly disturbing and threatening ways.


So OBVIOUSLY this was something I needed to watch, both out of personal interest and, as someone who has now taught three classes on H.P. Lovecraft and weird fiction, out of professional obligation. And it was … good. It was obviously done on a budget, and there were moments of didacticism that made it feel more like a Canadian than a British show,1 as well as some generally clunky writing, but it was interesting and definitely worth watching. What makes it worth posting about is that it embodies a number of themes and tropes representative of the ongoing evolution of the New Weird, which are what I want to talk about here.

The setup is quite simple: the crew of an oil rig off the Scottish coast, the Kinloch Bravo, are looking forward to heading home as their rotation ends. A crisis on another platform diverts the helicopters, however, so everybody has to sit tight until it is resolved. But then a tremor shakes the rig, a thick, spooky fog rolls in, and all communication, from internet to radio, goes down. Tempers, already frayed from the helicopters’ diversion, erode further. A crewmember named Baz goes aloft in an attempt to fix the radio transmitter; he falls and is badly injured. His injuries, which should have killed him, start to heal on their own. At the end of the first episode he staggers out to where the crew is assembled on the helicopter deck to warn, “It’s coming!” Just before his appearance, the crew notices that the fog is partly composed of ash.

And that’s the first episode. One of the things I quite liked about The Rig is that it settles in for a slow burn over its six episodes. Though punctuated here and there by moments of shock or surprise, it’s mostly about two things: the personalities of the crew and their (often fraught) relationships, and the gradual figuring-out of just what is going on. And if both of these components tend toward overstatement and occasional hamfisted exposition, the story is nevertheless compelling. In short, the drilling of the undersea oil fields has released or awakened an ancient life form, bacterial in nature, which lives in the ash that falls out of the mist. When a human is infected by way of a cut or simple excessive exposure to the falling ash, the organism takes up residence and makes the host more amenable to it by fixing injuries and expelling impurities. For Baz, this means he heals quickly; it also means the gold fillings in his teeth fall out, as the organism treats them as foreign objects. So, on balance, not bad for Baz—but another crewmember doesn’t fare so well, as his history of addiction and many tattoos make the process fatal for him. (The scene in which he is about to get into the shower and suddenly finds all the ink in his skin running is a great uncanny moment.)

What’s most interesting in The Rig, and what makes it worth discussing in some detail, is that the “invader” isn’t necessarily monstrous and isn’t overtly malevolent. The agony Baz experiences, we come to understand, isn’t torture or assault but the entity trying to communicate. The ultimate climactic crisis isn’t about whether our heroes can defeat and/or destroy the invading force, it’s whether they should. The problem posed isn’t how to repel an invader, but how two intelligent species can communicate and whether they can coexist. In later episodes, Mark Addy shows up as a company man named Coake2 who has an explicitly instrumentalist and zero-sum response to such questions: as seen in the trailer, he spits “Nature isn’t a balance, it’s a war!” Later, he informs one of the crew that “[the corporation] Pictor’s business is resource management! We find them, we use them up, we move on! And that includes human resources.” He boils it down: “We’re useful or not. We have value or not. They’re coming for me because I have value. Anyone who doesn’t gets left behind.” He delivers this rant as, against the explicit orders of the rig’s captain Magnus MacMillan (Glen), he prepares to put into action a plan to destroy the entity.

This confrontation, coming as it does in the final episode, clarifies lines of conflict that had been usefully muddled and uncertain to this point. Kinloch Bravo is under threat, but that threat has been ambiguous. As the nature of the entity becomes clearer, the nature of the danger it poses becomes less so. If there is a consistent bogeyman throughout, it’s the company itself, a faceless corporate entity regarded with a sort of low-grade antipathy by everyone. Rumours that the platform is set to be decommissioned—rumours later confirmed—circulate, with people’s anxiety about future employment in tension with the understanding of the pernicious impact of their industry on the environment. As one character observes, a long life of gainful employment is meaningless if, in the end, the sky is on fire.

The ambivalence of the crew (or some of them, anyway) toward their industry squares with their ambivalence toward their company: Coake’s rant is in some senses merely an explicit confirmation of the more diffuse sense pervading the crew of their expendability—that they, like the oil they drill, are resources to be extracted. The character of Hutton (Owen Teale) anticipates Coake’s sentiments with an embittered speech in the final episode: “I used to think we were the steel, holding it all together,” he tells Magnus, “even if the rest of the world didn’t see it. But now I know we’re the well. Because every trip, every person that gets chewed up, every chopper that goes down, it just takes a bit more, and a bit more … till it hollows you out.”

Hence, the emergence of the entity—the “Ancestor,” as Rose (Emily Hampshire) comes to call it—is threatening not because it offers destruction, but because it represents an alternative way of thinking and being. It is a collective organism; the humans it “infects” can communicate with it, after a fashion, and with each other. Its threat to the lives of the rig workers and people more generally is commensurate with the threat it perceives from them. It is not itself malevolent or casually destructive.

Its ancient provenance and apparent immortality—Rose establishes that it measures time on a geological scale—as well as its capacity to infect people, put it very firmly in the Lovecraftian tradition of cosmic horror. However, the wrinkle that it potentially poses no existential threat is a significant departure from the standard conventions of the genre. To be sure, there is a definite Lovecraft vibe in The Rig in the frequent refrains about how we know less about the ocean depths than the surface of the moon, or in Alwynn’s pithy observation (featured in the trailer above) that “If we keep punching holes in the earth, eventually it’s going to punch back.” The implicit sentiment that the madcap drive for oil exploration in the name of profits will take us into dangerous territory is not dissimilar from Lovecraft’s opening paragraph of “The Call of Cthulhu”:

We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.

“Cosmic horror” of the Lovecraftian variety is built around existential dread arising from the realization of humanity’s insignificance in the face of the infinitude of vast, cosmic entities. The “old gods” like Cthulhu and Dagon that populate Lovecraft’s stories find echoes in the “Ancestor” of The Rig, but the way in which it infects certain crewmembers is more in line with “The Shadow Over Innsmouth,” Lovecraft’s story in which he grafts his Cthulhu mythos onto an allegory of his horror of miscegenation. (For those unfamiliar with Lovecraft, an important bit of context is that he was terribly racist). The story’s narrator is touring historically interesting towns in Massachusetts. As part of his itinerary, he arrives in Innsmouth—a run-down old fishing community with some interesting architecture, an odd historical relationship to the reef just off its shores, and people who have a vaguely fish-like aspect about them. TL;DR: having long ago made a Faustian deal with the Old God who inhabits the reef, the people of the town have become fish-human hybrids. The narrator discovers that he is descended from Innsmouth stock and will inevitably be thus transformed.

I cite “The Shadow Over Innsmouth” because it set a standard in weird fiction rooted in the horror of this revolting otherness, usually framed in slimy cephalopodic terms: the tentacle-faced god Cthulhu being the apotheosis.

He seems nice.


What is so fascinating in weird fiction is less the lugubrious racism of Lovecraft than the ways in which his idiom has been adapted and transformed by such contemporary authors as China Miéville, Jeff VanderMeer, Charlie Jane Anders, and Annalee Newitz (among many others). One key shift in the “new weird” is the quasi-erasure of Lovecraft’s instinctive antipathy to the uncanny Other. What surfaces instead (so to speak) is a more complex interrogation of that otherness and the consideration that (a) it isn’t necessarily evil or destructive; (b) it might offer alternative modes of thinking and being; and (c) perhaps it is the next stage of evolution. The Rig’s principal literary allusion is thus not Lovecraft, but an author whose apocalyptic imagination was somewhat more nuanced.

About five minutes into the first episode (four minutes and thirty seconds, to be exact) we see the character Alwynn (Mark Bonnar)3—the calm sage wisdom archetype—reading John Wyndham’s novel The Kraken Wakes.

Mark Bonnar as Alwynn and his not at all significant choice of reading.


Wyndham’s novel, published in 1953, is about an invasion by an alien species that can only live under conditions of enormous pressure, and so arrives as a series of mysterious fireballs that land in the deepest parts of the ocean. At first the fireballs are thought to be just some weird cosmic phenomenon, but it slowly becomes apparent that the Earth’s deeps have been colonized by an intelligent species. For the reader’s benefit, Wyndham provides one of the more obvious voice-of-the-author characters you’re likely to encounter: an outspoken scientist named Alistair Bocker, who is more clear-eyed and prescient than anyone else about the newcomers and the threat they pose, though he is dismissed by most for the better part of the novel as a crank. The narrator and his wife, a pair of journalists, befriend Bocker and so have a front-row seat to his outlandish theories, which are, of course, all correct.

The Kraken Wakes has had something of a revival in literary study because it squares nicely with recent waves of climate fiction (cli-fi) and ecocritical literary theory. The third phase of the aliens’ war against the landlubbers—after phase one, in which shipping is sunk, and phase two, in which “sea-tanks” come ashore and assault coastal towns and cities—is the melting of the polar ice caps to raise sea levels. This last attack precipitates the collapse of civil society, as scarcity takes hold and nations fall into civil strife and factional warfare. Hence, one can see how it resonates now, seventy years after its publication.4

The Rig, for all of its less subtle tendencies, is a thoughtful contribution to the growing body of creative works thinking through the crisis of the Anthropocene, which is among its other aspects a philosophical and imaginative crisis. Alwynn comments on The Kraken Wakes only twice. When first asked what it’s about, he replies cryptically, “A collective failure of imagination.” The rising sea levels that drive humanity away from the coasts and topple governments are the culmination of this failure: while Wyndham’s Dr. Bocker does eventually opine about the impossibility of co-existence with another intelligent species, he makes clear that the instinctively aggressive response to the submerged interlopers will end badly. His calls to attempt contact are ignored, dismissed, or simply laughed off as naïve. When the first of a series of nuclear weapons is dropped into the deeps, he effectively throws up his hands in disgusted resignation.

That this dynamic is reiterated in The Rig—replete with the allusions to Wyndham—makes for an interesting thematic turn in the contemporary context. The series is by no means a straightforward climate change allegory but is sympathetic to the people whose livelihoods depend on the fossil fuel industry. That these people are, as Hutton makes clear in his speech, as much an extractive resource as the oil itself puts the emphasis on the inhumanity of the industry and the collective failure to imagine alternative economies, alternative ways of being. The Rig makes an interesting, if unlikely, companion piece to Kate Beaton’s recent graphic memoir Ducks: Two Years in the Oil Sands, which chronicles her time working in the oil patch as a means of paying her student loans off. The book’s consonance with The Rig is precisely in this depiction of people as objects of extraction.


The conflict at the heart of The Rig is not between collective and individuated intelligences, but between a collective intelligence and one desperately invested in seeing itself as individuated. The latter state is given to the sort of zero-sum thinking proffered by Coake: you either have value or you don’t, and those without value can be justifiably excised from the equation. Though never stated explicitly (one of the series’ rare moments of such restraint), the collective intelligence of the “Ancestor” is the principal threat to the corporation—itself an ironically amorphous entity, but one reliant on its workers and consumers believing themselves locked in a zero-sum individualistic society for which there is no alternative. Environmental movements have always advocated seeing ourselves as part of a collective entity, since the survival of a species is never down to any single one of its members. Such thinking is anathema to capitalism, which has done its best to convince people that, in Margaret Thatcher’s words, “There is no alternative.” It has indeed become a commonplace observation that it is easier to imagine the end of the world than the end of capitalism.

The second and final time Alwynn comments on The Kraken Wakes is in response to a crewmate’s question, “How’s it working out?” To which he responds, “Not good for us.”

NOTES

1. Weirdly enough, the more earnest didacticism that I always associate with CBC dramas is mostly provided by a Canadian actor. Emily Hampshire plays Rose, a company representative on the platform who seems to operate outside the usual chain of command. At first she appears as though she’ll be one of the bad guys, as she’s a company mouthpiece. But she chooses her side quite emphatically, a choice at least partially rooted in her erstwhile desire to be a paleontologist; this educational background makes her the person best able to grasp what the “Ancestor” is, but also makes her the source of most of the show’s lecture moments, as she provides lessons in geology.

Hampshire most indelibly played Stevie Budd on Schitt’s Creek, which also makes her a weirdly uncanny presence in this dramatic and vaguely apocalyptic setting.

2. Three thoughts. One, The Rig gives us a troika of Game of Thrones alumni: Iain Glen, aka Ser Jorah Mormont, as Magnus, captain of the rig; Owen Teale, who played Ser Alliser Thorne, Jon Snow’s principal antagonist at the Wall—here, as the malcontent Hutton, he’s playing essentially the same character, albeit with a redemptive moment at the end; and of course King Robert Baratheon himself, Mark Addy. Two, Mark Addy has a peculiar, almost nasal quality of voice that is extremely endearing when he plays a good bloke, as he does in The Full Monty, but extremely sinister and grating as the nasty Coake. Three, though we see his name spelled “Coake,” it’s a homophone for “Koch,” and I wondered if the series is deliberately referencing the fossil-fuel billionaire Charles Koch and his late brother David.

3. Mark Bonnar’s Alwynn is the other lecturer-type in the show. He is also so much like an elder version of David Tennant that my partner and I started referring to him as Grandpa Tenth.

4. I’m delighted that John Wyndham is having a revival. I read The Chrysalids in grade nine English and proceeded to tear through his other work—discovering as I did that The Chrysalids is not close to his best. The Kraken Wakes was the one I wanted to read, but it was out of print for years; it was only on finding a copy in a used bookstore that I was able to read it, and it immediately became my favourite. Several years ago, in a second-year SF class, I taught The Day of the Triffids and was struck by its prescience—not the kind of prescience that has made The Kraken Wakes a cli-fi staple, but a kind of genre prescience, given that its basic structure, narratively and thematically, anticipates the zombie apocalypse.

1 Comment

Filed under television, Uncategorized, what I'm watching

Some Christmas thoughts about 2021

There’s a moment that happens on occasion in American football, when a running back, having been handed the ball, breaks through all the defenders and sprints into the open field, with nothing between him and the endzone.

And sometimes, rather than making a touchdown, he gets blindsided by a linebacker he didn’t see coming.

That’s what 2021 has been like, especially these past few months.

I was vaccinated in June, and then again in August. We resumed in-person classes at Memorial University in September. Like many people, I felt a massive sense of relief—the pandemic finally seemed to be heading into the rearview mirror, and things were returning to something resembling normalcy. Except … yeah, not so much. I’m writing this at my parents’ house on Christmas day, but we won’t be properly celebrating the holiday until tomorrow because my niece and nephew were possibly exposed to COVID on a schoolbus, and thus need to quarantine for another day. I will myself have to quarantine for five days on returning to St. John’s.

We should be in the clear, running to the endzone with open field, but we keep getting tackled.

This is what I told all my students earlier this term: I wanted them to know it wasn’t just them, it was a general malaise that I was also experiencing. I told them they weren’t alone in feeling anxious, discomfited, or just generally off. I told them that if it came down to it, if they were struggling to get their classwork done, that there was no shame in dropping my course … or any other course they were taking. I told them that I would do whatever I could to help, provided they kept me in the loop. You don’t need to give me the gory details, I said—just tell me you’re having a rough go of it, and that’s enough. We’ll work something out.

I have to imagine there are those who will say I’m catering to the fragility of the snowflake generation in making such accommodations. Anyone having that thought can fuck right off. What saved me from breakdown these past few months was the fact that I had such extraordinary students: as I said to all my classes at the term’s end, they’re the reason why I have hope for the world. As a GenXer, irony is my default setting; for those inheriting the catastrophes of climate change and resurgent fascism, earnestness is their baseline. What would have been radically uncool in the 1990s is quite possibly what will save us all.

On Christmas day, I find myself thinking about all the best parts of the past year. My students are, as they often are, Exhibit A.

I taught a graduate seminar in the winter term that, in spite of the fact that it happened on Zoom, was hands down my favourite class ever. It was called “The Spectre of Catastrophe,” and looked at 21st-century post-apocalyptic narratives. Though the subject was perhaps a little too on the nose for our current situation, it was the most fun I’ve had as a teacher (and that’s saying a lot, as my following comments will make clear). The one advantage classes taught over Zoom have over in-person teaching is the comments students can write as the lecture/discussion unfolds. In this particular class, there was always an ongoing conversation scrolling up the side. I used the ten-minute breaks each hour in our three-hour classes to read through the lively and often hilarious discussions happening in parallel to the actual class.

Getting back into the classroom this past September felt so very, very good. It didn’t hurt that I was teaching a roster of courses that were, for me, an embarrassment of riches: a first-year class called “Imagined Places,” which gave me the (gleeful) chance to teach Tolkien (The Hobbit), Ursula K. Le Guin (A Wizard of Earthsea), Sir Terry Pratchett (Small Gods), Neil Gaiman (The Ocean at the End of the Lane), and Jo Walton (The Just City) (for the record, everybody needs to read Jo Walton). I also taught, for the third time, “Reading The Lord of the Rings.” I had five, count ‘em, FIVE students in LOTR who were also in my fourth-year seminar “The American Weird: Lovecraft, Race, and Genre as Metaphor.” (I always measure my success as an educator by the number of students who take multiple classes with me. I assume it means I’m doing something right, though it’s also possible they’ve just got my number and know what it takes to get a decent grade.) Given that the Tolkien and Lovecraft courses were back-to-back, I had something like an entourage following me from the chemistry building in which I taught LOTR to the Arts building that was the home of the weird.

I called my Lovecraft students “weirdos,” because, well, obviously. It only offended them the first time I called them that.

Even given the fact that I was teaching a dream slate of classes, this fall semester was hard. I am congenitally incapable of asking for help, or for that matter recognizing that I need help until after the fact; these past few days of decompressing at my parents’ home have been invaluable in offering relaxation and reflection. I have also realized I need to find my way to a therapist in 2022.

But the moral of the story is that the students I worked with this past year kept me sane, and gave me hope. So on this day of prayer (even for those as atheistic as myself) I am grateful for you all. And given how many of you have signed up for yet more Lockett pontification next term, I can only assume I’m doing something right.

Or you’ve got my number. Either way, it’s all good.

1 Comment

Filed under Uncategorized

History, Memory, and Forgetting Part 1: Deconstructing History in The Underground Railroad

“Yesterday, when asked about reparations, Senate Majority Leader Mitch McConnell offered a familiar reply: America should not be held liable for something that happened 150 years ago, since none of us currently alive are responsible … This rebuttal proffers a strange theory of governance, that American accounts are somehow bound by the lifetime of its generations … But we are American citizens, and thus bound to a collective enterprise that extends beyond our individual and personal reach.”

—Ta-Nehisi Coates, Congressional Hearing on Reparations, 20 June 2019
Thuso Mbedu as Cora in The Underground Railroad

I’ve been slowly working my way through Barry Jenkins’ ten-episode adaptation of Colson Whitehead’s novel The Underground Railroad. I’ve been a huge fan of Whitehead’s fiction ever since I got pointed towards his debut novel The Intuitionist; I read The Underground Railroad when it was still in hardcover, and I’ve included it twice in classes I’ve taught. When I saw it was being adapted to television by the virtuoso director of Moonlight and If Beale Street Could Talk, I knew this was a series I wanted to watch.

I was also wary—not because I was concerned about the series keeping faith with the novel, but because I knew it would make for difficult viewing. Whatever liberties Whitehead takes with history (as I discuss below), he is unsparing with the brutal historical realities of slavery and the casual cruelty and violence visited on slaves. It is often difficult enough to read such scenes, but having them depicted on screen—seeing cruelty and torture made explicit in audio and visual—can often be more difficult to watch.

For this reason, for myself at least, the series is the opposite of bingeable. After each episode, I need to digest the story and the visuals and think.

The Underground Railroad  focuses on the story of Cora (Thuso Mbedu), a teenage girl enslaved on a Georgia plantation whose mother had escaped when she was young and was never caught, leaving Cora behind. Another slave named Caesar (Aaron Pierre) convinces her to flee with him. Though reluctant at first, circumstances—the kind of circumstances which make the show difficult viewing—convince her to go. She and Caesar and another slave named Lovey (who had seen them go and, to their dismay, tagged along) are waylaid by slave-catchers. Though Lovey is recaptured, Cora and Caesar escape, but in the process one of the slave-catchers is killed. They make it to a node of the Underground Railroad and are sheltered by a white abolitionist with whom Caesar had been in contact. He then takes them underground to the “station,” where they wait for the subterranean train that will take them out of Georgia.

Because that is the principal conceit of The Underground Railroad: that the rail system spiriting slaves away is not metaphorical, but literal, running through underground tunnels linking the states.

More than a few reviews of the series, and also of the novel on which it is based, have referred to the story as “magical realism.” This is an inaccurate characterization. Magical realism is a mode of narrative in which one group or culture’s reality collides with another’s, and what seems entirely quotidian to one is perceived as magical by the other. That’s not what Colson Whitehead is doing, and, however lyrical, ethereal, and occasionally dreamlike Barry Jenkins’ visual rendering is, it’s not what the series is doing either. In truth, the literalization of the underground railroad is the least of Whitehead’s historical tweaks: The Underground Railroad is not magical realism, but nor is it alternative history (a genre that usually relies on a bifurcation in history’s progression, such as having Nazi sympathizer Charles Lindbergh win the presidency in 1940 in Philip Roth’s The Plot Against America). The Georgia plantation on which The Underground Railroad begins is familiar enough territory, not discernibly different from similar depictions in Uncle Tom’s Cabin or 12 Years a Slave. But then, as Cora journeys from state to state, each state embodies a peculiar distillation of racism. Cora and Caesar first make it to South Carolina, which appears at first glance to be an enlightened and indeed almost utopian place: there is no slavery, and the white population seems dedicated to uplifting freed Blacks, from providing education and employment to scrupulous health care to good food and lodging. Soon, however, the paternalistic dimension of this altruism becomes more glaring, and Cora and Caesar realize that the free Blacks of South Carolina are being sterilized and unwittingly used in medical experimentation.

In North Carolina, by contrast, Blacks have been banished, and any found within the state borders are summarily executed. The white people of North Carolina declared slavery a blight—because it disenfranchised white workers. Since abolishing slavery and banishing or killing all of the Black people, the state has made the ostensible purity of whiteness a religious fetish. Cora spends her time in North Carolina huddled in the attic of a reluctant abolitionist and his even more reluctant wife in an episode that cannot help but allude to Anne Frank.

Whitehead’s vision—stunningly rendered in Jenkins’ adaptation—is less an alternative history than a deconstructive one. As Scott Woods argues, The Underground Railroad is not a history lesson, but a mirror, with none of “the finger-wagging of previous attempts to teach the horrors of slavery to mainstream audiences.” I think Woods is being polite here, using “mainstream” as a euphemism for “white,” and he tactfully does not observe that effectively all such finger-wagging attempts (cinematically, at any rate) have tended to come from white directors and feature white saviour protagonists to make liberal white audiences feel better about themselves.

There are no white saviours in The Underground Railroad; there is, in fact, very little in the way of salvation of any sort, just moments of relative safety. As I still have a few episodes to go, I can’t say how the series ends; the novel, however, ends ambivalently, with Cora having defeated the dogged slave-catcher who has been pursuing her from the start, but still without a clear sense of where she is going—the liberatory trajectory of the underground railroad is unclear and fraught because of the weight of the experience Cora has accrued over her young life. As she says at one point, “Make you wonder if there ain’t no real place to escape to … only places to run from.” There is no terminus, just endless flight.

When I say The Underground Railroad is a “deconstructive” history, I don’t use the term in the sense developed by Jacques Derrida (or at least, not entirely). Rather, I mean it in the more colloquial sense employed by, for example, chefs when they put a “deconstructed crème brûlée” on the menu, which might be a smear of custard on the plate speared by a shard of roasted sugar and garnished with granitas infused with cinnamon and nutmeg. If the dish is successful, it is because it defamiliarizes a familiar dessert by breaking down its constituent ingredients in such a way as to make the diner appreciate and understand them anew—and, ideally, develop a more nuanced appreciation for a classic crème brûlée.

So—yes, odd analogy. But I’d argue that Whitehead’s novel is deconstructive in much the same manner, by taking a pervasive understanding of racism and the legacy of slavery and breaking it down into some of its constituent parts. In South Carolina, Cora and Caesar experience the perniciousness of white paternalism of the “white man’s burden” variety—the self-important concern for Black “uplift” that is still invested in the conviction of Black culture’s barbarism and inferiority, and which takes from this conviction license to violate Black bodies in the name of “science.”

Thuso Mbedu as Cora and William Jackson Harper as Royal.

Then in North Carolina, we see rendered starkly the assertion that Blacks are by definition not American, essentially seen as the equivalent of an invasive species. This, indeed, was the basis of the notorious 1857 Supreme Court ruling in Dred Scott v. Sandford, which asserted that Black people could not be citizens of the United States. Though that precedent was effectively voided by the Thirteenth and Fourteenth Amendments—which abolished slavery and established citizenship for people of African descent born in America, respectively—the franchise was not effectively extended to Black Americans until Lyndon Johnson signed the Voting Rights Act a century after the end of the Civil War. The delegitimization of Black voters continues: while the current Trumpian incarnation of the G.O.P. tacitly depicts anyone voting Democrat as illegitimate and not a “real American,” in practice, almost all of the legal challenges to the 2020 election result were directed at precincts with large numbers of Black voters.

Later in the novel when Cora finds her way to Indiana to live on a thriving Black-run farm, we see the neighbouring white community’s inability to countenance Black prosperity in such close proximity, especially when Black people flourishing reflects badly on their own failures. The pogrom that follows very specifically evokes the Tulsa Massacre of 1921, when a huge white mob essentially burned down a thriving Black part of town.

What’s important to note here is that Whitehead’s deconstructive process is less about history proper than about our pervasive depictions of history in popular culture, especially by way of fiction, film, and television. Or to be more accurate, it is about the mythologization of certain historical tropes and narratives pertaining to how we understand racism. One of the big reasons why so many (white) people are able to guilelessly[1] suggest that America is not a racist nation, or claim that the election of a Black president proves that the U.S. is post-racial, is because racism has come to be understood as a character flaw rather than a systemic set of overlapping cultural and political practices. Think of the ways in which Hollywood has narrated the arc of American history from slavery to the Civil War to the fight for civil rights, and try to name films that don’t feature virtuous white protagonists versus racist white villains. Glory, Mississippi Burning, Ghosts of Mississippi, A Time to Kill, Green Book, The Help[2]—and this one will make some people bristle—To Kill a Mockingbird. I could go on.

To be clear, I’m not saying these weren’t excellent films, some of which featured nuanced and textured Black characters[3] with considerable agency; but the point, as with all systemic issues, is not the individual examples but the overall patterns. These films and novels flatter white audiences—we’re going to identify with Willem Dafoe’s earnest FBI agent in Mississippi Burning against the Klan-associated sheriff. “That would be me,” we[4] think, without considering how the FBI—at the time the film was set, no less!—was actively working to subvert the civil rights movement and shore up the societal structures subjugating and marginalizing Black Americans. In this framing, racism is a personal choice, and therefore Dafoe’s character is not complicit in J. Edgar Hoover’s.

Gene Hackman and Willem Dafoe in Mississippi Burning (1988).

The white saviour tacitly absolves white audiences of complicity in racist systems in this way, by depicting racism as a failing of the individual. It allows us to indulge in the fantasy that we would ourselves be the white saviour: at whatever point in history we found ourselves, we would be the exception to the rule, resisting societal norms and pressures in order to be non-racists. Possibly the best cinematic rebuke to this fantasy came in 12 Years a Slave,[5] in the form of a liberal-minded plantation owner played by Benedict Cumberbatch, who recognizes the talents and intelligence of Solomon Northup (Chiwetel Ejiofor), a formerly free Black man who has been kidnapped by thugs and illicitly sold into slavery. Cumberbatch’s character looks for a brief time to be Solomon’s saviour, as he enthusiastically takes Solomon’s advice on a construction project over that of his foreman. But when the foreman violently retaliates against Solomon, Cumberbatch’s character cannot stand up to him. In a more typical Hollywood offering, we might have expected the enlightened white man to intervene; instead, he lacks the intestinal fortitude to act in a way that would have brought social disapprobation, and as a “solution” sells Solomon to a man who proves to be cruelly sociopathic.

Arguing for the unlikeliness that most people could step out of the roles shaped for them by social and cultural norms and pressures might seem like an apologia for historical racism—how can we have expected people to behave differently?—but really it’s about the resistance to seeing ourselves enmeshed in contemporary systemic racism.

Saying that that is Whitehead’s key theme would be reductive; there is so much more to the novel that I’m not getting to. There is, to be certain, a place—a hugely important place—for straightforward historical accounts of the realities and finer details of slavery, even the more “finger wagging” versions Scott Woods alludes to. But what both Whitehead’s novel and Barry Jenkins’ adaptation of it offer is a deconstruction of the simplistic binarism of decades of us vs. them, good vs. bad constructions of racism that give cover to all but the most unapologetically racist white people. The current backlash against “critical race theory”—which I’ll talk more about in the third of these three posts—proceeds to a great extent from its insistence on racism not as individual but systemic, as something baked into the American system.

Which, when you think about it, is not the outrageous argument conservatives make it out to be. Not even close: Africans brought to American shores, starting in 1619, were dehumanized, brutalized, subjected to every imaginable violence, and treated as subhuman property for almost 250 years. Their descendants were not given the full franchise as American citizens until the Civil Rights Act and the Voting Rights Act of 1964 and 1965, respectively. Not quite sixty years on from that point, it’s frankly somewhat baffling that anyone, with a straight face, can claim that the U.S. isn’t a racist nation. One of the greatest stumbling blocks to arriving at that understanding is how we’ve personalized racism as an individual failing. It shouldn’t be so difficult to recognize, as a white person, one’s tacit complicity in a long history without having to feel the full weight of disapprobation that the label “racist” has come to connote through the pop cultural mythologization of racism as a simple binary.

NOTES


[1] I want to distinguish here between those who more cynically play the game of racial politics and those who genuinely do not see racism as systemic (granted that there is a grey area in between these groups). The latter are the people for whom having one or more Black friends is proof of their non-racist bona fides, and who honestly believe that racism was resolved by the signing of the Civil Rights Act and definitively abolished by Obama’s election.

[2] The Help, both the novel and the film, is perhaps one of the purest distillations of a white saviour story, packaged in such a way as to flatter and comfort white audiences. Essentially, it is the story of an irrepressible young white woman (with red hair, of course) nicknamed Skeeter (played by Emma Stone) who chafes against the social conventions of 1963 Mississippi and dreams of being a writer and journalist. TL;DR: she ends up telling the stories of the “help”—the Black women working as domestic labour for wealthy families such as her own—publishes them (anonymously of course, though Skeeter is the credited author), and thus gets the traction she needs to leave Mississippi for a writing career.

I can’t get into all of the problems with this narrative in a footnote; hopefully I don’t need to enumerate them. But I will say that one of the key things that irked me about this story, both the novel and the movie, is how it constantly name-checks Harper Lee and To Kill a Mockingbird as Skeeter’s inspiration. Lee might have given us the archetype of the white saviour in the figure of Atticus Finch, but she did it at a moment in time (the novel was published in 1960) when the subject matter was dangerous: Mary Badham, who played Scout, found herself and her family shunned when they returned to Alabama after filming ended, for having participated in a film that espoused civil rights. By contrast, The Help, published in 2009 and released as a film in 2011, is about as safe a parable of racism as can be in the present day—safely set in the most racist of southern states during the civil rights era, with satisfyingly vile racist villains and an endearing, attractive white protagonist whose own story of breaking gender taboos jockeys for pole position with her mission to give voice to “the help.”

[3] Though to be fair, in some instances this had as much to do with virtuoso performances by extremely talented actors, such as Denzel Washington, Morgan Freeman, and Andre Braugher in Glory. And not to heap yet more scorn on The Help, but the best thing that can be said about that film is that it gave Viola Davis and Octavia Spencer the visibility—and thus the future casting opportunities—that their talent deserves.

[4] I.e. white people.

[5] Not coincidentally helmed by a Black director, Steve McQueen (no, not that Steve McQueen, this Steve McQueen).

Leave a comment

Filed under Uncategorized

Summer Blogging and Finding Focus

Why do I write this blog? Well, it certainly isn’t because I have a huge audience—most of my posts top out at 40-60 views, and many garner a lot less than that. Every so often I get signal boosted when one or more people share a post. The most I’ve ever had was when a friend posted a link of one I wrote about The Wire and police militarization to Reddit, and I got somewhere in the neighbourhood of 1500 views.[1] Huge by my standards, minuscule by the internet’s.

Not that I’m complaining. I feel no compulsion to chase clicks, or to do the kind of networking on Twitter that seems increasingly necessary to building an online audience—networking which also entails strategically flattering some audiences and pissing off others. The topics I write about are eclectic and occasional, usually the product of a thought that crosses my mind and turns into a conversation with myself. My posts are frequently long and sometimes rambling, which is also not the best way to attract readers.

Blogging for me has always been something akin to thinking out loud—like writing in a journal, except in a slightly more formal manner, with the knowledge that, however scant my audience is, I’m still theoretically writing for other people, and so my thoughts have to be at least somewhat coherent. And every so often I get a hit of dopamine when someone shares a post or makes a complimentary comment.

I started my first blog when I moved to Newfoundland as a means of giving friends and family a window into my new life here, without subjecting them to the annoyance of periodic mass emails. I posted in An Ontarian in Newfoundland for eight years, from 2005 to 2013, during which time it went from being a digest of my experiences in Newfoundland to something more nebulous, in which I basically posted about whatever was on my mind. I transitioned to this blog with the thought that I would focus it more on professional considerations—using it as a test-space for scholarship I was working on, discussions about academic life, and considerations of things I was reading or watching. I did do that … but then also inevitably fell into the habit of posting about whatever was on my mind, often with long stretches of inactivity that sometimes lasted months.

During the pandemic, this blog has become something akin to self-care. I’ve written more consistently in this past year than I have since starting my first blog (though not nearly as prolifically as I posted in that first year), and it has frequently been a help in organizing what have become increasingly inchoate thoughts while enduring the nadir of Trump’s tenure and the quasi-isolation enforced by the pandemic. I won’t lie: it has been a difficult year, and wearing on my mental health. Sometimes putting a series of sentences together in a logical sequence to share with the world brought some order to the welter that has frequently been my mind.

As we approach the sixth month of the Biden presidency and I look forward to my first vaccination in a week, you’d think there would be a calming of the mental waters. And there has been, something helped by the more frequent good weather and more time spent outside. But even as we look to be emerging from the pandemic, there’s a lot still plaguing my peace of mind, from my dread certainty that we’re looking at the end of American democracy, to the fact that we’re facing huge budget cuts in health care and education here in Newfoundland.

The Venn diagram of the thoughts preoccupying my mind has a lot of overlaps, which contributes to the confusion. There are so many points of connection: the culture war, which irks me with all of its unnuanced (mis)understandings of postmodernism, Marxism, and critical race theory; the sustained attack on the humanities, which proceeds to a large degree from the misperception that it’s all about “woke” indoctrination; the ways in which cruelty has become the raison d’être of the new Right; the legacy of the “peace dividend” of the 1990s, the putative “end of history,” and the legacy of post-9/11 governance leading us to the present impasse; and on a more hopeful note, how a new humanism practiced with humility might be a means to redress some of our current problems.

For about three or four weeks I’ve been spending part of my days scribbling endless notes, trying to bring these inchoate preoccupations into some semblance of order. Reading this, you might think that my best route would be to unplug and refocus; in fact, this has been energizing. It helps that there is significant overlap with a handful of articles I’m working on, about (variously) nostalgia and apocalypse, humanism and pragmatism, the transformations of fantasy as a genre, and the “end of history” as figured in Philip Roth’s The Plot Against America and in contemporary Trumpist constructions of masculinity.

(Yes, that’s a lot. I’m hoping, realistically, to get one completed article out of all that, possibly two).

With all the writing I’ve been doing, it has been unclear—except for the scholarly stuff—how best to present it. I’ve been toying with the idea of writing a short book titled The Idiot’s[2] Guide to Postmodernism, which wouldn’t be an academic text but more of a user manual to the current distortions of the culture wars, with the almost certainly vain idea of reintroducing nuance into the discussion. That would be fun, but in the meantime I think I’ll be breaking it down into a series of blog posts.

Some of the things you can expect to see over the next while:

  • A three-part set of posts (coming shortly) on history, memory, and forgetting.
  • A deep dive into postmodernism—what it was, what it is, and why almost everyone bloviating about it and blaming it for all our current ills has no idea what they’re talking about.
  • A handful of posts about cruelty.
  • “Jung America”—a series of posts drawing a line from the “crisis of masculinity” of the 1990s to the current state of affairs with Trumpism and the likes of Jordan Peterson and Ben Shapiro.
  • At least one discussion about the current state of the humanities in the academy, as well as an apologia arguing why the humanities are as important and relevant now as they have ever been.

Phew. Knowing me, I might get halfway through this list, but we’ll see. Meantime, stay tuned.

NOTES


[1] David Simon also left a complimentary comment on that one. Without a doubt, the highlight of my blogging career.

[2] Specifically, Jordan Peterson, but there are others who could use a primer to get their facts straight.

2 Comments

Filed under blog business, Uncategorized

My Mostly Unscientific Take on UFOs

Over the past year or so, it has seemed as though the shadowy Deep State agencies responsible for covering up the existence of extraterrestrials have thrown up their hands and said, “Yeah. Whatever.”

Perhaps the real-world equivalent of The X-Files’ Smoking Man finally succumbed to lung cancer, and all his subordinates just couldn’t be bothered to do their jobs any more.

Or perhaps the noise of the Trump presidency created the circumstances in which a tacit acknowledgement of numerous UFO sightings wouldn’t seem to be bizarre or world-changing.

One way or another, the rather remarkable number of declassified videos from fighter pilots’ heads-up displays, showing unidentified flying objects of odd shapes and flying capabilities, has evoked an equally remarkable blasé response. It’s as if the past four years of Trump, natural disasters, civil tragedies, and a once-in-a-century (touch wood) pandemic have so eroded our capacity for surprise that, collectively, we seem to be saying, “Aliens? Bring it.” Not even the QAnon hordes, for whom no event or detail is too unrelated to be folded into the grand conspiracy, have seen fit to comment on something that has so long been a favourite subject of conspiracists (“Aliens? But are they pedophile child sex-trafficking aliens?”).

Perhaps we’re all just a bit embarrassed at the prospect of alien contact, like having a posh and sophisticated acquaintance drop by when your place is an utter pigsty. I have to imagine that, even if the aliens are benevolent and peaceful, humanity would be subjected to a stern and humiliating talking-to about how we let our planet get to the state it’s in.

“I’m sorry to have to tell you, sir, that your polar icecaps are below regulation size for a planet of this category, sir.” (Good Omens)

Not to mention that if they landed pretty much anywhere in the U.S., they’d almost certainly get shot at.

And imagine if they’d landed a year ago.

“Take us to your leader!”
“Um … are you sure? Perhaps you should try another country.”
“All right, how do I get to Great Britain?”
“Ooh … no. You really don’t want that.”
“Russia then? China? India? Hungary?”
“Uh, no, no, no, and no.”
“Brazil?”
“A world of nope.”
“Wait–what’s the one with the guy with good hair?”
“Canada. But, yeah … probably don’t want to go there either. Maybe … try Germany?”
“Wasn’t that the Hitler country?”
“They got better.”

You’d think there would be more demand for the U.S. government to say more about these UFO sightings. The thing is, I’m sure that in some sections of the internet, there is a full-throated ongoing yawp all the time for that, but it hasn’t punctured the collective consciousness. And frankly, I don’t care enough to go looking for it.

It is weird, however, considering how we’ve always assumed that the existence of extraterrestrial life would fundamentally change humanity, throwing religious belief into crisis and dramatically transforming our existential outlook. The entire premise of Star Trek’s imagined future is that humanity’s first contact with the Vulcans forced a dramatic reset of our sense of self and others—a newly galactic perspective that rendered all our internecine tribal and cultural squabbles irrelevant, essentially at a stroke resolving Earth’s conflicts.

To be certain, there hasn’t been anything approaching definitive proof of alien life, so any such epiphany or trauma lies only in a possible future. Those who speak with any authority on the matter are always careful to point out that “UFO” is not synonymous with “alien”—these objects are not necessarily otherworldly, just unidentified.

I, for one, am deeply skeptical that these UFOs are of extraterrestrial origin—not because I think it impossible, but because the chances are in fact infinitesimal. In answer to the question of whether I think there’s life on other planets, my answer is an emphatic yes, which I base on the law of large numbers. The Milky Way galaxy, by current estimates, contains somewhere in the neighbourhood of 100 billion planets. Even if only one tenth of one percent of those can sustain life, that’s still 100 million planets—and that in just one of the hundreds of billions of galaxies in the universe.

But then there’s the question of intelligence, and what comprises intelligent life. We have an understandably chauvinistic understanding of intelligence, one largely rooted in the capacity for abstract thought, communication, and inventiveness. We grant that dolphins and whales are intelligent creatures, but have very little means of quantifying that; we learn more and more about the intelligence of cephalopods like octopi, but again: such intelligences are literally alien to our own. The history of imagining alien encounters in SF has framed alien intelligence as akin to our own, just more advanced—developing along the same trajectory until interplanetary travel becomes a possibility. Dolphins might well be, by some metric we haven’t yet envisioned, far more intelligent than us, but they’ll never build a rocket—in part because, well, why would they want to? As Douglas Adams put it in The Hitchhiker’s Guide to the Galaxy, “man had always assumed that he was more intelligent than dolphins because he had achieved so much—the wheel, New York, wars and so on—whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man—for precisely the same reasons.”

To put it another way, to properly imagine space-faring aliens, we have to imagine not so much what circumstances would lead to the development of space travel as how an alien species would arrive at an understanding of the universe that would facilitate the very idea of space travel.

Consider the thought experiment offered by Hans Blumenberg in the introduction to his book The Genesis of the Copernican World. Blumenberg points out that our atmosphere has a perfect density, “just thick enough to enable us to breathe and to prevent us from being burned up by cosmic rays, while, on the other hand, it is not so opaque as to absorb entirely the light of the stars and block any view of the universe.” This happy medium, he observes, is “a fragile balance between the indispensable and the sublime.” The ability to see the stars in the night sky, he says, has shaped humanity’s understanding of itself in relation to the cosmos, from our earliest rudimentary myths and models, to the Ptolemaic system that put us at the center of Creation and gave rise to the medieval music of the spheres, to our present-day forays into astrophysics. We’ve made the stars our oracles, our gods, and our navigational guides, and it was in this last capacity that the failings of the Ptolemaic model inspired a reclusive Polish astronomer named Mikołaj Kopernik, whom we now know as Copernicus.

But what, Blumenberg asks, if our atmosphere were too thick to see the stars? How then would humanity have developed its understanding of its place in the cosmos? And indeed, of our own world—without celestial navigation, how does seafaring evolve? How much longer before we understood that there was a cosmos, or grasped the movement of the earth without the motion of the stars to measure it against? There would always, of course, be the sun, but it was the stars, first and foremost, that inspired the celestial imagination. It is not too difficult to imagine an intelligent alien species inhabiting a world such as ours, with similar capabilities, but without the inspiration of the night sky to propel them from the surface of their planet.[1]

Now think of a planet of intelligent aquatic aliens, or of creatures swimming deep in the dense atmosphere of a gas giant.

Or consider the possibility that our vaunted intelligence is in fact an evolutionary death sentence, and that that is in fact the case for any species such as ourselves—that our development of technology, our proliferation across the globe, and our environmental depredations inevitably outstrip our primate brains’ capacity to reverse the worst effects of our evolution.

Perhaps what we’ve been seeing is evidence of aliens who have mastered faster-than-light or transdimensional travel, but who are biding their time—having learned the dangers of intelligence themselves, they’re waiting to see whether we succeed in not eradicating ourselves with nuclear weapons or environmental catastrophe; perhaps their rule for First Contact is to make certain a species such as Homo sapiens can get its shit together and resolve all the disasters we’ve set in motion. Perhaps their Prime Directive is not to help us, because they’ve learned in the past that unless we can figure it out for our own damned selves, we’ll never learn.

In the words of the late great comedian Bill Hicks, “Please don’t shoot at the aliens. They might be here for me.”

EDIT: Stephanie read this post and complained that I hadn’t worked in Monty Python’s Galaxy Song, so here it is:

NOTES


[1] And indeed, Douglas Adams did imagine such a species in Life, the Universe, and Everything—an alien race on a planet surrounded by a dust cloud, living in utopian peace and harmony in the belief that they are the sum total of creation, until the day a spaceship crash-lands on their planet and shatters their illusion. At which point, having reverse-engineered the spacecraft and flown beyond the dust cloud to behold the splendours of the universe, they decide there’s nothing else for it but to destroy it all.

Leave a comment

Filed under maunderings, Uncategorized

Liz Cheney is as Constant as the Northern Star …

… and I don’t particularly mean that as a compliment.

Literally minutes before he is stabbed to death by a posse of conspiring senators, Shakespeare’s Julius Caesar declares himself to be the lone unshakeable, unmoving, stalwart man among his flip-flopping compatriots. He makes this claim as he arrogantly dismisses the petition of Metellus Cimber, who pleads for the reversal of his brother’s banishment. Cimber’s fellow conspirators echo his plea, prostrating themselves before Caesar, who finally declares in disgust,

I could be well moved if I were as you.
If I could pray to move, prayers would move me.
But I am constant as the northern star,
Of whose true-fixed and resting quality
There is no fellow in the firmament.
The skies are painted with unnumbered sparks.
They are all fire and every one doth shine,
But there’s but one in all doth hold his place.
So in the world. ‘Tis furnished well with men,
And men are flesh and blood, and apprehensive,
Yet in the number I do know but one
That unassailable holds on his rank,
Unshaked of motion. And that I am he.

Caesar mistakes the senators’ begging for weakness, not grasping that they are importuning him as a ploy to get close enough to stab him until it is too late.

Fear not, I’m not comparing Liz Cheney to Julius Caesar. I suppose you could argue that Cheney’s current anti-Trump stance is akin to Caesar’s sanctimonious declaration if you wanted to suggest that it’s more performative than principled. To be clear, I’m not making that argument—not because I don’t see its possible merits, but because I really don’t care.

I come not to praise Liz Cheney, whose political beliefs I find vile; nor do I come to bury her. The latter I’ll leave to her erstwhile comrades, and I confess I will watch the proceedings with a big metaphorical bowl of popcorn in my lap, for I will be a gratified observer no matter what the outcome. If the Trumpists succeed in burying her, well, I’m not about to mourn a torture apologist whose politics have always perfectly aligned with those of her father. If she soldiers on and continues to embarrass Trump’s sycophants by telling the truth, that also works for me.

Either way, I’m not about to offer encomiums for Cheney’s courage. I do think it’s admirable that she’s sticking to her guns, but as Adam Serwer recently pointed out in The Atlantic, “the [GOP’s] rejection of the rule of law is also an extension of a political logic that Cheney herself has cultivated for years.” During Obama’s tenure, she frequently went on Fox News to accuse the president of being sympathetic to jihadists, and just as frequently opined that American Muslims were a national security threat. During her run for a Wyoming Senate seat in 2014, she threw her lesbian sister Mary under the bus with her loud opposition to same-sex marriage, a point on which she stands to the right of her father. And, not to repeat myself, but she remains an enthusiastic advocate of torture. To say nothing of the fact that, up until the January 6th assault on the Capitol, she was a reliable purveyor of the Trump agenda, celebrated then by such current critics as Steve Scalise and Matt Gaetz.

Serwer notes that Cheney’s “political logic”—the logic of the War on Terror—is consonant with that of Trumpism not so much in policy as in spirit: the premise that there’s them and us, and that “The Enemy has no rights, and anyone who imagines otherwise, let alone seeks to uphold them, is also The Enemy.” In the Bush years, this meant the Manichaean opposition between America and Terrorism, such that any ameliorating sentiment about, say, the inequities of American foreign policy meant you were With the Terrorists. In the present moment, the Enemy of the Trumpists is everyone who isn’t wholly on board with Trump. The ongoing promulgation of the Big Lie—that Biden didn’t actually win the election—is a variation on the theme of “the Enemy has no rights,” which is to say that anyone who does not vote for Trump or his people is an illegitimate voter. Serwer writes:

This is the logic of the War on Terror, and also the logic of the party of Trump. As George W. Bush famously put it, “You are either with us or with the terrorists.” You are Real Americans or The Enemy. And if you are The Enemy, you have no rights. As Spencer Ackerman writes in his forthcoming book, Reign of Terror, the politics of endless war inevitably gives way to this authoritarian logic. Cheney now finds herself on the wrong side of a line she spent much of her political career enforcing.

All of which is by way of saying: Liz Cheney has made her bed. The fact that she’s chosen the hill of democracy to die on is a good thing, but this brings us back to my Julius Caesar allusion. The frustration being expressed by her Republican detractors, especially House Minority Leader Kevin McCarthy, is at least partially rational: she’s supposed to be a party leader, and in so vocally rejecting the party line, she’s not doing her actual job. She is being as constant as the Northern Star here, and those of us addicted to following American politics are being treated to a slow-motion assassination on the Senate (well, actually the House) floor.

But it is that constancy that is most telling in this moment. Cheney is anchored in her father’s neoconservative convictions, and in that respect, she’s something of a relic—an echo of the Bush years. As Serwer notes, however, while common wisdom says Trump effectively swept aside the Bush-Cheney legacy in his rise to be the presidential candidate, his candidacy and then presidency only deepened the bellicosity of Bush’s Us v. Them ethos, in which They are always already illegitimate. It’s just now that the Them is anyone opposed to Trump.

In the present moment, I think it’s useful to think of Liz Cheney as an unmoving point in the Republican firmament: to remember that her politics are as toxic and cruel as her father’s, and that there is little to no daylight between them. The fact that she is almost certainly going to lose both her leadership position and, in the next election, a primary to a Trump loyalist is not a sign that she has changed. No: she is as constant as the Northern Star, and the Trump-addled GOP has moved around her. She has not become more virtuous; her party has just become so very much more debased.


Filed under maunderings, The Trump Era, Uncategorized, wingnuttery

A Few Things

What I’m Reading     I first heard of Heather McGhee two or three years ago when she was interviewed on one of the political podcasts I listen to. She was then the president of Demos, a progressive think-tank focused on race, economics, and strategies to strengthen American democracy; I was immediately impressed by how clearly and articulately she broke down the inextricability of race and economic policy, and the ways in which Republicans have successfully sold white voters the idea of government spending as a zero-sum game, in which every dollar that goes to help Black people and minorities is a dollar taken from them, and in which even government programs that help non-wealthy whites are framed as stealing from them to benefit inner-city Blacks. Hence, non-wealthy whites have become reliable Republican voters who vote against their own interests in election after election.

To be clear, this is not a new insight. President Lyndon B. Johnson himself, who signed the Civil Rights Act into law in 1964 and the Voting Rights Act the following year, knew that he was alienating a significant portion of his own party. “Well, we’ve lost the South,” he is reported to have said on signing the civil rights legislation; he also famously articulated the principle on which Richard Nixon would successfully court southern white Democrats: “If you can convince the lowest white man he’s better than the best colored man, he won’t notice you’re picking his pocket. Hell, give him somebody to look down on, and he’ll empty his pockets for you.”

What impressed me about McGhee was how clearly she laid out the historical narrative, as well as how convincingly she argued her central premise: that systemic racism hurts everyone, white people included. I don’t remember which podcast it was on which I originally heard her, but that’s become something of a moot point, as since then she’s been on all the podcasts—especially lately, since her book The Sum of Us: What Racism Costs Everyone and How We Can Prosper Together came out. Since that first podcast I heard, she resigned as Demos’ president and traveled the U.S., speaking to hundreds of experts, activists, historians, and ordinary people. The Sum of Us is the result, and it makes her original argument in an exhaustively detailed and forceful manner. It is an eminently readable book: personal without being subjective, wonky without losing itself in the weeds, and rigorously historical while still relating straightforward stories that persuasively bring home the societal costs of systemic racism. One example she shares in her interviews functions as the book’s central metaphor: starting in the 1920s, the U.S. invested heavily in public projects and infrastructure, among them the construction of public pools. During the Depression, Roosevelt’s Works Progress Administration (WPA) continued this trend, using such community investment to generate jobs. By the 1950s, towns and cities across the country boasted ever-larger and more lavish public pools, which became a point of pride for communities—pools large enough, in some cases, to admit thousands of swimmers.

But such pools were, of course, whites-only. With the advent of desegregation in the mid-to-late 50s, courts decreed that these public pools were legally obliged to admit Blacks. Town and city councils responded swiftly, voting to drain the pools, fill them in with dirt, and seed them over with grass (in Georgia, the Parks and Recreation Department was simply eliminated, and was not resurrected for ten years). Affluent whites did not suffer: there was a concomitant boom in the construction of backyard pools and the establishment of private swimming clubs, which could effect de facto segregation by leaving membership decisions to the discretion of a governing board. But non-wealthy whites were suddenly left without a key option for summer recreation, all because their communities could not countenance sharing a publicly funded pool with their Black fellow citizens. In what is one of the more pernicious elements of systemic racism, McGhee observes, many of the non-wealthy whites who could no longer bring their children to swim in one of these magnificent pools for free probably thought this a fair deal—better to go without than be obliged to share with people you’d been brought up to consider beneath you.

I am at present about halfway through The Sum of Us; look for a longer blog post when I’ve finished it. Meanwhile, I would suggest that this book should be required reading for our present moment.

What I’m Watching     I wrote in my last post about how much I’m enjoying rewatching Battlestar Galactica, but just as Stephanie and I took a hiatus from that show to binge The Mandalorian, so again we’re taking a hiatus to watch The Stand—the recent mini-series adaptation of Stephen King’s mammoth 1978 novel in which 99.9% of the world is wiped out by a weaponized superflu nicknamed “Captain Trips,” and the remaining people of the U.S. gather in two opposing communities. On one side are the forces of good, who have been drawn to Boulder, Colorado by dreams of a 108-year-old Black woman named Mother Abigail. On the other are those drawn by promises of power, licentiousness, and revenge by the evil Randall Flagg, a denim-clad and cowboy-boot-shod demon in human form, who establishes his new society in (of course) Las Vegas.

As I’ve discussed a few times on this blog, last term I taught a fourth-year class on pandemic fiction; I did not include The Stand, in spite of the fact that it’s one of the few actual pandemic novels written prior to the 21st century, mainly because it is far too long (almost 1500 pages) to shoehorn into a semester-long course. Given its significance to the topic, however, I did record a short lecture in which I ran down the key themes and plot points (which you can watch here if you’re so inclined). I first read The Stand in high school, and read it again when King published the unexpurgated version in 1990; what struck me in retrospect, after doing all the preparatory reading for my course, was how King transformed a story about a biological catastrophe into a Manichaean light v. dark, good v. evil cosmic battle royale, with Mother Abigail as God’s surrogate and Randall Flagg as Satan’s proxy. While the novel meditates at length on the nature of civilization and how one pragmatically goes about rebuilding after the apocalypse (with 1500 pages, how could it not?), it is obvious that it’s the metaphysical war that most interested King.

We’re slightly more than halfway through the new adaptation, and quite enjoying it. It was quite badly reviewed, and while I can agree with some of the complaints, it has been on the whole well adapted to the screen, and (mostly) well cast. Alexander Skarsgård is at his menacing best as Randall Flagg; James Marsden is all wry southern charm as Texan Stu Redman; Greg Kinnear plays the professorial Glen Bateman with the right balance of pomposity and insight; Whoopi Goldberg basically plays Mother Abigail as a devoutly Christian Guinan with a head of white dreadlocks. My favourite, however, is Brad William Henke, who plays the mentally disabled Tom Cullen with a guileless, earnest simplicity that avoids stereotypes (those who watched Justified will recognize Henke from season two as Coover Bennett, a similarly mentally delayed character whose disability manifests instead in sociopathic violence).

There is much that is left out, and much that could have been done better, but on the whole it is a pretty satisfying adaptation of an intriguing but flawed novel (“intriguing but flawed” is how I’d characterize most of King’s oeuvre, but I suppose that is to be expected when you churn out an average of two brick-sized novels a year). If you like The Stand, or are just amenable to Stephen King more generally, I’d recommend this series.

What I’m Writing     I recently dusted off an article-in-progress that had been mouldering for a year or two, on zombie apocalypse and celebrity; in a fit of energy I finished it and submitted it to a journal. I now have another that I’m looking to finish, on Emily St. John Mandel’s Station Eleven and nostalgia. Given that I’ll be doing that novel in both of my classes over the next two weeks, it seems an ideal time to return to it. Since it is also about apocalypse, though of the non-zombie variety—and indeed about a civilization-ending pandemic—I’ve been trying to rewrite my introduction to put it in the context of the past year. It’s been slow going, not least because finding the right balance between the personal and the objective can be tricky when your aim is to submit to a scholarly journal. But the overarching argument—that Mandel’s post-apocalyptic world, in which the main characters are actors and musicians travelling between settlements to perform Shakespeare and classical music, expresses a nostalgic desire to return to a pre-postmodern humanism—is, I think, a strong one. I just have to fill out a core section in which I discuss humanism in a more granular way.

(This process will also be useful, as it will give me a lot of lecture material).

On a related subject, I’ve also been working on a new essay on Terry Pratchett and Discworld. I have an article on Pratchett and his campaign for assisted dying coming out soon in a new collection; I’m trying to carry that momentum forward into a handful of essays on Sir Terry, but the one I’ve been focusing on is a reading of the Discworld novels in the context of the philosophy of American Pragmatism and what I’ve been calling the “magical humanism” exhibited in a lot of contemporary fantasy.

As much as I love Sir Terry’s writing, I find it difficult to write about it in a scholarly manner, for the basic reason that I find it difficult to find a focus and not end up running off madly in all directions. The essay I wrote for the collection, which came in way past deadline, needed to be cut from nine thousand words to six thousand (one of the essay’s blind reviewers said something to the effect of “this is obviously a piece of work gesturing to a much larger theory of Pratchett’s fiction,” which was at once gratifying and true). Part of the problem is the iterative nature of the Discworld’s world-building: each of the forty-one novels is a standalone narrative, and with each new installment Sir Terry modified and refined aspects of that world, but also returned to the same themes and preoccupations, in such a way that it is close to impossible to discuss the political and philosophical preoccupations of a given novel without being obliged to reference a dozen others.

This isn’t the most congenial situation for my intellectual temperament, which at the best of times is digressive and inclined to run down whatever rabbit holes I find, until I realize, several paragraphs on, that I’ve wandered into an entirely different topic. (Devoted readers of this blog will presumably have noticed this.) That being said, however, it is a pleasure to lose myself in this topic … not least because I increasingly see Sir Terry’s humanism as a necessary antidote to our present toxic political moment.


Filed under A Few Things, Uncategorized, what I'm reading, what I'm watching, what I'm working on