Remembering Postmodernism Part One: Contingent Realities

On rereading my introductory post in this series, I realized I forgot to include an important caveat about the particularity of my perspective. To wit: I am, academically speaking, an Americanist—which is to say my principal research and teaching focus is on contemporary American literature and culture. As such, my understanding of everything I will be talking about will be filtered through that particular lens of expertise. Postmodernism is, by definition, a global phenomenon, and as I will discuss in my next post, globalism as we know it is, to a large extent, the product of a world war in which the United States emerged as the hegemon of the Western world.

But however much the U.S. put its stamp on what we call postmodernism and postmodernity, there are many iterations specific to other nations, other cultures, other histories—not least of which is the weird Venn diagram of postmodernism and postcolonialism. Or the equally weird 1980s feedback loop between American and Japanese postmodernism.

But for the sake of keeping things simple, though I’ll be gesturing to other such iterations of the postmodern, I’ll mostly be keeping to my wheelhouse. So please chime in with all the stuff I’m leaving out.

OK, so that being said, let’s recap: my introductory post was basically about the complexity of postmodernism and the fact that there is no one definition, and that there is no unifying thread linking all aspects of postmodernism.

So let me suggest my own theory of the unifying thread linking all aspects of postmodernism.

Very simply: the postmodern condition is one in which our tacit understanding is that language does not reflect reality, but that language creates reality.

Before I go further, let me stipulate that the phrase “tacit understanding” is doing the heavy lifting in that sentence: as I’ll elaborate below, there is a broad range of responses to my suggestion, not the least of which is outright hostility. An instinctive response from certain quarters to the idea that language “creates” reality would be to see it as confirmation that postmodern thought is all about absolute relativism and the denial of objective truth. As physicist and notorious anti-postmodernist Alan Sokal quipped in the 1990s, anyone suggesting that gravity was a social construct was welcome to step out the window of his twenty-first-floor apartment.

Except that neither I nor any “postmodern” thinker of any substance is suggesting that objective reality doesn’t exist, or that it only exists as something conjured from words, any more than the postmodern thinkers Sokal mocked believed we could all float up into the air if we just denied the existence of gravity or deleted the word from the dictionary. I find it’s useful in this discussion to make a distinction between “reality” and “actuality,” in which the latter is the world as it is, and the former is how we make it comprehensible to one another.1 No postmodernist thinker not holding court in a 3am dorm room blue with pot smoke seeks to deny the actuality of the world. We all inhabit bodies that feel pain and experience the sensoria of our immediate environments, and if we choose to step out of a twenty-first-floor window, it almost certainly isn’t to prove a point about social constructionism.

In fact, the word and concept of gravity is as good a starting point as any. The word “gravity” comes to us from the Latin gravitas, which means “seriousness” and was one of the ancient Roman virtues. It also could mean dignity or importance, or the moral rigor required to undertake a task of great importance. When you consult the Oxford English Dictionary, the first three entries are (1) “the quality of being grave”; (2) “Grave, weighty, or serious character or nature; importance, seriousness”; (3) “Weighty dignity; reverend seriousness; serious or solemn conduct or demeanour befitting a ceremony, an office, etc.” Only after wading through those three entries do you get to number four, which breaks down gravity’s meanings “in physical senses,” and only at number five do you arrive at “The attractive force by which all bodies tend to move towards the centre of the earth; the degree of intensity with which a body in any given position is affected by this force, measured by the amount of acceleration produced.”

The mathematical expression of Earth’s gravity is 9.8 m/s², which is your vertical acceleration should you jump out of a plane or Alan Sokal’s apartment window. A postmodernist consideration of gravity would not be skeptical of the fact that things fall, but rather would be preoccupied with how we understand it, and with how the linguistic signifier “gravity” possesses meanings other than its mathematical rendering—pointing to its original sense of seriousness, to “weighty” as a metaphor for matters of importance, and to the word’s shared root with “grave” and the semantic overlap there. One might also cite the history of how we’ve come to understand gravity, from Aristotle’s assertion that heavy objects fall because their element is earth, and they thus seek their natural place. Though the apple falling on Isaac Newton’s head is almost certainly apocryphal, it makes for a good story, even if his mathematics, for all it did to describe how gravity works, did little more than Aristotle to explain why. That explanation had to wait for Einstein’s theories of relativity and the understanding of gravity as curvatures in space-time shaped by objects with mass. And even now, physicists still scratch their heads over the relationship between gravity and quantum mechanics.

None of which questions the actuality that objects accelerate toward the earth at 9.8 m/s². What it does do is get us into language games that highlight the contingency of meaning, and a simple trio of facts that I always lead off with in my first-year classes: all language is descriptive; all language is metaphorical; all language is rhetorical. Which is to say, all language seeks to describe the world, it does so invariably through analogy, and it seeks to persuade. What language creates, as I said above, is a shared reality that at its best sharpens and clarifies our understanding of the actuality we all individually inhabit. Also, it’s fun: one of the great pleasures of reading a gifted poet or prose stylist is seeing the ways in which they can make you think of certain things anew by using language in challenging and novel ways. Heh, “novel” ways—see what I did there? By which I mean even the humble pun has the capacity to highlight the slipperiness of our shared vocabularies. “I don’t think you quite grasp the gravity of your situation” is a pun that has been used, among other places, in Star Trek and Doctor Who to refer to the fact that the seriousness of one’s circumstances specifically relates to the imminent danger of falling from a great height. My favourite line from Back to the Future is when Doc Brown, puzzled by Marty McFly’s constant use of the word “Heavy!”, finally demands to know whether something has gone wrong with gravity in the future. And of course, there’s the old chestnut that there is no gravity—the Earth just sucks.

But, I can hear some people protesting, maundering about the various meanings of gravity isn’t what we’re concerned with—what we’re concerned with is postmodernism’s denial of objective truth! Which is a big deal! And yes, it would be a big deal if that were indeed the case. The problem is that the word “truth” entails some significant gradations between straightforward facts in evidence and the capital-T Truths bound up in abstractions like justice, morality, and good and evil. Your average postmodernist has no quibble with facts in evidence, but takes issue with the notion of transcendent truths—such as a concept of absolute justice, or that evil exists outside of our capacity to characterize it semantically. Where people most commonly get postmodernism wrong is in characterizing it as a denial of actuality. The suggestion that has surfaced in a significant number of think-pieces over the past several years, that Donald Trump operates out of the “postmodern playbook” insofar as he treats reality as fungible and truth as something subject to his own whim, is likewise a basic misapprehension.2 Postmodernism—or, more accurately, postmodern thought—isn’t about the denial of objective truth or actuality, but the interrogation of the premises and cultural assumptions on which the conceptions of capital-T objective Truths are based.

To return to my earlier assertion and its load-bearing words: when I say that “the postmodern condition is one in which our tacit understanding is that language does not reflect reality, but that language creates reality,” I’m not necessarily asserting that language actually creates reality. (As it happens, I find this understanding completely persuasive, but that’s just me). I grasp why this idea is anathema to many, many people, especially religiously devout people who are deeply invested in the assumption of transcendent Truths that exist beyond language. Relatedly, there is also the very long idealist tradition in Western thought and philosophy that is predicated on the basic idea that there is an objective, external Truth towards which we strive, with language as our principal vehicle in doing so.3 What I’m arguing is that the postmodern condition is one in which the prospect of language creating reality isn’t necessarily something that presents itself as such, but is rather a felt experience presenting in most cases intuitively as suspicion, fear, or just a general anxiety. It is usually not articulated specifically, except by nerdy academics like myself or in angry rejections of postmodern thought by other nerdy academics.

This is what I mean by “tacit understanding”: something bound up in a broader cultural condition in which the critical mass of information, the critical mass of media through which we access that information, a global economic system that is bewildering to literally everybody, and technology that far outstrips the average individual’s capacity to understand it—and I could go on, but I’ll refer you back to my previous post’s bullet-points—have, shall we say, unmoored the language/reality relationship. This unmooring was not the specific creation of such anti-postmodernists’ bêtes noires as Jacques Derrida and Michel Foucault; it is, rather—as I will be delving into in future posts—the elemental lived experience of the cultural condition of postmodernity.

Or to put it more simply: you might reject with all your being the idea that language creates reality, but you live in a world so thoroughly suffused by consumerism, so fractured by the multiplicity of disparate media platforms, so atomized by digital culture, that your lived reality is one in which any access to what you consider objective truth is rendered at best deeply fraught and at worst impossible. Those who wonder at the lunacy of QAnon really need only understand this basic dimension of the postmodern condition: namely, that when people’s sense of reality becomes unmoored, they will often latch onto epistemic systems that give them a sense of order (no matter how batshit those systems might be). Conspiracy theorizing and conspiracy-based paranoia of course long predate the postmodern era, but they find particularly fertile ground in a situation where reality itself feels contingent and slippery.

By the same token, “the imagination of disaster”—as Susan Sontag called it in her classic 1965 essay—has become pervasive, either imagining extinction-level events (alien invasion, asteroid headed for Earth, etc.) that are ultimately averted after much destruction, and which re-establish a sense of order; or, increasingly, depicting post-apocalyptic scenarios in which civilization has collapsed and survivors navigate a new, sparsely populated world. Both are fantasies of return: in the first case to a society in which we can have faith in our institutions, in the second to a more elemental existence shorn of the distractions and trivialities of postmodern life. Indeed, I would argue (and have argued) that post-apocalyptic shows like The Walking Dead share DNA with fantasy like Game of Thrones, insofar as both zombie apocalypse and fantasy are inherently nostalgic, imagining as they do a return to premodern, pre-industrial worlds in which “objective reality” is boiled down to the immediacy of survival, whether in the face of attacking zombies, or the imperative of destroying the One Ring4 … in other words, something elemental and visceral and not subject to the seeming infinitude of mediations and contingencies of meaning manifest in the postmodern condition. As a number of thinkers have suggested, it is easier to imagine the destruction of contemporary civilization than how it might be fixed. Perhaps most notably, Fredric Jameson wryly observed that “It seems to be easier for us today to imagine the thoroughgoing deterioration of the earth and of nature than the breakdown of late capitalism,” and that “perhaps that is due to some weakness in our imaginations.”5

Perhaps it is a weakness of our imaginations, but one that, according to Jameson’s voluminous writings on postmodernism, is entirely understandable. In what is perhaps the definitive study of postmodernism, his 1991 book Postmodernism, or The Cultural Logic of Late Capitalism, Jameson argues that one of postmodernism’s key features is its sheer inescapability: the “prodigious new expansion of multinational capitalism”—which is more or less synonymous with the postmodern condition—“ends up penetrating and colonizing those very precapitalist enclaves (Nature and the Unconscious) which offered extraterritorial and Archimedean footholds for critical effectivity.”6 Or, to translate it into non-Jamesonian English, if you can’t get outside of something—physically, mentally, or otherwise—how can you effect a proper critique? How can you address something with which you are always already complicit?7

Keeping in mind that Jameson made that observation about “Archimedean footholds” in a book published in 1991, I think it’s safe to say that we can remark, from our perspective thirty years later, on how the situation he describes has only expanded by orders of magnitude with the advent of the internet and the current impossibility—short of decamping to live “off the grid” in the wilderness (itself a disappearing enclave)—of extricating oneself from the digital networks that now penetrate all aspects of life.

So how did we get here? Well, that’s my next post. Stay tuned.

NOTES

1. We find the same distinction in the philosophy of Immanuel Kant, who distinguished between the noumenon, or “the thing in itself” (das Ding an sich), and the phenomenon, or the thing as it appears to an observer. By the same token, psychoanalyst Jacques Lacan—who weirdly is entirely ignored by postmodernism’s detractors (who generally choose to train their fire on Derrida and Foucault)—distinguishes between the “Real” and the “Symbolic.” The Real aligns with actuality, our experience of the world; the Symbolic by contrast is the realm of language, and any translation of the Real into language takes it out of the realm of the Real and into the Symbolic. In other words, your personal experience is specific to you, but is ultimately incommunicable as such—to communicate it to others means translating it into the realm of the unreal, i.e. of language and our shared vocabularies.

2. The Trumpian world of “alternative facts” is not a product of the postmodern condition, but is more properly associated with authoritarianism—something Hannah Arendt, in passages much-quoted these past few years, asserted in The Origins of Totalitarianism: “The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist,” she writes, “but people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist.” She also said that “Before mass leaders seize the power to fit reality to their lies, their propaganda is marked by its extreme contempt for facts as such, for in their opinion fact depends entirely on the power of man who can fabricate it.”

3. As I’ll discuss in a future post, modernist art and literature were largely predicated on the premise that true art was about the accessing of reality—T.S. Eliot called this the “objective correlative,” the idea that the right combination of words and metaphors could conceivably access the fundamental truth of a given emotion. But for the modernists, “objective reality” was vanishingly difficult to touch; modernism’s principal revolt was against nineteenth-century and Victorian positivism, and its assumption that objective reality could be rendered unproblematically through the practices of realism.

4. There’s an article, or possibly an entire book, to be written about how the popularity of The Lord of the Rings in America was in part a function of antipathy to the early stirrings of postmodernity—especially considering that Tolkien was particularly popular on college campuses in the 1960s, which, when you think about it, is more than a little counter-intuitive: a deeply conservative, essentially Catholic story taking root amidst leftist radicalism. Even those people amenable to the rise of the New Left and the newly translated writings of Derrida and Foucault et al were inclined to wear buttons declaring “Frodo Lives!”, which suggests that postmodernist theory might have fascinated people, but the lived reality of postmodernity still inspired imaginative escape to worlds not ruled by moral ambiguity and contingent realities.

5. Fredric Jameson, The Cultural Turn: Selected Writings on the Postmodern, 1983–1998, p. 50.

6. Fredric Jameson, Postmodernism, or The Cultural Logic of Late Capitalism, p. 49.

7. Because I don’t want these blog posts to turn into novellas, I will just note quickly in passing that this very question lies at the heart of the arguments among those theorizing postmodernism. More doctrinaire Marxists like Jameson or scions of the Frankfurt School like Theodor Adorno (more on the Frankfurt School and “cultural Marxism” in a future post) assert that the postmodern condition obviates the possibility of substantive cultural critique. Other critics and theorists see fifth columnists: Linda Hutcheon argues at length that postmodernist art and literature weaponizes irony and parody in what she terms “complicitous critique,” while such thinkers of the Birmingham School as Dick Hebdige and Stuart Hall—who effectively invented what we now call “cultural studies”—argued that even within the worst excesses of a consumerist culture industry, artists carve out their own enclaves of resistance.


Filed under Remembering Postmodernism

Remembering Postmodernism: Introduction

Before I get under way: why “remembering” postmodernism? Several reasons: back when I was an undergraduate, I came across a book about Canadian art titled Remembering Postmodernism. It struck me on two counts: one, that the afterword was written by Linda Hutcheon, whose books and articles on postmodernist fiction had been my gateway drug into the topic; and two, that the title was pleasantly cheeky, considering that, as far as I was concerned at the time, postmodernism was an ongoing thing.

Now, I never actually read the book, mainly because postmodernist art was—however interesting I found it—somewhat outside my wheelhouse. But the title stuck with me. Two years ago I taught a fourth-year course on American postmodernist fiction, which I titled “Remembering Postmodernism.” In that instance, the expression was a straightforward acknowledgement of postmodernism as an historical period: we started with Thomas Pynchon’s 1966 novel The Crying of Lot 49 and ended with Monique Truong’s The Book of Salt (2003), and a large part of our discussion of Truong’s novel was whether it was, in fact, postmodernist, or whether it represented whatever the next (as yet unnamed) historical phase was.

Now I’m doing these posts and contemplating whether it’s worthwhile to write a book called The Idiot’s Guide to Postmodernism, for the simple reason that there’s an awful lot of talk about postmodernism and postmodernists these days, and very little of it betrays the slightest clue about what either term actually means. “Postmodernism” has instead become a term of disapprobation, deployed on the one hand to vilify contemporary university humanities programs as irredeemably “woke,” and on the other as a shorthand encompassing the varying iterations of “wokeness” itself.

Hence, as far as I’m concerned, we need to remember what postmodernism was and think about what it might still be. And remembering is a useful term insofar as it means both the calling to mind of things forgotten and the act of re-assemblage.

If I were to ask you what the opposite of “remember” is, you’d most likely say “forget,” for the simple reason that that is entirely correct. But in a more strictly linguistic and semantic way, the opposite of “remember” is “dismember.” Sometimes when we remember something, it’s a simple matter of that something springing to mind—where we left our car keys, for instance, or the fact that today is someone’s birthday. The more laborious process of remembering, however, is one of re-membering, of finding those sundered scraps of the past and putting them together like Isis reassembling Osiris’ dismembered body.

And lest you think this post is just an excuse for me to nerd out over semantics (which, to be fair, it totally is—more of that to come), the fallibilities of memory and their relationship to how we conceive of history are quite germane to postmodern thought.

So, I want to do a deep dive on postmodernism over a series of posts. And to be clear, by “deep dive” I actually mean more of a SparkNotes-type run-through of its history, its basic premises, and what it means in the grander scheme of things. I’ve been studying and writing on postmodernism for the better part of my academic career, starting with my undergrad years, and there are libraries-worth of scholarship on the subject. So I’m hardly going to do much more in a handful of blog posts than offer the general contours.

Why, then, do I want to even bother? Two reasons: first, as I posted previously, because I want to use my blog this summer to work through my confused thoughts on a variety of issues. And second, because nobody honking off about it in the present moment—outside of those who have actually considered postmodernism within the confines of academe—seems to have any bloody clue what they’re talking about. As a case in point, I recently read an otherwise interesting article in The Bulwark, an online publication by anti-Trump conservative thinkers, on anti-democratic intellectuals of the new Right who opine that the tenets of “classical liberalism” in fact contain the seeds of tyranny. The article in question was a reasoned and persuasive defense of Enlightenment thought and its influence on the United States’ founding fathers. But then there was this paragraph:

Contemporary “political correctness” or “wokeness” comes from Marx and Nietzsche by way of the Postmodernists, not from John Locke or the Founding Fathers. A serious person would feel the need to at least attempt to trace some of that intellectual history and confront the ideological differences.

Yes, a serious person would feel the need to at least attempt to trace some of that intellectual history and confront the ideological differences—which the author of this article obviously did not do when throwing Marx, Nietzsche, and postmodernism into the same bucket and tacitly ascribing a causal line of influence. He is smart enough at least to cite Marx and Nietzsche—which is more than most people invoking the dreaded specter of postmodernism do—and obviously knows enough to understand that there is a relationship between those two thinkers and some aspects of postmodernism. In other words, there is a very general sense in which he isn’t wrong, but any number of specific senses in which everything about that statement betrays a profound ignorance of all three of its subjects—starting with the basic fact that Marxism and postmodernism are, if not strictly speaking antithetical, then extremely antagonistic.1

But what is postmodernism? Well, let’s start with the pervasive misapprehension that it constitutes a sort of absolute relativism—a contradiction in terms, yes, but one that ostensibly obviates the possibility of objective meaning. After all, if everything is relative to everything else, where is the capital-T Truth?

As with the paragraph I quoted above, this understanding gets some things right, but remains a basic misapprehension for the simple reason that there is no singular “postmodernism.” The rejection of absolutes, the fundamental skepticism about grand narratives, and the understanding of power not as something external to ourselves but bound up in the flux of discourse and language, are all key aspects of what I’ll be calling “postmodern thought.” Postmodern thought is what Jordan Peterson is referring to when he asserts that, for postmodernists, all interpretations of anything at all are equally valid. I will have occasion in future posts to explain why this is completely wrong (which doesn’t differentiate it from most of his assertions), but for my first few posts I want to distinguish between the various intellectual strands of thinking that emerged from postmodernism, and postmodernism as the cultural condition that gave rise to postmodern thought.

Because the one thing I want to clarify, even if I don’t manage to make anything else clear in these posts, is that postmodernism—or more specifically postmodernity—is above all other things a cultural condition emerging from a confluence of historical circumstances and contexts. The tacit understanding of postmodernism when deployed as a term of disapprobation is that it is a pernicious relativist mode of thinking that exerts a profound and deleterious influence on, well, everything; in the crudest and most conspiratorial version of this thinking, such contemporary social movements as feminism, trans rights, Black Lives Matter (and the current bête noire, “critical race theory”), and a host of other “woke” causes that so enrage conservatives can be traced back to a handful of French intellectuals in the late 1960s who invented postmodernism as a means of pursuing Marxism and the destruction of Western civilization by other means.2 And while the loose assemblage of schools of thought I’m calling “postmodern” has undoubtedly shaped contemporary attitudes—sometimes positively, sometimes not—my larger argument is that these schools have emerged in response to cultural conditions; what postmodernism’s detractors call “postmodernism” has usually been less a tool of cultural change than a series of attempts to find language to describe the dramatic cultural transformations of the post-WWII landscape. The cultural changes in question, from intersectional understandings of identity to the normalization of gay marriage, might have been facilitated in part by aspects of postmodern thought, but have been driven far more by the technological and economic transformations of postmodernity.

Which, once again, brings us back to the basic question of just what postmodernism is. As I say above, it is a lot of things, including but not limited to:

  • The cultural logic of late capitalism.
  • The breakdown of faith in such societal grand narratives as religion, governance, justice, science, etc.
  • A set of aesthetic practices in art, fiction, film, and architecture (among others) that reflect and articulate such breakdowns.
  • The rise of multinational corporate capital, and its transformation of the imperialist project from a national, colonial one to the subjugation of national interests to the global market.
  • The ascendancy of neoliberal free-market fundamentalism.
  • The snowballing of technology, especially communication technology—from television to the internet to social media—and the concomitant erosion of traditional informational gatekeepers (e.g. legacy media).
  • The inescapability of consumer culture and the culture industry.
  • A set of philosophical, theoretical, and critical attempts to adequately describe all of the above.

So … that clears things up, right? Just kidding—of course it doesn’t. But hopefully that starts to communicate the complexity of the subject.

I do want to make clear that this series of posts is not meant as an apologia for postmodernity, postmodernism, or postmodern thought. I have for many years studied postmodernism and written about it and taught it in university classrooms, but I would not call myself a postmodernist (though others, mostly the people who don’t really understand it, certainly would). My own thinking and philosophical inclinations bend toward pragmatism (the philosophical kind) and a sort of small-h humanism; my lodestars in this respect are Richard Rorty and Terry Pratchett. I passionately love some aspects of postmodern culture, mostly its aesthetic incarnations—the fiction of Toni Morrison and Don DeLillo, for example, or films like Blade Runner—and I find the theoretical armature defining postmodernism, whether it’s celebratory like that of Linda Hutcheon or Brian McHale, or damning like that of Fredric Jameson, fascinating; but huge swaths of what I’m calling postmodernity are pernicious and harmful (corporate capitalism and social media among them).

This series of posts is, rather, a quixotic attempt to re-inject some nuance into what has become an impoverished and overdetermined understanding of a bewilderingly complex concept. More importantly, it is my way of working through my own thoughts on the matter. So bear with me.

NOTES

1. There is also the rather amusing thought of just how disgusted Nietzsche would be with “woke” sentiments.

2. More on this in my future post on “The Conspiracy Theory of Postmodernism.”


Filed under Remembering Postmodernism

A Few Things

Summer Blogging Update     Several posts ago, I announced my summer blogging plans; I then dutifully followed up with a three-part series on memory, history, and forgetting in fairly quick succession. And then … well, nothing for two weeks. That’s largely due to the fact that my next series of posts, which will be a deep dive into postmodernism—what it is, what it was, what it isn’t, and why most of the people using the word in the media these days have no idea what they’re talking about—has been taking somewhat longer to write than I’d planned. Actually, that isn’t entirely true: the second and third installments are all but completed, and I’ve gotten a healthy start on the fourth and fifth, but the introductory post is taking longer (and is getting longer). I am optimistic that I will have it done by the end of the weekend, however, so hopefully Phase Two of My Ambitious Posts No One Will Read will soon commence.

What I’ve Watched     A few weeks ago I binged the series Rutherford Falls on Amazon Prime. I’d read some positive reviews of it and heard good things via NPR’s Pop Culture Happy Hour podcast (always a reliable guide); the show is also co-created by Michael Schur, a showrunner who, in my opinion, has been batting 1.000 for years: The Office, Parks and Recreation, Brooklyn Nine-Nine, The Good Place. But what’s particularly notable about Rutherford Falls is that the premise of the show is rooted in the relationship in the titular town between its indigenous and non-indigenous townsfolk. The series was co-created by Sierra Teller Ornelas, a Navajo screenwriter and filmmaker. The writers’ room is also well represented with Native American writers, and the cast is remarkable.

Ed Helms and Jana Schmieding.

Ed Helms plays Nathan Rutherford, a scion of the family that gives the town its name, and who is about the most Ed Helms character I’ve yet seen—a painfully earnest and well-meaning but frequently clueless fellow whose entire sense of self is bound up in his family’s history and that of the town. He runs a museum-ish space in his ancestral home. His best friend is Reagan (Jana Schmieding), a member of the (fictional) Minishonka Nation who is more or less persona non grata in her Native community because she left her fiancé at the altar a number of years before and instead went and did her master’s degree at Northwestern. She runs—or attempts to run—a “cultural center” inside the local casino, which is owned by Terry Thomas (Michael Greyeyes), the de facto leader of the Minishonka community.

The show is hilarious, but is also quite comfortable in its own skin (so to speak)—it engages with difficult issues of Native history, white entitlement and appropriation of Native culture, the memorialization of settler culture (the precipitating crisis is when the town’s African-American mayor seeks to remove a statue of the town’s founding Rutherford in the town square, not because it’s a symbol of colonialism, but because cars keep crashing into it), and the fraught navigation by Terry Thomas of American capitalism as a means of accruing power and influence for his people—all without ever losing its humour or coming across as pedantic. A key moment comes in the fourth episode, when an NPR podcaster, having sniffed out that there might be a story in this sleepy town, interviews Terry. He asks him how he reconciles the contradiction of running a casino—the epitome of capitalistic graft—with his own Native identity. Terry, who had been answering the podcaster’s questions with a politician’s practiced smoothness, reaches out and pauses the recorder—and then proceeds, in a stern but not-angry voice, to school the well-meaning NPR hipster on precisely how the casino is consonant with Native values because it is not about the accrual of wealth for wealth’s sake, but for the benefit of a community that has long been marginalized. He then hits play again, and resumes his breezy tone for the rest of the interview.

It is a bravura performance; indeed, Michael Greyeyes is the series’ great revelation. A Canadian-born actor of the Cree Nation, he has long been a staple in Canadian and American indigenous film, or film and television featuring indigenous characters, and has (at least in most of the stuff I’ve seen him in) tended to play stern, brooding, or dangerous characters. So it is a joy to see him show off his comedic talents. His best line? When arguing with Nathan Rutherford over the historical accuracy of a costume he wants him to wear in his planned historical theme park, he says in exasperation, “Our market research shows that the average American’s understanding of history can be boiled down to seven concepts: George Washington, the flag, Independence Day, Independence Day the movie, MLK, Forrest Gump, and butter churns.”

What I’m Currently Watching     Well, we just watched the season four finale of The Handmaid’s Tale last night, and I’m going to need a while to process that. And we watched the series premiere of Loki, which looks promising—largely on the basis of the fact that a Tom Hiddleston / Owen Wilson buddy comedy is unlikely to disappoint no matter what is done to it.

F. Murray Abraham, Danny Pudi, David Hornsby, Rob McElhenney, and Charlotte Nicdao

But the show we’re loving right now beyond what is strictly rational is the second season of Mythic Quest on AppleTV. For the unfamiliar: it’s the creation of Rob McElhenney, Charlie Day, and Megan Ganz, the folks who brought you It’s Always Sunny In Philadelphia; the series is a workplace comedy set in the offices of the studio behind a video game called Mythic Quest: Raven’s Banquet, an MMORPG along the lines of World of Warcraft or Elder Scrolls. McElhenney plays the game’s creator and mastermind Ian (pronounced “Ion”) Grimm, a self-styled visionary who is sort of a cross between Steve Jobs and a lifestyle guru. Charlotte Nicdao plays Poppy, the perennially anxious and high-strung lead engineer who is responsible for turning Ian’s (again, pronounced “Ion”) ideas into workable code. Danny Pudi, who most famously played Abed in Community, is Brad, the “head of monetization,” and is delightfully cast here as a kind of anti-Abed who knows little and cares less about pop culture and video gaming and is only concerned about how he can wring every last cent out of the game’s devoted players. And F. Murray Abraham—yes, Salieri himself—plays the game’s head writer, washed-up SF author C.W. Longbottom, a lascivious alcoholic whose sole claim to fame (to which he clings like a barnacle) is having won a Nebula award in the early 1970s.

That combination of characters alone is a selling point—but the show is also incredibly smart, and also—just when you least expect it—deeply emotional.

What I’m Reading     At this point in the early summer as I scramble to use my time to complete at least one article before September, as well as move my summer blogging project forward (to say nothing of starting on course prep for the Fall), the question feels a little more like what am I not reading.

But that’s all business. My pleasure reading of the moment is a trilogy by M.R. Carey. Carey wrote the brilliant zombie apocalypse novel The Girl With All the Gifts—which was made into a quite good film starring Glenn Close and Gemma Arterton—and its companion novel set in the same world, The Boy on the Bridge. One of my grad students this past semester alerted me to the fact that Carey had recently published a series set in another post-apocalyptic scenario. The “Ramparts Trilogy”—comprising the novels The Book of Koli, The Trials of Koli, and The Fall of Koli—isn’t really a trilogy per se. The novels were all released within several months of each other, suggesting that “Ramparts” was really more of a 1,000+ page novel that the publisher chose to release in serial form rather than all at once.

Like the other Carey novels I’ve read, the Koli books are post-apocalyptic, set in an England some three to four centuries after humanity more or less obliterated itself in “the Unfinished War.” It is a world in which humanity has been reduced to a mere handful of its former numbers, and everything in nature now seems to be intent on killing the remaining survivors. Human manipulation of trees and vegetation to make them grow faster as part of an effort to reverse climate change has resulted in hostile forests: only on cold or overcast days can anyone walk among the trees without being snared by their tendrils. Any wood cut has to be “cured” in saline vats to kill it before it can be used as lumber. And the animal life that has evolved to live in such an environment is equally dangerous.

Koli is, at the start of the novels, an earnest if slightly dim fifteen-year-old living in the village of Mythen Rood in the Calder Valley in the northwest of what was once England. His family are woodsmiths; Mythen Rood is protected by the Ramparts, villagers who can use the tech of the old world—a flamethrower, a guided bolt-gun, a “cutter,” which emits an invisible cutting beam, and a database that offers up helpful but cryptic knowledge and information.

There is much tech left over from the old world, but most of it is useless. Koli’s story essentially begins when he becomes obsessed with “waking” a piece of tech and joining the ranks of the Ramparts.

And that’s all I’ll tell you of the story: suffice to say, with one-third of the last novel to go, I am captivated by both the story Carey tells and the potential future he envisions.


Filed under A Few Things, what I'm reading, what I'm watching

History, Memory, and Forgetting Part 3: The Backlash Against Remembering

“The struggle of man against power is the struggle of memory against forgetting.”

—Milan Kundera, The Book of Laughter and Forgetting

I had drafted the first two installments of this three-part series of posts and was working on this one when the news broke of the discovery of the remains of 215 indigenous children on the grounds of a former residential school in BC. I paused for a day or two, uncertain of whether I wanted to talk about it here, in the context of my broader theme of history, memory, and forgetting. Part of my hesitation is that I honestly lack the words for what is a shocking but utterly unsurprising revelation.

I must also confess that, to my shame, one of my first thoughts was the dread certainty that we’d soon be treated to some tone-deaf and historically ignorant column from the likes of Rex Murphy or Conrad Black or any one of the coterie of residential school apologists. So far, however, the usual suspects seem to be steering clear of this particular story; possibly the concrete evidence of so much death and suffering perpetrated by white turnkeys in the name of “civilizing” Native children is a bridge too far even for Murphy and Black et al’s paeans to Western civilization and white Canada’s munificence.

What I’m talking about in this third post of three is remembering as a political act: more specifically, making others remember aspects of our history that they may not want to accept or believe. Scouting as I did for anything trying to ameliorate or excuse or explain away this evidence of the residential schools’ inhumanity,[1] I found my way to a 2013 Rex Murphy column that might just be the epitome of the genre, as one gleans from its title: “A Rude Dismissal of Canada’s Generosity.” In Murphy’s telling, in this column as in others of his rants and writings, conditions for Native Canadians in the present day are a vast improvement over historical circumstances, but the largesse of white Canadians and the government is something our indigenous populations perversely deny at every turn. He writes: “At what can be called the harder edges of native activism, there is a disturbing turn toward ugly language, a kind of razor rhetoric that seeks to cut a straight line between the attitudes of a century or a century and a half ago and the extraordinarily different attitudes that prevail today.”

“Attitudes” is the slippery word there: outside of unapologetically anti-indigenous and racist enclaves, I doubt you’d have difficulty finding white Canadians who would piously agree that the exploitation and abuse of our indigenous peoples was a terrible thing. You’d have a much harder time finding anyone willing to do anything concrete about it, such as restoring the land we cite in land acknowledgments to its ancestral people. Attitudes, on balance, have indeed improved, but that has had little effect on Native peoples’ material circumstances. And in his next paragraph, Murphy seems intent on demonstrating that not all attitudes have, in fact, improved:

From native protestors and spokespeople there is a vigorous resort to current radical jargon—referring to Canadians as colonialist, as settlers, as having a settler’s mentality. Though it is awkward to note, there is a play to race in this, a conscious effort to ground all issues in the allegedly unrepentant racism of the “settler community.” This is an effort to force-frame every dispute in the tendentious framework of the dubious “oppression studies” and “colonial theory” of latter-day universities.

And there it is—the “radical jargon” that seeks to remember. Referring to Canadians as colonialist settlers isn’t radical, nor is it jargon, but is a simple point of fact—and indeed, for decades history textbooks referred to settlers as brave individuals and the colonizing of Canada as a proud endeavour, necessarily eliding the genocidal impact on the peoples already inhabiting the “new world.” Murphy’s vitriol is, essentially, denialism: a denial that our racist and oppressive history lingers on in a racist present. He speaks for an unfortunately large demographic of white Canada that is deeply invested in a whitewashed history, and reacts belligerently when asked to remember things otherwise.

This is a phenomenon we see playing out on a larger scale to the south, most recently with a substantial number of Republican-controlled state legislatures introducing bills that would forbid schools from teaching any curricula suggesting that the U.S. is a racist country, that it has had a racist history, or really anything that suggests racism is systemic and institutional rather than an individual failing. The bogeyman in much of the proposed legislation is “critical race theory.” Like Rex Murphy’s sneering characterization of “latter-day universities” offering degrees in “oppression studies” (not actually a thing), critical race theory is stigmatized as emerging from the university faculty lounge as part and parcel of “cultural Marxism’s” sustained assault on the edifices of Western civilization.[2] While critical race theory did indeed emerge from the academy, it was (and is) a legal concept developed by legal scholars like Derrick Bell and Kimberlé Crenshaw[3] in the 1970s and 80s. As Christine Emba notes, “It suggests that our nation’s history of race and racism is embedded in law and public policy, still plays a role in shaping outcomes for Black Americans and other people of color, and should be taken into account when these issues are discussed.” As she further observes, it has a clear and quite simple definition, “one its critics have chosen not to rationally engage with.”

Instead, critical race theory is deployed by its critics to connote militant, illiberal wokeism in a manner, to again quote Emba, that is “a psychological defense, not a rational one”—which is to say, something meant to evoke suspicion and fear rather than thought. The first and third words of the phrase, after all, associate it with elite liberal professors maundering in obscurantist jargon, with which they indoctrinate their students, turning them into shrill social justice warriors. (The slightly more sophisticated attacks will invoke such bêtes noires of critical theory as Michel Foucault or Jacques Derrida[4]).

But again, the actual concept is quite simple and straightforward: racism is systemic, which should not be such a difficult concept to grasp when you consider, for example, how much of the wealth produced in the antebellum U.S. was predicated on slave labour, especially in the production of cotton—something that also hugely benefited the northern free states, whose textile mills profitably transformed the raw cotton into cloth. Such historical realities, indeed, were the basis for the 1619 Project, the New York Times’ ambitious attempt to reframe American history through the lens of race—arguing that the true starting-point of America was not with the Declaration of Independence in 1776, but when the first African slaves set foot on American soil in 1619.

The premise is polemical by design, and while some historians took issue with some of the claims made, the point of the project was an exercise in remembering aspects of a national history that have too frequently been elided, glossed over, or euphemized. In my previous post, I suggested that the forgetting of the history of Nazism and the Holocaust—and its neutering through the overdeterminations of popular culture—has facilitated the return of authoritarian and fascistic attitudes. Simultaneously, however, it’s just as clear that this revanchist backsliding in the United States has as much to do with remembering. The reactionary crouch inspired by the tearing down of Confederate monuments misses the point that removing them isn’t about “erasing history,” but about remembering it properly: remembering that Robert E. Lee et al were traitors to the United States and were fighting first and foremost to maintain the institution of slavery. Apologists for the Confederacy aren’t wrong when they say that the Civil War was fought over “states’ rights”; they’re just eliding the fact that the principal “right” being fought for above all others was the right to enslave Black people. All one needs to do to clarify this particular point is to read the charters and constitutions of the secessionist states, all of which make the supremacy of the white race and the inferiority of Africans their central tenet. The Confederate Vice President Alexander Stephens declared that the “cornerstone” of the Confederacy was that “the negro is not equal to the white man; that slavery, subordination to the superior race, is his natural and normal condition. This, our new government, is the first, in the history of the world, based upon this great physical, philosophical, and moral truth.”

Statue of Nathan Bedford Forrest in Memphis, Tennessee

Bringing down Confederate monuments isn’t erasure—it’s not as if Robert E. Lee and Nathan Bedford Forrest disappear from the history books because people no longer see their bronze effigies in parks and town squares—but the active engagement with history. That was also the case with the erection of such monuments, albeit in a more pernicious manner: the vast majority were put up in the 1920s and 1950s, both of which were periods when white Americans felt compelled to remind Black Americans of their subordinate place by memorializing those who had fought so bloodily to retain chattel slavery. Like the Confederate battle flag, these monuments were always already signifiers of white supremacy, though that fact has been systematically euphemized with references to “southern tradition,” and of course the mantra of states’ rights.

Martin Sheen as Robert E. Lee in Gettysburg (1993)

Once when I was still in grad school and had gone home to visit my parents for a weekend, I was up late watching TV, thinking about going to bed, and while flipping channels happened across a film set during the Civil War. It was about halfway through, but I stayed up watching until it was done. The story was compelling, the acting was really good, and the battle scenes were extremely well executed. The film was Gettysburg, which had been released in 1993. It had a huge cast, including Jeff Daniels and Sam Elliott, and Martin Sheen as General Robert E. Lee. Because I’d only seen the second half, I made a point of renting it so I could watch the entire thing.

Gettysburg is a well-made film with some great performances, and is very good at pushing emotional buttons. Colonel Joshua Chamberlain’s (Jeff Daniels) bayonet charge down Little Round Top is a case in point.

I’m citing Gettysburg here because it is one of the most perfect examples of how deeply the narrative of the Lost Cause became rooted in the American imagination. The Lost Cause, for the uninitiated, is the ongoing project, begun just after the end of the Civil War, to recuperate and whitewash (pun intended) the Confederacy and the antebellum South. Its keynotes include the aforementioned insistence that the Civil War wasn’t fought over slavery but states’ rights; its foregrounding of cultured and genteel Southern gentlemen and women; depictions of slaves (when depicted at all) as happy spiritual-singing fieldhands under the benevolent supervision of the mastah; Northerners as rapacious carpetbaggers who proceeded to despoil everything good and noble about the South in the years following the war; and the war itself as a tragic but honourable dispute between sad but dutiful officer classes prosecuting their respective sides’ goals not in anger but sorrow.

This last element is Gettysburg’s connective tissue. Let’s be clear: the film isn’t Southern propaganda like D.W. Griffith’s Birth of a Nation. It is, indeed, high-minded and even-handed. Martin Sheen—Martin fuckin’ Jed Bartlet Sheen!—plays Robert E. Lee, one of the prime targets of statue removers. But the film is propagandistic: there is, to the best of my recollection, not a single Black character in the film, and slavery as the cause of the conflict is alluded to (again, to the best of my recollection) only once—and then only obliquely.

My point in using this example—I could just as easily have cited Gone With the Wind or Ken Burns’ docuseries The Civil War—is how insidiously the Lost Cause narrative has wormed its way into the American imaginary. It is of a piece with everything else I’ve been talking about in this three-part series of posts. What novels like The Underground Railroad and historical reckonings like the 1619 Project—as well as the campaign to tear down Confederate monuments—attempt is a kind of radical remembering. And as we see from the ongoing backlash, to remember things differently can be threatening to one’s sense of self and nation.

NOTES


[1] As I said, none of the usual suspects seems to have advanced an opinion, but there was—not unpredictably—an awful lot of such attempts in the comments sections on articles about the grisly discovery. They ranged from the usual vile racist sentiments one always finds in these digital cesspools, to one person who argued at length that the child mortality rates in residential schools were consonant with child mortality rates in the population at large, nineteenth-century hygiene and medicine being what they were. This individual was undeterred, in spite of a long-suffering interlocutor who provided stats and links showing (a) that what the person was referencing was infant mortality rates, which is not the same thing; (b) that the death rates in residential schools were actually more egregious in the 1930s and 40s; and (c) that mass burials in unmarked graves without proper records and death certificates spoke to the dehumanization of the Native children on the one hand, and, on the other, to the likelihood that the “teachers” at these schools were reluctant to leave evidence of their abusive treatment.

[2] I will have a lot more to say on this particular misapprehension of “the university faculty lounge” in a future post on the more general misapprehensions of what comprises a humanities degree.

[3] Crenshaw also developed that other concept that triggers conservatives, “intersectionality.”

[4] Stay tuned for my forthcoming post, “The Conspiracy Theory of Postmodernism.”


Filed under maunderings

History, Memory, and Forgetting Part 2: Forgetting and the Limits of Defamiliarization

“We cross our bridges when we come to them, and burn them behind us, with nothing to show for our progress except a memory of the smell of smoke, and a presumption that once our eyes watered.”

—Tom Stoppard, Rosencrantz and Guildenstern are Dead

In my first year at Memorial, I taught one of our first-year fiction courses. I ended the term with Martin Amis’ novel Time’s Arrow—a narrative told from the perspective of a parasitic consciousness that experiences time backwards. The person to whom the consciousness is attached turns out to be a Nazi war criminal hiding in America, who was a physician working with Dr. Mengele at Auschwitz. Just as we get back there, after seeing this man’s life played in reverse to the bafflement of our narrator, the novel promises that things will start to make sense … now. And indeed, the conceit of Amis’ novel is that the Holocaust can only make sense if played in reverse. Then, it is not the extermination of a people, but an act of benevolent creation—in which ashes and smoke are called down out of the sky into the chimneys of Auschwitz’s ovens and formed into naked, inert bodies. These bodies then have life breathed into them, are clothed, and sent out into the world. “We were creating a race of people,” the narrative consciousness says in wonder.

Time’s Arrow, I told my class, is an exercise of defamiliarization: it wants to resist us becoming inured to the oft-repeated story of the Holocaust, and so requires us to view it from a different perspective. Roberto Benigni’s film Life is Beautiful, I added, worked to much the same end, by (mostly) leaving the explicit brutalities of the Holocaust offstage (as it were), as a father clowns his way through the horror in order to spare his son the reality of their circumstances. As I spoke, however, and looked around the classroom at my students’ uncomprehending expressions, I felt a dread settle in my stomach. Breaking off from my prepared lecture notes, I asked the class: OK, be honest here—what can you tell me about the Holocaust?

As it turned out, not much. They knew it happened in World War II? And the Germans were the bad guys? And the Jews didn’t come out of it well …? (I’m honestly not exaggerating here). I glanced at my notes, and put them aside—my lecture had been predicated on the assumption that my students would have a substantive understanding of the Holocaust. This was not, to my mind, an unreasonable assumption—I had grown up learning about it in school by way of books like Elie Wiesel’s Night, but also seeing movies depicting its horrors. But perhaps I misremembered: I was from a young age an avid reader of WWII history (and remain so to this day), so I might have assumed your average high school education would have covered these bases in a more thorough manner.[1]

The upshot was that I abandoned my lecture notes and spent the remaining forty minutes of class delivering an off-the-cuff brief history of the Holocaust that left my students looking as if I’d strangled puppies in front of them, and me deeply desiring a hot shower and a stiff drink.

In pretty much every single class I’ve taught since, I reliably harangue my students to read more history. To be fair, I’d probably do that even without having had this particular experience; but I remember thinking of Amis’ brilliant narrative conceit and realizing that defamiliarization only works if there has first been familiarization, and it depressed me to think that the passage of time brings with it unfamiliarization—i.e. the memory-holing of crucial history that, previously, was more or less fresh in the collective consciousness. The newfound tolerance for alt-right views and the resurgence of authoritarian and fascist-curious politics (to which the Republican Party is currently in thrall) proceed from a number of causes, but one of them is the erosion of memory that comes with time’s passage. The injunctions against fascism that were so powerful in the decades following WWII, when the memory of the Holocaust was still fresh and both the survivors and the liberators were still ubiquitous, have eroded—those whose first-hand testimonials gave substance to that history have largely passed away. Soon none will remain.

What happens with a novel like Time’s Arrow or a film like Life is Beautiful when you have audiences who are effectively ignorant of the history informing their narrative gambits? Life is Beautiful, not unpredictably, evoked controversy because it was a funny movie about the Holocaust. While it was largely acclaimed by critics, there were a significant number who thought comedy was egregiously inappropriate in a depiction of the Holocaust,[2] as was using the Holocaust as a backdrop for a story focused on a father and his young son. As Italian film critic Paolo Mereghetti observes, “In keeping with Theodor Adorno’s idea that there can be no poetry after Auschwitz, critics argued that telling a story of love and hope against the backdrop of the biggest tragedy in modern history trivialized and ultimately denied the essence of the Holocaust.” I understand the spirit of such critiques, given that humour—especially Roberto Benigni’s particular brand of manic clowning—is jarring and dissonant in such a context, but then again, that’s the entire point. The film wants us to feel that dissonance, and to interrogate it. And not for nothing, but for all of the hilarity Benigni generates, the film is among the most heartbreaking I’ve seen, as it is about a father’s love and his desperate need to spare his son from the cruel reality of their circumstances. Because we’re so focused on the father’s clownish distractions, we do not witness—except for one haunting and devastating scene—the horrors that surround them.

In this respect, Life is Beautiful is predicated on its audience being aware of those unseen horrors, just as Time’s Arrow is predicated on its readers knowing the fateful trajectory of Jews rounded up and transported in boxcars to their torture and death in the camps, to say nothing of the grotesque medical “experiments” conducted by Mengele. The underlying assumption of such defamiliarization is that an oft-told history such as the Holocaust’s runs the risk of inuring people to its genuinely unthinkable proportions.[3] It is that very unthinkability, fresher in the collective memory several decades ago, that drove Holocaust denial among neo-Nazi and white supremacist groups—because even such blinkered, hateful, ignorant bigots understood that the genocide of six million people was a morally problematic onion in their racial purity ointment.[4]

“I know nothing, I see nothing …”

They say that tragedy plus time begets comedy. It did not take long for Nazis to become clownish figures on one hand—Hogan’s Heroes first aired in 1965, and The Producers was released in 1967—and one-dimensional distillations of evil on the other. It has become something of a self-perpetuating process: Nazis make the best villains because (like racist Southerners, viz. my last post) you don’t need to spend any time explaining why they’re villainous. How many Steven Spielberg films embody this principle? Think of Indiana Jones in The Last Crusade, looking through a window into a room swarming with people in a certain recognizable uniform: “Nazis. I hate these guys.” It’s an inadvertently meta moment, as well as a throwback to Indy’s other phobia in Raiders of the Lost Ark: “Snakes. Why’d it have to be snakes?” Snakes, Nazis, tomato, tomahto. Though I personally consider that a slander against snakes, the parallel is really about an overdetermined signifier of evil and revulsion, one that functions to erase nuance.

Unfortunately, if tragedy plus time begets comedy, it also begets a certain cultural amnesia when historically-based signifiers become divorced from a substantive understanding of the history they’re referencing. Which is really just a professorial way of saying that the use of such terms as “Nazi” or “fascist,” or comparing people to Hitler, has become ubiquitous in a problematic way, especially in the age of social media. Case in point, Godwin’s Law, which was formulated by Michael Godwin in the infancy of the internet (1990). Godwin’s Law declares that “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.” This tendency has been added to the catalogue of logical fallacies as the “Reductio ad Hitlerum,” which entails “an attempt to invalidate someone else’s position on the basis that the same view was held by Adolf Hitler or the Nazi Party.”

Perhaps the most egregious recent example of this historical signification was Congresswoman and QAnon enthusiast Marjorie Taylor Greene’s comparison of Nancy Pelosi’s decision to continue requiring masks to be worn in the House of Representatives (because so many Republicans have declared their intention to not get vaccinated) to the Nazi law requiring Jews to wear a gold Star of David on their chests. She said, “You know, we can look back at a time in history where people were told to wear a gold star, and they were definitely treated like second class citizens, so much so that they were put in trains and taken to gas chambers in Nazi Germany. And this is exactly the type of abuse that Nancy Pelosi is talking about.”

To be certain, Greene was roundly condemned by almost everybody, including many in her own party—even the craven and spineless Minority Leader Kevin McCarthy had some stern words for her—but what she said was different not in kind but in degree from the broader practice of alluding to Nazism and the Holocaust in glib and unreflective ways.[5]

Though this tendency is hardly new—Godwin’s Law is thirty-one years old—it has been amplified and exacerbated by social media, to the point where it became difficult to find terms to usefully describe and define Donald Trump’s authoritarian tendencies. The ubiquity of Nazi allusions has made them necessarily diffuse, and so any attempt to characterize Trumpism as fascist in character could be easily ridiculed as alarmist and hysterical; and to be fair, there were voices crying “fascist!” from the moment he made that initial descent on his golden escalator to announce his candidacy. That those voices proved prescient rather than alarmist doesn’t obviate the fact that they muddied the rhetorical waters.[6] As the contours of Trump’s authoritarian tendencies came into focus, the fascistic qualities of the Trumpian Right became harder and harder to ignore; bizarrely, they’ve become even more clearly delineated since Trump left office, as Republicans still kowtow to the Mar-A-Lago strongman and move to consolidate minoritarian power.

Historians and political philosophers and a host of other thinkers of all stripes will be years in unravelling the historical and cultural strands of Trump’s rise and the previously unthinkable hold that Trumpism has over a stubborn rump of the electorate; but I do think that one of the most basic elements is our distressing tendency toward cultural amnesia. It makes me think we’re less in need of defamiliarizing history than defamiliarizing all the clichés of history that have become this inchoate jumble of floating signifiers, which allow neo-Nazis and white supremacists to refashion themselves as clean-cut, khaki-clad young men bearing tiki-torches, or to disingenuously euphemize their racism as “Western chauvinism” and meme their way out of accusations of ideological hatefulness—“It’s just about the lulz, dude.”

There is also, as I will discuss in the third of these three posts, the fact that remembering is itself a politically provocative act. On one hand, the diminution in the collective memory of Nazism and the Holocaust has facilitated the re-embracing of its key tropes; on the other, the active process of remembering the depredations of Western imperialism and the myriad ways in which slavery in the U.S. wasn’t incidental to the American experiment but integral gives rise to a backlash that takes refuge in such delusions as the pervasiveness of anti-white racism.

NOTES


[1] To be clear, this is not to castigate my students; as I’ll be expanding on as I go, historical amnesia is hardly limited to a handful of first-year university students in a class I taught fifteen years ago.

[2] One can only speculate on what such critics make of Mel Brooks’ career.

[3] Martin Amis attempts something similar in his 1985 short story collection Einstein’s Monsters, which is about nuclear war. His lengthy introductory essay “Thinkability” (to my mind, the best part of the book) addresses precisely the way in which military jargon euphemizes the scope and scale of a nuclear exchange in order to render the unthinkable thinkable.

And speaking of humour used to defamiliarize horror: Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb, and General Buck Turgidson (George C. Scott)’s own “thinkability” regarding the deaths in a nuclear war: “Mr. President, we are rapidly approaching a moment of truth, both for ourselves as human beings and for the life of our nation. Now, truth is not always a pleasant thing. But it is necessary now to make a choice, to choose between two admittedly regrettable, but nevertheless distinguishable, post-war environments: one where you got 20 million people killed, and the other where you got 150 million people killed! … Mr. President, I’m not saying we wouldn’t get our hair mussed, but I do say no more than 10 to 20 million killed, tops! Uh, depending on the breaks.”

[4] Which always rested on a tacit contradiction in their logic: it didn’t happen, but it should have.

[5] We see the same tendency, specifically among conservatives, to depict any attempt at raising taxes or expanding social programs as “socialism,” often raising the spectre of “Soviet Russia”—which is about as coherent as one of my favourite lines from Community, when Britta cries, “It’s just like Stalin back in Russia times!”

[6] I don’t say so to castigate any such voices, nor to suggest that they were premature—the contours of Trump’s authoritarian, nativist style were apparent from before he announced his candidacy to anyone who looked closely enough.


History, Memory, and Forgetting Part 1: Deconstructing History in The Underground Railroad

“Yesterday, when asked about reparations, Senate Majority Leader Mitch McConnell offered a familiar reply: America should not be held liable for something that happened 150 years ago, since none of us currently alive are responsible … This rebuttal proffers a strange theory of governance, that American accounts are somehow bound by the lifetime of its generations … But we are American citizens, and thus bound to a collective enterprise that extends beyond our individual and personal reach.”

—Ta-Nehisi Coates, Congressional Hearing on Reparations, 20 June 2019
Thuso Mbedu as Cora in The Underground Railroad

I’ve been slowly working my way through Barry Jenkins’ ten-episode adaptation of Colson Whitehead’s novel The Underground Railroad. I’ve been a huge fan of Whitehead’s fiction ever since I got pointed towards his debut novel The Intuitionist; I read The Underground Railroad when it was still in hardcover, and I’ve included it twice in classes I’ve taught. When I saw it was being adapted to television by the virtuoso director of Moonlight and If Beale Street Could Talk, I knew this was a series I wanted to watch.

I was also wary—not because I was concerned about the series keeping faith with the novel, but because I knew it would make for difficult viewing. Whatever liberties Whitehead takes with history (as I discuss below), he is unsparing in his depiction of the brutal historical realities of slavery and the casual cruelty and violence visited on slaves. It is difficult enough to read such scenes, but having them depicted on screen—seeing cruelty and torture made explicit in image and sound—can be more difficult still to watch.

For this reason, for myself at least, the series is the opposite of bingeable. After each episode, I need to digest the story and the visuals and think.

The Underground Railroad focuses on the story of Cora (Thuso Mbedu), a teenage girl enslaved on a Georgia plantation, whose mother escaped when Cora was a child and was never caught, leaving her daughter behind. Another slave named Caesar (Aaron Pierre) convinces her to flee with him. Though reluctant at first, circumstances—the kind of circumstances which make the show difficult viewing—convince her to go. She and Caesar and another slave named Lovey (who had seen them go and, to their dismay, tagged along) are waylaid by slave-catchers. Though Lovey is recaptured, Cora and Caesar escape, but in the process one of the slave-catchers is killed. They make it to a node of the Underground Railroad and are sheltered by a white abolitionist with whom Caesar had been in contact. He then takes them underground to the “station,” where they wait for the subterranean train that will take them out of Georgia.

Because that is the principal conceit of The Underground Railroad: that the rail system spiriting slaves away is not metaphorical, but literal, running through underground tunnels linking the states.

More than a few reviews of the series, and also of the novel on which it is based, have referred to the story as “magical realism.” This is an inaccurate characterization. Magical realism is a mode of narrative in which one group or culture’s reality collides with another’s, and what seems entirely quotidian to one is perceived as magical by the other. That’s not what Colson Whitehead is doing, and, however lyrical, ethereal, and occasionally dreamlike Barry Jenkins’ visual rendering is, it’s not what the series is doing either. In truth, the literalization of the underground railroad is the least of Whitehead’s historical tweaks: The Underground Railroad is not magical realism, but nor is it alternative history (a genre that usually relies on a bifurcation in history’s progression, such as having Nazi sympathizer Charles Lindbergh win the presidency in 1940 in Philip Roth’s The Plot Against America). The Georgia plantation on which The Underground Railroad begins is familiar enough territory, not readily distinguishable from similar depictions in Uncle Tom’s Cabin or 12 Years a Slave. But then as Cora journeys from state to state, each state embodies a peculiar distillation of racism. Cora and Caesar first make it to South Carolina, which appears at first glance to be an enlightened and indeed almost utopian place: there is no slavery, and the white population seems dedicated to uplifting freed Blacks, from providing education and employment to scrupulous health care to good food and lodging. Soon, however, the paternalistic dimension of this altruism becomes more glaring, and Cora and Caesar realize that the free Blacks of South Carolina are being sterilized and unwittingly used in medical experimentation.

In North Carolina, by contrast, Blacks have been banished, and any found within the state borders are summarily executed. The white people of North Carolina declared slavery a blight—because it disenfranchised white workers. Since abolishing slavery and banishing or killing all of the Black people, the state has made the ostensible purity of whiteness a religious fetish. Cora spends her time in North Carolina huddled in the attic of a reluctant abolitionist and his even more reluctant wife in an episode that cannot help but allude to Anne Frank.

Whitehead’s vision—stunningly rendered in Jenkins’ adaptation—is less an alternative history than a deconstructive one. As Scott Woods argues, The Underground Railroad is not a history lesson, but a mirror, with none of “the finger-wagging of previous attempts to teach the horrors of slavery to mainstream audiences.” I think Woods is being polite here, using “mainstream” as a euphemism for “white,” and he tactfully does not observe that effectively all of such finger-wagging attempts (cinematically, at any rate) have tended to come from white directors and feature white saviour protagonists to make liberal white audiences feel better about themselves.

There are no white saviours in The Underground Railroad; there is, in fact, very little in the way of salvation of any sort, just moments of relative safety. As I still have a few episodes to go, I can’t say how the series ends; the novel, however, ends ambivalently, with Cora having defeated the dogged slave-catcher who has been pursuing her from the start, but still without a clear sense of where she is going—the liberatory trajectory of the underground railroad is unclear and fraught because of the weight of the experience Cora has accrued over her young life. As she says at one point, “Make you wonder if there ain’t no real place to escape to … only places to run from.” There is no terminus, just endless flight.

When I say The Underground Railroad is a “deconstructive” history, I don’t use the term in the sense developed by Jacques Derrida (or at least, not entirely). Rather, I mean it in the more colloquial sense employed by, for example, chefs when they put, say, a “deconstructed crème brûlée” on the menu, which might be a smear of custard on the plate speared by a shard of roasted sugar and garnished with granitas infused with cinnamon and nutmeg. If the dish is successful, it is because it defamiliarizes a familiar dessert by breaking down its constituent ingredients in such a way as to make the diner appreciate and understand them anew—and, ideally, develop a more nuanced appreciation for a classic crème brûlée.

So—yes, odd analogy. But I’d argue that Whitehead’s novel is deconstructive in much the same manner, by taking a pervasive understanding of racism and the legacy of slavery and breaking it down into some of its constituent parts. In South Carolina, Cora and Caesar experience the perniciousness of white paternalism of the “white man’s burden” variety—the self-important concern for Black “uplift” that is still invested in the conviction of Black culture’s barbarism and inferiority, and which takes from this conviction license to violate Black bodies in the name of “science.”

Thuso Mbedu as Cora and William Jackson Harper as Royal.

Then in North Carolina, we see rendered starkly the assertion that Blacks are by definition not American, and are essentially seen as the equivalent of an invasive species. This, indeed, was the basis of the notorious 1857 Supreme Court ruling on Dred Scott v. Sandford, which asserted that Black people could not be citizens of the United States. Though that precedent was effectively voided by the thirteenth and fourteenth amendments—which abolished slavery and established citizenship for people of African descent born in America, respectively—the franchise was not fully extended to Black Americans until Lyndon Johnson signed the Voting Rights Act a century after the end of the Civil War. The delegitimization of Black voters continues: while the current Trumpian incarnation of the G.O.P. tacitly depicts anyone voting Democrat as illegitimate and not a “real American,” in practice, almost all of the legal challenges to the 2020 election result were directed at precincts with large numbers of Black voters.

Later in the novel when Cora finds her way to Indiana to live on a thriving Black-run farm, we see the neighbouring white community’s inability to countenance Black prosperity in such close proximity, especially when Black people flourishing reflects badly on their own failures. The pogrom that follows very specifically evokes the Tulsa Massacre of 1921, when a huge white mob essentially burned down a thriving Black part of town.

What’s important to note here is that Whitehead’s deconstructive process is less about history proper than about our pervasive depictions of history in popular culture, especially by way of fiction, film, and television. Or to be more accurate, it is about the mythologization of certain historical tropes and narratives pertaining to how we understand racism. One of the big reasons why so many (white) people are able to guilelessly[1] suggest that America is not a racist nation, or claim that the election of a Black president proves that the U.S. is post-racial, is because racism has come to be understood as a character flaw rather than a systemic set of overlapping cultural and political practices. Think of the ways in which Hollywood has narrated the arc of American history from slavery to the Civil War to the fight for civil rights, and try to name films that don’t feature virtuous white protagonists versus racist white villains. Glory, Mississippi Burning, Ghosts of Mississippi, A Time to Kill, Green Book, The Help[2]—and this one will make some people bristle—To Kill a Mockingbird. I could go on.

To be clear, I’m not saying some of these weren’t excellent films, some of which featured nuanced and textured Black characters[3] with considerable agency; but the point, as with all systemic issues, is not the individual examples, but the overall patterns. These films and novels flatter white audiences—we’re going to identify with Willem Dafoe’s earnest FBI agent in Mississippi Burning against the Klan-associated sheriff. “That would be me,” we[4] think, without considering how the FBI—at the time the film was set, no less!—was actively working to subvert the civil rights movement and shore up the societal structures subjugating and marginalizing Black Americans, because in this framing, racism is a personal choice, and therefore Dafoe’s character is not complicit in J. Edgar Hoover’s.  

Gene Hackman and Willem Dafoe in Mississippi Burning (1988).

The white saviour tacitly absolves white audiences of complicity in racist systems in this way, by depicting racism as a failing of the individual. It allows us to indulge in the fantasy that we would ourselves be the white saviour: no matter what point in history we find ourselves, we would be the exception to the rule, resisting societal norms and pressures in order to be non-racists. Possibly the best cinematic rebuke to this fantasy was in 12 Years a Slave,[5] in the form of a liberal-minded plantation owner played by Benedict Cumberbatch, who recognizes the talents and intelligence of Solomon Northup (Chiwetel Ejiofor), a free Black man who had been abducted and illicitly sold into slavery. Cumberbatch’s character looks for a brief time to be Solomon’s saviour, as he enthusiastically takes Solomon’s advice on a construction project over that of his foreman. But when the foreman violently retaliates against Solomon, Cumberbatch’s character cannot stand up to him. In a more typical Hollywood offering, we might have expected the enlightened white man to intervene; instead, he lacks the intestinal fortitude to act in a way that would have brought social disapprobation, and as a “solution” sells Solomon to a man who proves to be cruelly sociopathic.

Arguing for the unlikeliness that most people could step out of the roles shaped for them by social and cultural norms and pressures might seem like an apologia for historical racism—how can we have expected people to behave differently?—but really it’s about the resistance to seeing ourselves enmeshed in contemporary systemic racism.

Saying that that is Whitehead’s key theme would be reductive; there is so much more to the novel that I’m not getting to. There is, to be certain, a place—a hugely important place—for straightforward historical accounts of the realities and finer details of slavery, even the more “finger wagging” versions Scott Woods alludes to. But what both Whitehead’s novel and Barry Jenkins’ adaptation of it offer is a deconstruction of the simplistic binarism of decades of us vs. them, good vs. bad constructions of racism that give cover to all but the most unapologetically racist white people. The current backlash against “critical race theory”—which I’ll talk more about in the third of these three posts—proceeds to a great extent from its insistence on racism not as individual but systemic, as something baked into the American system.

Which, when you think about it, is not the outrageous argument conservatives make it out to be. Not even close: Africans brought to American shores, starting in 1619, were dehumanized, brutalized, subjected to every imaginable violence, and treated as subhuman property for almost 250 years. Their descendants were not given the full franchise as American citizens until the Civil Rights Act and the Voting Rights Act of 1964 and 1965, respectively. Not quite sixty years on from that point, it’s frankly somewhat baffling that anyone, with a straight face, can claim that the U.S. isn’t a racist nation. One of the greatest stumbling blocks to arriving at that understanding is how we’ve personalized racism as an individual failing. It shouldn’t be so difficult to recognize, as a white person, one’s tacit complicity in a long history without having to feel the full weight of disapprobation that the label “racist” has come to connote through the pop cultural mythologization of racism as a simple binary.

NOTES


[1] I want to distinguish here between those who more cynically play the game of racial politics, and those who genuinely do not see racism as systemic (granted that there is a grey area in between these groups). These are the people for whom having one or more Black friends is proof of their non-racist bona fides, and who honestly believe that racism was resolved by the signing of the Civil Rights Act and definitively abolished by Obama’s election.

[2] The Help, both the novel and the film, is perhaps one of the purest distillations of a white saviour story packaged in such a way as to flatter and comfort white audiences. Essentially, it is the story of an irrepressible young white woman (with red hair, of course) nicknamed Skeeter (played by Emma Stone) who chafes against the social conventions of 1963 Mississippi and dreams of being a writer and journalist. TL;DR: she ends up telling the stories of the “help,” the Black women working as domestic labour for wealthy families such as her own, publishes them—anonymously of course, though Skeeter is the credited author—and thus gets the traction she needs to leave Mississippi for a writing career.

I can’t get into all of the problems with this narrative in a footnote; hopefully I don’t need to enumerate them. But I will say that one of the key things that irked me about this story, both the novel and the movie, is how it constantly name-checks Harper Lee and To Kill a Mockingbird as Skeeter’s inspiration. Lee might have given us the archetype of the white saviour in the figure of Atticus Finch, but she did it at a moment in time (it was published in 1960) when the subject matter was dangerous (Mary Badham, who played Scout, found herself and her family shunned when they returned to Alabama after filming ended for having participated in a film that espoused civil rights). By contrast, The Help, which was published in 2009 and the film released in 2011, is about as safe a parable of racism as can be told in the present day—safely set in the most racist of southern states during the civil rights era, with satisfyingly vile racist villains and an endearing, attractive white protagonist whose own story of breaking gender taboos jockeys for pole position with her mission to give voice to “the help.”

[3] Though to be fair, in some instances this had as much to do with virtuoso performances by extremely talented actors, such as Denzel Washington, Morgan Freeman, and Andre Braugher in Glory. And not to heap yet more scorn on The Help, but the best thing that can be said about that film is that it gave Viola Davis and Octavia Spencer the visibility—and thus the future casting opportunities—that their talent deserves.

[4] I.e. white people.

[5] Not coincidentally helmed by a Black director, Steve McQueen (no, not that Steve McQueen, this Steve McQueen).


Summer Blogging and Finding Focus

Why do I write this blog? Well, it certainly isn’t because I have a huge audience—most of my posts top out at 40-60 views, and many garner a lot less than that. Every so often I get signal boosted when one or more people share a post. The most I’ve ever had was when a friend shared a post I wrote about The Wire and police militarization on Reddit, and I got somewhere in the neighbourhood of 1500 views.[1] Huge by my standards, minuscule by the internet’s.

Not that I’m complaining. I feel no compulsion to chase clicks, or to do the kind of networking on Twitter that seems increasingly necessary to building an online audience, which also entails strategically flattering some audiences and pissing off others. The topics I write about are eclectic and occasional, usually the product of a thought that crosses my mind and turns into a conversation with myself. My posts are frequently long and sometimes rambling, which is also not the best way to attract readers.

Blogging for me has always been something akin to thinking out loud—like writing in a journal, except in a slightly more formal manner, with the knowledge that, however scant my audience is, I’m still theoretically writing for other people, and so my thoughts have to be at least somewhat coherent. And every so often I get a hit of dopamine when someone shares a post or makes a complimentary comment.

I started my first blog when I moved to Newfoundland as a means of giving friends and family a window into my new life here, without subjecting them to the annoyance of periodic mass emails. I posted in An Ontarian in Newfoundland for eight years, from 2005 to 2013, during which time it went from being a digest of my experiences in Newfoundland to something more nebulous, in which I basically posted about whatever was on my mind. I transitioned to this blog with the thought that I would focus it more on professional considerations—using it as a test-space for scholarship I was working on, discussions about academic life, and considerations of things I was reading or watching. I did do that … but then also inevitably fell into the habit of posting about whatever was on my mind, often with long stretches of inactivity that sometimes lasted months.

During the pandemic, this blog has become something akin to self-care. I’ve written more consistently in this past year than I have since starting my first blog (though not nearly as prolifically as I posted in that first year), and it has frequently been a help in organizing what have become increasingly inchoate thoughts while enduring the nadir of Trump’s tenure and the quasi-isolation enforced by the pandemic. I won’t lie: it has been a difficult year, and wearing on my mental health. Sometimes putting a series of sentences together in a logical sequence to share with the world brought some order to the welter that has frequently been my mind.

As we approach the sixth month of the Biden presidency and I look forward to my first vaccination in a week, you’d think there would be a calming of the mental waters. And there has been, something helped by the more frequent good weather and more time spent outside. But even as we look to be emerging from the pandemic, there’s a lot still plaguing my peace of mind, from my dread certainty that we’re looking at the end of American democracy, to the fact that we’re facing huge budget cuts in health care and education here in Newfoundland.

The Venn diagram of the thoughts preoccupying my mind has a lot of overlaps, which contributes to the confusion. There are so many points of connection: the culture war, which irks me with all of its unnuanced (mis)understandings of postmodernism, Marxism, and critical race theory; the sustained attack on the humanities, which proceeds to a large degree from the misperception that it’s all about “woke” indoctrination; the ways in which cruelty has become the raison d’être of the new Right; the legacy of the “peace dividend” of the 1990s, the putative “end of history,” and the legacy of post-9/11 governance leading us to the present impasse; and on a more hopeful note, how a new humanism practiced with humility might be a means to redress some of our current problems.

For about three or four weeks I’ve been spending part of my days scribbling endless notes, trying to bring these inchoate preoccupations into some semblance of order. Reading this, you might think that my best route would be to unplug and refocus; except that this has actually been energizing. It helps in a way that there is significant overlap with a handful of articles I’m working on, about (variously) nostalgia and apocalypse, humanism and pragmatism, the transformations of fantasy as a genre, and the figuration of the “end of history” in Philip Roth’s The Plot Against America and contemporary Trumpist figurations of masculinity.

(Yes, that’s a lot. I’m hoping, realistically, to get one completed article out of all that, possibly two).

With all the writing I’ve been doing, it has been unclear—except for the scholarly stuff—how best to present it. I’ve been toying with the idea of writing a short book titled The Idiot’s[2] Guide to Postmodernism, which wouldn’t be an academic text but more of a user manual to the current distortions of the culture wars, with the almost certainly vain idea of reintroducing nuance into the discussion. That would be fun, but in the meantime I think I’ll be breaking it down into a series of blog posts.

Some of the things you can expect to see over the next while:

  • A three-part set of posts (coming shortly) on history, memory, and forgetting.
  • A deep dive into postmodernism—what it was, what it is, and why almost everyone bloviating about it and blaming it for all our current ills has no idea what they’re talking about.
  • A handful of posts about cruelty.
  • “Jung America”—a series of posts drawing a line from the “crisis of masculinity” of the 1990s to the current state of affairs with Trumpism and the likes of Jordan Peterson and Ben Shapiro.
  • At least one discussion about the current state of the humanities in the academy, as well as an apologia arguing why the humanities are as important and relevant now as they have ever been.

Phew. Knowing me, I might get halfway through this list, but we’ll see. Meantime, stay tuned.

NOTES


[1] David Simon also left a complimentary comment on that one. Without a doubt, the highlight of my blogging career.

[2] Specifically, Jordan Peterson, but there are others who could use a primer to get their facts straight.


My Mostly Unscientific Take on UFOs

Over the past year or so, it has seemed as though whatever shadowy Deep State agencies are responsible for covering up the existence of extraterrestrials have thrown up their hands and said “Yeah. Whatever.”

Perhaps the real-world equivalent of The X-Files Smoking Man finally succumbed to lung cancer, and all his subordinates just couldn’t be bothered to do their jobs any more.

Or perhaps the noise of the Trump presidency created the circumstances in which a tacit acknowledgement of numerous UFO sightings wouldn’t seem to be bizarre or world-changing.

One way or another, the rather remarkable number of declassified videos from fighter pilots’ heads-up displays showing unidentified flying objects with odd shapes and flight capabilities has evoked an equally remarkable blasé response. It’s as if the past four years of Trump, natural disasters, civil tragedies, and a once-in-a-century (touch wood) pandemic have so eroded our capacity for surprise that, collectively, we seem to be saying, “Aliens? Bring it.” Not even the QAnon hordes, for whom no event or detail is too unrelated to be folded into the grand conspiracy, have seen fit to comment upon something that has so long been a favourite subject of conspiracists (“Aliens? But are they pedophile child sex-trafficking aliens?”).

Perhaps we’re all just a bit embarrassed at the prospect of alien contact, like having a posh and sophisticated acquaintance drop by when your place is an utter pigsty. I have to imagine that, even if the aliens are benevolent and peaceful, humanity would be subjected to a stern and humiliating talking-to about how we let our planet get to the state it’s in.

“I’m sorry to have to tell you, sir, that your polar icecaps are below regulation size for a planet of this category, sir.” (Good Omens)

Not to mention that if they landed pretty much anywhere in the U.S., they’d almost certainly get shot at.

And imagine if they’d landed a year ago.

“Take us to your leader!”
“Um … are you sure? Perhaps you should try another country.”
“All right, how do I get to Great Britain?”
“Ooh … no. You really don’t want that.”
“Russia then? China? India? Hungary?”
“Uh, no, no, no, and no.”
“Brazil?”
“A world of nope.”
“Wait–what’s the one with the guy with good hair?”
“Canada. But, yeah … probably don’t want to go there either. Maybe … try Germany?”
“Wasn’t that the Hitler country?”
“They got better.”

You’d think there would be more demand for the U.S. government to say more about these UFO sightings. The thing is, I’m sure that in some sections of the internet, there is a full-throated ongoing yawp all the time for that, but it hasn’t punctured the collective consciousness. And frankly, I don’t care enough to go looking for it.

It is weird, however, considering how we’ve always assumed that the existence of extraterrestrial life would fundamentally change humanity, throwing religious belief into crisis and dramatically transforming our existential outlook. The entire premise of Star Trek’s imagined future is that humanity’s first contact with the Vulcans forced a dramatic reset of our sense of self and others—a newly galactic perspective that rendered all our internecine tribal and cultural squabbles irrelevant, essentially at a stroke resolving Earth’s conflicts.

To be certain, there hasn’t been anything approaching definitive proof of alien life, so such epiphany or trauma lies only in a possible future. Those who speak with any authority on the matter are always careful to point out that “UFO” is not synonymous with “alien”—they’re not necessarily otherworldly, just unidentified.

I, for one, am deeply skeptical that these UFOs are of extraterrestrial origin—not because I think it’s impossible, just that the chances are infinitesimal. In answer to the question of whether I think there’s life on other planets, my answer is an emphatic yes, which is something I base on the law of large numbers. The Milky Way galaxy, by current estimates, contains somewhere in the neighbourhood of 100 billion planets. Even if only one tenth of one percent of those can sustain life, that’s still 100 million planets, and that’s in just one of the hundreds of billions of galaxies in the universe.
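For the back-of-the-envelope inclined, the arithmetic is easy to play with. Here is a minimal sketch in Python, where the planet count, the habitable fraction, and the galaxy count are all rough assumptions for illustration, not measured values:

# Back-of-the-envelope estimate; every input below is an assumption, not a measurement.
planets_in_milky_way = 100e9   # roughly 100 billion planets in our galaxy
habitable_fraction = 0.001     # suppose one tenth of one percent can sustain life
galaxies = 200e9               # "hundreds of billions" of galaxies

per_galaxy = planets_in_milky_way * habitable_fraction
print(f"{per_galaxy:,.0f} candidate planets in the Milky Way alone")
print(f"{per_galaxy * galaxies:.2e} candidates across the observable universe")

Run as written, this prints 100,000,000 for the Milky Way and something on the order of 10^19 for the observable universe, which is the whole force of the law-of-large-numbers argument.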

But then there’s the question of intelligence, and what comprises intelligent life. We have an understandably chauvinistic understanding of intelligence, one largely rooted in the capacity for abstract thought, communication, and inventiveness. We grant that dolphins and whales are intelligent creatures, but have very little means of quantifying that; we learn more and more about the intelligence of cephalopods like octopi, but again: such intelligences are literally alien to our own. The history of imagining alien encounters in SF has framed alien intelligence as akin to our own, just more advanced—developing along the same trajectory until interplanetary travel becomes a possibility. Dolphins might well be, by some metric we haven’t yet envisioned, far more intelligent than us, but they’ll never build a rocket—in part because, well, why would they want to? As Douglas Adams put it in The Hitchhiker’s Guide to the Galaxy, “man had always assumed that he was more intelligent than dolphins because he had achieved so much—the wheel, New York, wars and so on—whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man—for precisely the same reasons.”

To put it another way, to properly imagine space-faring aliens, we have to imagine not so much what circumstances would lead to the development of space travel as how an alien species would arrive at an understanding of the universe that would facilitate the very idea of space travel.

Consider the thought experiment offered by Hans Blumenberg in the introduction of his book The Genesis of the Copernican World. Blumenberg points out that our atmosphere has a perfect density, “just thick enough to enable us to breathe and to prevent us from being burned up by cosmic rays, while, on the other hand, it is not so opaque as to absorb entirely the light of the stars and block any view of the universe.” This happy medium, he observes, is “a fragile balance between the indispensable and the sublime.” The ability to see the stars in the night sky, he says, has shaped humanity’s understanding of itself in relation to the cosmos, from our earliest rudimentary myths and models, to the Ptolemaic system that put us at the center of Creation and gave rise to the medieval music of the spheres, to our present-day forays in astrophysics. We’ve made the stars our oracles, our gods, and our navigational guides, and it was in this last capacity that the failings of the Ptolemaic model inspired a reclusive Polish astronomer named Mikołaj Kopernik, whom we now know as Copernicus.

But what, Blumenberg asks, if our atmosphere were too thick to see the stars? How then would humanity have developed its understanding of its place in the cosmos? And indeed, of our own world—without celestial navigation, how does seafaring evolve? How much longer before we understood that there was a cosmos, or grasped the movement of the earth without the motion of the stars? There would always of course be the sun, but it was always the stars, first and foremost, that inspired the celestial imagination. It is not too difficult to imagine an intelligent alien species inhabiting a world such as ours, with similar capabilities, but without the inspiration of the night sky to propel them from the surface of their planet.[1]

Now think of a planet of intelligent aquatic aliens, or of creatures that swim deep in the dense atmosphere of a gas giant.

Or consider the possibility that our vaunted intelligence is in fact an evolutionary death sentence, and that that is in fact the case for any species such as ourselves—that our development of technology, our proliferation across the globe, and our environmental depredations inevitably outstrip our primate brains’ capacity to reverse the worst effects of our evolution.

Perhaps what we’ve been seeing is evidence of aliens who have mastered faster-than-light or transdimensional travel, but they’re biding their time—having learned the dangers of intelligence themselves, they’re waiting to see whether we succeed in not eradicating ourselves with nuclear weapons or environmental catastrophe; perhaps their rule for First Contact is to make certain a species such as homo sapiens can get its shit together and resolve all the disasters we’ve set in motion. Perhaps their Prime Directive is not to help us, because they’ve learned in the past that unless we can figure it out for our own damned selves, we’ll never learn.

In the words of the late great comedian Bill Hicks, “Please don’t shoot at the aliens. They might be here for me.”

EDIT: Stephanie read this post and complained that I hadn’t worked in Monty Python’s Galaxy Song, so here it is:

NOTES


[1] And indeed, Douglas Adams did imagine such a species in Life, the Universe and Everything—an alien race on a planet surrounded by a dust cloud who live in utopian peace and harmony in the belief that they are the sum total of creation, until the day a spaceship crash-lands on their planet and shatters their illusion. At which point, having reverse-engineered the spacecraft and flown beyond the dust cloud to behold the splendours of the universe, they decide there’s nothing else for it but to destroy it all.


Liz Cheney is as Constant as the Northern Star …

… and I don’t particularly mean that as a compliment.

Literally minutes before he is stabbed to death by a posse of conspiring senators, Shakespeare’s Julius Caesar declares himself to be the lone unshakeable, unmoving, stalwart man among his flip-flopping compatriots. He makes this claim as he arrogantly dismisses the petition of Metellus Cimber, who pleads for the reversal of his brother’s banishment. Cimber’s fellow conspirators echo his plea, prostrating themselves before Caesar, who finally declares in disgust,

I could be well moved if I were as you.
If I could pray to move, prayers would move me.
But I am constant as the northern star,
Of whose true-fixed and resting quality
There is no fellow in the firmament.
The skies are painted with unnumbered sparks.
They are all fire and every one doth shine,
But there’s but one in all doth hold his place.
So in the world. ‘Tis furnished well with men,
And men are flesh and blood, and apprehensive,
Yet in the number I do know but one
That unassailable holds on his rank,
Unshaked of motion. And that I am he.

Caesar mistakes the senators’ begging for weakness, not grasping until it is too late that their importuning is a ploy to get close enough to stab him.

Fear not, I’m not comparing Liz Cheney to Julius Caesar. I suppose you could argue that Cheney’s current anti-Trump stance is akin to Caesar’s sanctimonious declaration if you wanted to suggest that it’s more performative than principled. To be clear, I’m not making that argument—not because I don’t see its possible merits, but because I really don’t care.

I come not to praise Liz Cheney, whose political beliefs I find vile; nor do I come to bury her. The latter I’ll leave to her erstwhile comrades, and I confess I will watch the proceedings with a big metaphorical bowl of popcorn in my lap, for I will be a gratified observer no matter what the outcome. If the Trumpists succeed in burying her, well, I’m not about to mourn a torture apologist whose politics have always perfectly aligned with those of her father. If she soldiers on and continues to embarrass Trump’s sycophants by telling the truth, that also works for me.

Either way, I’m not about to offer encomiums for Cheney’s courage. I do think it’s admirable that she’s sticking to her guns, but as Adam Serwer recently pointed out in The Atlantic, “the [GOP’s] rejection of the rule of law is also an extension of a political logic that Cheney herself has cultivated for years.” During Obama’s tenure, she frequently went on Fox News to accuse the president of being sympathetic to jihadists, and just as frequently opined that American Muslims were a national security threat. During her run for a Wyoming Senate seat in 2014, she threw her lesbian sister Mary under the bus with her loud opposition to same-sex marriage, a point on which she stands to the right of her father. And, not to repeat myself, but she remains an enthusiastic advocate of torture. To say nothing of the fact that, up until the January 6th assault on the Capitol, she was a reliable purveyor of the Trump agenda, celebrated then by such current critics as Steve Scalise and Matt Gaetz.

Serwer notes that Cheney’s “political logic”—the logic of the War on Terror—is consonant with that of Trumpism not so much in policy as in spirit: the premise that there’s them and us, and that “The Enemy has no rights, and anyone who imagines otherwise, let alone seeks to uphold them, is also The Enemy.” In the Bush years, this meant the Manichaean opposition between America and Terrorism, and that any ameliorating sentiment about, say, the inequities of American foreign policy, meant you were With the Terrorists. In the present moment, the Enemy of the Trumpists is everyone who isn’t wholly on board with Trump. The ongoing promulgation of the Big Lie—that Biden didn’t actually win the election—is a variation on the theme of “the Enemy has no rights,” which is to say, that anyone who does not vote for Trump or his people is an illegitimate voter. Serwer writes:

This is the logic of the War on Terror, and also the logic of the party of Trump. As George W. Bush famously put it, “You are either with us or with the terrorists.” You are Real Americans or The Enemy. And if you are The Enemy, you have no rights. As Spencer Ackerman writes in his forthcoming book, Reign of Terror, the politics of endless war inevitably gives way to this authoritarian logic. Cheney now finds herself on the wrong side of a line she spent much of her political career enforcing.

All of which is by way of saying: Liz Cheney has made her bed. The fact that she’s chosen the hill of democracy to die on is a good thing, but this brings us back to my Julius Caesar allusion. The frustration being expressed by her Republican detractors, especially House Minority Leader Kevin McCarthy, is at least partially rational: she’s supposed to be a party leader, and in so vocally rejecting the party line, she’s not doing her actual job. She is being as constant as the Northern Star here, and those of us addicted to following American politics are being treated to a slow-motion assassination on the Senate (well, actually the House) floor.

But it is that constancy that is most telling in this moment. Cheney is anchored in her father’s neoconservative convictions, and in that respect, she’s something of a relic—an echo of the Bush years. As Serwer notes, however, while common wisdom says Trump effectively swept aside the Bush-Cheney legacy in his rise to the Republican nomination, his candidacy and then presidency only deepened the bellicosity of Bush’s Us v. Them ethos, in which They are always already illegitimate. It’s just that now the Them is anyone opposed to Trump.

In the present moment, I think it’s useful to think of Liz Cheney as an unmoving point in the Republican firmament: to remember that her politics are as toxic and cruel as her father’s, and that there is little to no daylight between them. The fact that she is almost certainly going to lose both her leadership position and a primary in the next election to a Trump loyalist is not a sign that she has changed. No: she is as constant as the Northern Star, and the Trump-addled GOP has moved around her. She has not become more virtuous; her party has just become so very much more debased.


Of Course There’s a Deep State. It’s Just Not What the Wingnuts Think it is.

There is a moment early in the film The Death of Stalin in which, as the titular dictator lies dying, the circle of Soviet officials just beneath Stalin (Khrushchev, Beria, Malenkov) panics at the prospect of finding a reputable doctor to treat him. Why? Because a few years earlier, Stalin, in a fit of characteristic paranoia, had become convinced that doctors were conspiring against him, and he had many of them arrested, tortured, and killed.

I thought of this cinematic moment—the very definition of gallows humour—while reading an article by Peter Wehner in The Atlantic observing that part of the appeal of QAnon (whose number of adherents has, counter-intuitively perhaps, grown since Biden’s election) lies precisely in its many disparate components. “I’m not saying I believe everything about Q,” the article quotes one Q follower as saying. “I’m not saying that the JFK-Jr.-is-alive stuff is real, but the deep-state pedophile ring is real.”

As [Sarah Longwell, publisher of The Bulwark] explained it to me, Trump supporters already believed that a “deep state”—an alleged secret network of nonelected government officials, a kind of hidden government within the legitimately elected government—has been working against Trump since before he was elected. “That’s already baked into the narrative,” she said. So it’s relatively easy for them to make the jump from believing that the deep state was behind the “Russia hoax” to thinking that in 2016 Hillary Clinton was involved in a child-sex-trafficking ring operating out of a Washington, D.C., pizza restaurant.

If you’ll recall, the “Deep State” bogeyman was central to Steve Bannon’s rhetoric during his tenure early in the Trump Administration, alongside his antipathy to globalism. The two, indeed, were in his figuration allied to the point of being inextricable, which is also one of the key premises underlying the QAnon conspiracy. And throughout the Trump Administration, especially during his two impeachments and the Mueller investigation, the spectre of the Deep State was constantly blamed as the shadowy, malevolent force behind any and all attempts to bring down Donald Trump (and was, of course, behind the putative fraud that handed Joe Biden the election).

Now, precisely why this article made me think of this moment in The Death of Stalin is a product of my own weird stream of consciousness, so bear with me: while I’ve always found Bannon & co.’s conspiracist depiction of the Deep State more than a little absurd, so too I’ve had to shake my head whenever any of Trump’s detractors and critics declare that there’s no such thing as a Deep State.

Because of course there’s a deep state, just one that doesn’t merit ominous capitalization. It also doesn’t merit the name “deep state,” but let’s just stick with that now for the sake of argument. All we’re really talking about here is the vast and complex bureaucracy that sustains any sizable human endeavour—universities to corporations to government. And when we’re talking about the government of a country as large as the United States, that bureaucracy is massive. The U.S. government employs over two million people, the vast majority of them civil servants working innocuous jobs that make the country run. Without them, nothing would ever get done.

Probably the best piece of advice I ever received as a university student was in my very first year of undergrad; a T.A. told me to never ask a professor about anything like degree requirements or course-drop deadlines, or, really, anything to do with the administrative dimension of being a student. Ask the departmental secretaries, he said. In fact, he added, do your best to cultivate their respect and affection. Never talk down to them or treat them as the help. They may not have a cluster of letters after their name or grade your papers, but they make the university run.

I’d like to think that I’m not the kind of person who would ever be the kind of asshole to berate secretaries or support staff, but I took my T.A.’s advice to heart, and went out of my way to be friendly and express gratitude, to be apologetic when I brought them a problem. It wasn’t long before I was greeted with smiles whenever I had paperwork that needed processing, and I never had any issues getting into courses (by contrast, in my thirty years in academia from undergrad to grad student to professor, I have seen many people—students and faculty—suffer indignities of mysterious provenance because they were condescending or disrespectful to support staff).

The point here is that, for all the negative connotations that attach to bureaucracy, it is an engine necessary for any institution or nation to run. Can it become bloated and sclerotic? Of course, though in my experience that tends to happen when one expands the ranks of upper management. But when Steve Bannon declared, in the early days of the Trump Administration, that his aim was “the deconstruction of the administrative state,” I felt a keen sense of cognitive dissonance in that statement—for the simple reason that there is no such thing as a non-administrative state.

Which brings us back, albeit circuitously, to The Death of Stalin. There is no greater example of a sclerotic and constipated bureaucracy than that of the former Soviet Union, a point not infrequently made in libertarian and anti-statist arguments for small government. But the question that rarely gets raised when addressing dysfunctional bureaucracy, at least in the abstract, is why it is dysfunctional. There are any number of reasons that question doesn’t come up, but I have to imagine a big one is that we’ve been conditioned to think of bureaucracy as inevitably dysfunctional, a sense reinforced by every negative encounter we have when renewing a driver’s license, waiting on hold with the bank, filing taxes, dealing with governmental red tape, or figuring out which prescriptions are covered by an employee health plan. But a second question we should ask when having such negative experiences is: are they negative because of an excess of bureaucracy, or too little of it?

The inability of Stalin’s minions to find a competent doctor is a profound metaphor for what happens when we strip the redundancies out of a system: in this case, the state-sponsored murder of thousands of doctors to satisfy a dictator’s paranoia leaves behind (at best) mediocre medical professionals too terrified of state retribution to exercise the dispassionate clinical judgment that is, of course, exactly what one needs from a doctor.

I’m not a student of the history of the U.S.S.R., so I have no idea whether anyone has written about the degree to which the ineptitude of the Soviet bureaucracy was a legacy of Stalinist terror and subsequent Party orthodoxy, in which actually competent people were marginalized, violently or otherwise; I have to assume there’s a substantial literature on the topic (certainly, Masha Gessen’s critical review of the HBO series Chernobyl has something to say on the subject). But there’s something of an irony in the fact that Republican administrations since Ronald Reagan’s have created their own versions of The Death of Stalin’s doctor problem through their evisceration of government. Reagan famously said that the nine most terrifying words in the English language were “I’m from the government, and I’m here to help,” and since then conservative governments—in the U.S., Canada, and elsewhere—have worked hard to make that a self-fulfilling prophecy. Thomas Frank, author of What’s the Matter With Kansas? (2004), has chronicled this tendency, in which Republican distrust of government translates into the gutting of social services and of government agencies from the Post Office to the various cabinet departments, dramatically diminishing the government’s ability to do anything. All the failures that then inevitably occur are held up as proof of the basic premise that government can’t get anything right (and that its basic services should therefore be outsourced to the private sector).

In my brief moments of hope, I wonder whether the Trump Administration’s explicit practice of putting hacks and incompetent loyalists in key positions (such as Jared Kushner’s bizarrely massive portfolio) made this longstanding Republican exercise too glaring to ignore or excuse. Certainly, the contrast between Trump’s band of lickspittles and Biden’s army of sober professionals is about the starkest we’ve ever seen between administrations. What I hope we’re seeing, at any rate, is the reconstruction of the administrative state.

And it’s worth noting that Dr. Anthony Fauci has been resurrected from Trump’s symbolic purge of the doctors.
