I teach American literature. I have been teaching at the university level for three decades: UC Berkeley, Cornell, Yale, Northeastern. I love teaching – at least I have always loved it in the past (except grading papers, of course, which is hellish). The new semester starts next week and at this moment I’m deeply afraid of what awaits—my anxiety is through the roof. I’m considering beta blockers.
Not because of the possibility of a mass shooting, nor because right-wing student plants might covertly film my slightly snarky anti-capitalist asides and post them online. Nor because Trump and company are working hard to destroy higher ed, and I just got an email from my chair saying that our research and classroom and photocopying funds have more or less disappeared.
I am terrified because of AI. I’ve been on sabbatical for the last year and during that time, AI has metastasized—transformed from party trick to infrastructure.
Just two years ago you could teach a class and almost ignore it. I didn’t ignore it then, because in teaching introductory literature classes to English majors, I also teach students to write analytical essays, which means that, bottom line, I aim to teach them how to think about words and to think with words. But my previous effort to address AI was modest: we crafted a class-agreed-upon AI policy that required a statement from each student about how they had or had not used AI in developing or editing (not writing) any given essay, and whether it had been useful or not. It’s a policy that now seems flimsy and quaint in the face of the torrent of AI pop-ups and extensions and chatbots. Rotary-phone kind of quaint. Bless-her-heart kind of quaint.
I have no energy for policing my students. I will not be spending any time putting student papers through an AI detector. What I want to do is teach, not police. What I most want my students to know and to understand and to practice is this: Writing is thinking.[1]
You have likely heard this before but it bears repeating because it is the essence of the dilemma that AI poses for writers and for teachers of writing and thinking.
Writing is thinking.
What exactly does this mean? Most concretely it means this: when you sit down to write something, you have an idea in your head that you want to express in words. As you try to find the right words to suit the meaning you would like to convey, something alchemical occurs. Something a little magical. The idea that you intended to express starts to shapeshift because the words that you are searching for, that you land on imperfectly and try on for size and choose among, start to suggest new possibilities to you; they gently twist and texture your idea or wrench it suddenly in an unexpected way.
The words themselves create wormholes, open unexpected vistas, demand accounting for unforeseen connections.
All at once, unaccountably, your idea has grown into something new. Your dalliance with the written word has honed your insight in ways that your solo inner monologue could never have accomplished. This by way of the clumsy encounter between you—the writer—and language: a language that you did not create, that was handed to you by others and so works always imperfectly—language that fits always imprecisely with respect to what you would like to express and yet is the one and only way that your idea will assume meaning for others so as to live outside of your head and exist in the world you share with others and maybe matter to someone else as much as it matters to you.
The French philosopher Jacques Rancière says that man is a political animal who is led astray by language.[2] I love this axiom for its mash-up of Aristotle and Jacques Lacan. Man is a political animal (in the words of Aristotle: we are fundamentally interested in community and power), and we are always trapped in a chain of signifiers not of our creation (per the psychoanalyst Lacan, who says our deepest selves are constituted in and through language and its structures of reference and deferral).
The word “astray” is especially apt—it brings to mind a stray cat prowling off into dark alleys or sniffing after the hidden remains of forgotten feasts. Where the good stuff is. That’s also where language takes us. Both toward what we want to say but also out of our way, onto unexpected paths. And who would not welcome the unexpected possibility of a new thought, a new path in the darkening, narrowing horizon that straitens our ideas and hopes today at every turn?
This is why Audre Lorde says poetry is not a luxury. Because poetry allows us not just to dream, but to bring into being the possibility of a different future and a different past than the one in which we often feel trapped: “Poetry is the way we help give name to the nameless so it can be thought. The farthest external horizons of our hopes and fears are cobbled by our poems, carved from the rock experiences of our daily lives.”
AI cobbles but it does not carve. It also does not hope, fear, stray, or cherish.
This is how AI works: it searches through a vast sea (a corpus) of written words and sentences and paragraphs (appropriated without permission or payment) and it chooses the word that is statistically likely to follow the words that came before, according to the patterns its algorithm has learned. It generates a likely imitation of meaning, not because it is choosing a word for its referential acuity, or its sonic weight, or its assonance or dissonance or etymology or association with your favorite poem in childhood or your mother’s voice: AI chooses words based on the numerical frequency with which one word has followed another in a vast corpus of already written, already chosen words.
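For readers who want the mechanism made concrete, here is a deliberately crude sketch in Python: a simple word-frequency table rather than the enormous neural networks behind real chatbots, but the principle (the next word is chosen by statistical likelihood, not by meaning) is the same. The corpus and function names here are my own invention, purely for illustration.

```python
import random
from collections import Counter, defaultdict

# A tiny corpus of already-written words (real systems ingest billions).
corpus = ("the cat sat on the mat and the cat slept on the mat "
          "and the dog sat on the rug").split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n=8):
    """Build a 'sentence' by repeatedly sampling the next word in
    proportion to how often it followed the previous one, with no
    regard for meaning, sound, or association."""
    out = [start]
    for _ in range(n):
        counts = follows[out[-1]]
        if not counts:  # dead end: this word never appeared mid-corpus
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

Every run produces a plausible-looking string of words, each chosen only because it has followed the previous word before. Scale the table up by a few billion parameters and you have the imitation of meaning described above.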
Many brilliant writers have shared thoughts about the dangers of AI. I love one writer’s brutal summary of the salient points: “Personally, I think that the devastating environmental costs of AI, or the fact that it’s the product of massive theft from actual human artists, or that it’s transparently a tool of fascism should each be more than enough to keep anyone from using it for any reason at all (yes, even that small use you’ve convinced yourself is harmless), let alone the combination of all of these reasons.” I agree. Twenty-four books and essays I have written have been stolen by AI without my permission. Years of my writing life. Students who feed their work into the maw of AI for editing or refinement will be fattening the beast further.
I think about language sometimes—and especially large language models—in relation to a passage from Faulkner’s Absalom, Absalom! that puts a finger on our struggle to be human:
“You get born and you try this and you don't know why only you keep on trying it and you are born at the same time with a lot of other people, all mixed up with them, like trying to, having to, move your arms and legs with strings only the same strings are hitched to all the other arms and legs and the others all trying and they don't know why either except that the strings are all in one another's way like five or six people all trying to make a rug on the same loom only each one wants to weave his own pattern into the rug; and it can't matter, you know that, or the Ones that set up the loom would have arranged things a little better, and yet it must matter because you keep on trying or having to keep on trying and then all of a sudden it's all over.”
If language is the warp and weft—the strings we don’t choose that bind us together and constrain us—then AI is just the surrender. Faulkner’s image is dark, but darker still if you give in to it, and that is what AI invites. Just let everyone else pull the strings, or rather, let the loom-makers pull the strings and give up altogether because you—your thoughts, your ideas, the particular tenor of your dog’s whine, the smell of coffee that your partner makes in the drip coffee-maker, the sound of rain on the roof in your attic room—the words and sensorial connections that have carved concepts and emotions into the folds of your brain—do not matter.
Don’t give up your power. That’s what I want to tell my students. Don’t give up your struggle with words, your dance with language, your failures and successes that happen when you choose the words that come next, as clumsy as they may seem.
What to Do?
What will I do when I enter the classroom next week? How do I convince my students to keep writing with their own brains? I am considering starting the semester by writing one sentence on the board:
AI makes you stupid. Discuss.
Arguably—and I get this—AI does not make you stupid if what you would like to know is what words many, many other writers have put together in order to create meaning on a given topic in the past. It’s a vast store of information. As well as misinformation. (It can’t tell the difference.) What do people have to say lately about mitosis? I don’t know, and I’m sure I can learn many things about mitosis and what has been said about it from AI. Or supply chain management. Lanternflies, probiotics, anaphylaxis.
But why do I care about mitosis and what does the word mitosis activate in me—connect with, disrupt, rearrange? That’s what will appear on the page when I put the sentence together myself, even if I am doing so in a way that is avowedly informational rather than, say, poetic. Because I will have to think about how the word mitosis fits with the words I place it in contact with.
Here is what my students will face. Pressure to complete papers in five different courses, a world in which they are told that AI is the future and they need to get on board fast, a learning management system (LMS if you want to trade in the initialisms of academia today) in which “Claude” is now fully integrated by the university. (What were the finances of that deal, I wonder?)
Most recently I made the grave mistake of learning about Grammarly’s new grading-prediction and essay-rewriting tool. Students are invited to feed in the prompt for the assignment as well as the professor’s name and the course title. Grammarly then searches the web for information about the professor (in the case of my students, that would be me) and uses whatever it finds to rewrite a completed essay, now tailored to what it believes (based on its internet sleuthing) the professor would give the highest grade to. And in case a student is worried that the professor might detect this particular kind of subterranean fawning, Grammarly will also feed the paper through an AI detector and rearrange the words until they don’t look detectably AI generated.
I went down the rabbit hole of test-driving this feature of Grammarly on a paper assignment of mine, asking it to grade a paper a student wrote for a class of mine a decade ago. Grammarly gave the paper a B+ and said some things that I might have said—sharpen your thesis, focus your argument—but nothing that I actually did say in the comment I wrote when I gave the student a C-, perhaps the most salient of which was “Please make an appointment to come see me and let’s talk about how to do a close textual analysis.” AI is notoriously incapable of close reading: it can’t connect the inner workings of a word, a phrase, a sentence, to a larger context in any meaningful way. That requires thinking, choosing, connecting the unconnected. Fundamentally, AI doesn’t do language; it does numbers. It turns language into numbers and then it does numbers.
Here is what I will face: the temptation to throw up my hands in despair. To go through the motions. To trick the students so that I can catch them. To have them write in blue books. To retire early. To have AI grade the papers because AI can not only write papers now, but also grade them. I am not fond of grading—did I mention that?
Think of the Möbius strip we have created: if the papers are written by AI, and I use AI to grade them, no human brain cells will be taxed in the process. A seamless loop of the already written, already said, already known. No thinking involved.
My favorite part of teaching—the part that makes even grading papers worthwhile—is when I see a student arrive at a new understanding of a sentence, a poetic line, a concept, the world around them. The moment when you can tell that something inside of them shifts and a door opens and they walk through it and know that they did it, with their own thinking process. When the hands around the circle rise tentatively and then eagerly: and, and, and there is so much for everyone to say about something—like a line of poetry—that only moments ago seemed opaque, lifeless, inert. Suddenly these same words are everything, for just this moment when meaning is bursting forth.
"Beautiful and tough as chestnut
stanchions against our nightmare of weakness."
These are lines of poetry by Audre Lorde that she quotes in her essay on the necessity of poetry. You can build a house in them. You can dwell in them.
This is what I love. Thinking together. Being led astray by language. Arriving someplace new.
I’m so deeply afraid the doors are closing. That I won’t be able to help prop them ajar. That’s what scares me about stepping into the classroom next week.
[1] For some impressive and helpful iterations of this idea, see the superb post “Botticelli, AI, and Missing the Point,” the New York Times op-ed “I Teach Creative Writing. This Is What A.I. Is Doing to Students,” and an episode of the delightfully named podcast/Substack “Delusions of Grammar” with the writer Kyle Beachy, who recently taught a workshop called “To Write is to Think and to Think is Human” that I wish I could have taken.

[2] At least I think he said it. I’ve tried to find the quote but cannot. The idea, however, has been indelibly etched in my mind as associated with him, so let’s call it a loose citation. It’s possible that I have bungled it or made it up, and in any case it would have originally been written in French, so who knows what Rancière actually wrote or said. But this is what, in the phrase I now hold dear, I believe him to have said.
This is brilliant, Elizabeth. What I worry about are the pressures students face and the temptation of shortcuts, which will just cheat them of training their brains to think freely, outside the confines of the garbage and misinformation they’re constantly fed. To avoid wrestling with the beast, to avoid failing, a necessary doorway we all have to pass through if we want to learn, grow, and combat constraints. This generation is taught to do everything quickly and without error. Everything fired at high speed. No. Slow down. Think. Make mistakes and build slowly, errors and all. Higher education shouldn’t be about grades, outcomes, and gaming the system. It should be about the incredible privilege of getting to spend four (or more) years of study learning to think critically about the world.
I would love to know how that discussion goes on day one of your class. It would likely reveal the pulse of your class and who is likely to use it or not, I would think. I can’t imagine what it must be like to have to grade these papers! I like the point you make: you’re not there to police but to teach people how to think.