“What will be left of our self-conception once artificial intelligence becomes better than us at writing poems or making movies?” asks Yascha Mounk in an April 17, 2025 piece in Persuasion, titled “The Third Humbling of Humanity.”
As a writer and educator, I’ve thought a lot about that question since ChatGPT exploded into the world in fall 2022.
But Mounk’s claim that humanity is facing a crisis matched only twice before in human history provoked me. His trajectory of our humbling goes something like this: first came Copernicus, then came Darwin, and now we have, well, Sam Altman, who has made possible machines that can best humans at their own game: the humanities. It’s not that I don’t agree that GEN-AI presents us with a crisis. I just think that fretting about our reduced self-estimation is no longer productive or interesting.
Granted, I can only speak to my little corner of this vast discussion about where GEN-AI will take us. I’m not a programmer or policymaker. But as a college writing professor, I care deeply about the future of writing instruction and the cognitive development students undergo because they read and write in college.
In the past two and a half years, I’ve read hundreds of pages that speculate about the impacts of GEN-AI on society at large and on writing students in particular; I’ve participated in think tanks and trainings offered by my university and further afield;[1] and I continue to experiment with various approaches in the classroom, knowing that, as fast as AI is changing, any conclusions or successes I arrive at are a moving target.
Most of these offerings have been of considerable benefit. I’ve greatly appreciated the opportunity to engage with those who have taken GEN-AI’s threat — and promise — seriously. But then it happens: a fellow novelist bemoans how all of her blood, sweat, and tears have been “stolen” by AI and starts in on what-is-even-the-point-of-writing-a-novel-anymore; or a friend sends an article with a frightening headline about ethically questionable student or faculty behavior with regard to AI, and I find that my predominant feeling is impatience. Yes, I get it. Human good. AI scary. We no like.
That’s when I realized that not enough people are talking about the stuff I want to talk about when it comes to GEN-AI. We’re circling the drain, asking the same limited questions. After I read Mounk’s piece, I sat down and, in a flurry, wrote out several ideas that provoked me. (See the appendix below.) Most of them are feverish little hot takes that have accumulated without much oxygen since I began thinking about GEN-AI. Not surprisingly, most would benefit from proper ventilation, as it were.
In coming weeks, I plan to take up these ideas as I discuss my practical and intellectual experience with GEN-AI in hopes of thinking with you, my reader, in ways that are a little further afield than I’ve yet seen. Maybe in so doing you’ll direct me to the thinkers I’ve missed out on, thereby enriching later articles in the series before I even get to them. Or maybe you’ll feel provoked into your own new insights and share those with me here. In any case, it can’t hurt to keep asking good questions, can it?
Let’s be clear: I’m neither a champion of GEN-AI nor its detractor. I haven’t got the platform. What I do have is the Monday morning imperative: having to be ready for class. As a novelist and essayist, I feel both curiosity and alarm. As an educator, I feel a pedagogical obligation to take GEN-AI seriously. That’s it. But in taking it seriously, I believe our generalized inquiry, at least in the humanities, also needs to improve.[2] To that end, here is a preview of the articles I plan to post in coming weeks.
Anything You Can Do, GEN-AI Can Do Better. I’ll start with the anatomy of an assignment I gave in my recent undergraduate course, Reading and Writing Nature. To my astonishment, ChatGPT-4o seemed to have shed the tell-tale signs of machine-produced creative writing, forcing us all to consider whether the imagination is an exclusively human property. There is an answer, I believe, one that engages us with worthy questions about the nature of writing, the role of technology, and what it means to be human.
Letting Go of Literacy: Taking the Long View. There really is no such thing as “digital literacy.” After my close-up on a classroom experience in the first piece, I want to pull back and take a pan shot of the changes to literacy that happened long before GEN-AI came on the scene and are still happening. Thinkers like Walter J. Ong, Johanna Drucker, Donna Haraway, and Gregory Ulmer have long considered the relationship between writing and technology. Arguably, the Greek alphabet itself was an explosive technology.[3]
Practical Classroom Strategies for Teaching Writing Using AI. Contrary to what many may think, students are not of one mind about the use of AI in school. To some, cheating with AI is already second nature; others still refuse to go near it. Given how many recent headlines describe problems arising from AI use by students and teachers alike, such questions clearly have general relevance. I’ll share an update about what worked and what didn’t in my approach to GEN-AI in the classroom this term, along with proposed adjustments for fall term. Keeping up with GEN-AI’s exponential changes in capability poses the biggest challenge.
Why Nonbinary Thinking Matters For Our Future With GEN-AI. I’ll close the series with my ulterior motive (mwa mwa mwa): grounding this whole discussion in the theme of this Substack column, Between. Most thinking on AI, like Mounk’s, begins with an unquestioned categorical distinction between humans and machines. But just as there is no ontological certitude (or biological basis) for gender and race, neither is there for what defines the human. Categories are useful, imperfect, and, in the long view, impermanent ideas about how to organize our experience in the world. If we organize our experience and fearful expectations of GEN-AI around an uncontested binary of human vs. machine, the GEN-AI that results might fulfill those expectations. But it doesn’t have to.
Thanks for joining me.
Appendix
Here are the questions and concepts that tumbled out the day I read Mounk’s piece, which I will try to take up in various ways in the forthcoming series. They are raw, random and possibly contradictory. I share them because, for me, each has enough of a pulse to make it worth exploring further.
Which, if any, interests you the most and why?
1. All writers produce work that is, to a greater or lesser degree, intertextual. Typically, the better read the writer, the more intertextual the imaginative well from which they draw, which likely means better, more imaginative work. Arguably, AI is 100% intertextual. So it’s not surprising that it would produce work deemed by “experts” to have value. This was in response to Mounk’s statement that readers of the Odyssey could not distinguish between “expert” translations and OpenAI’s GPT-4o rendering.
2. If AI can produce a passable mystery novel or beach read that readers cannot distinguish from human-authored work, is the problem that we have been surpassed by GEN-AI or that we are too easily satisfied? If human writers are so unique or valuable, why are they producing imitable work? Put another way, if the value of human products lies in their unique human qualities, AI, by definition, wouldn’t be able to reproduce them.
3. When we envision an AI future, why do we persist in making the distinction between "human" and "AI"? Why do we not ask what we mean by "human" and "robot"? Is there value in thinking about the ways humans are already cyborg? Donna Haraway (and others) recognized this long ago, but not a single generalized article I've read on AI even approaches thinking about AI in this way.
4. Is there value in thinking about AI within a vaster framework, one that doesn't posit the binary of human vs robot? Might we take a cue from mycelial networks, which are neither plant nor animal, and ponder whether distributed consciousness flows across/through the human-constructed binary of the natural and human-made world, as it flows through everything else?
5. Most of my students read for one thing: information. Few have been taught how to experience a poem or to see its value in any non-utilitarian way. But in poetry, the reader’s experience is where the poem’s value lies. Does it matter so very much, its source? Some human writers were monsters, but we still enjoy their work.
6. Years of research in composition show that when students value the process of producing a written output, they learn more. Still, most students arrive in my classes believing that what matters most is the output — and even that doesn’t really matter because they’re just writing for a professor. These beliefs create the conditions that make cheating with AI rational (if not desirable) behavior. Do we care as much about changing those conditions as we do about our students’ use of AI?
7. No, AI is not “thinking” and certainly not “feeling” like humans do when it produces writing that moves us. But if humans value thinking and feeling, it’s likely that we will one day create AI that can think and feel as we do. (Or AI itself will make it happen.) One might even argue that if AI does produce poetry that moves us, it deserves to experience it, whatever that would mean.
8. Perhaps one thing that remains unique and beautiful about humans is that we value what we don’t know almost more than what we do know. We are seekers. (This may be how AI got created in the first place.) Our limitations define us. AI doesn’t recognize or understand limits. As far as I can tell, it lacks curiosity. HAL wanted to endure but still lacked curiosity. Even if it achieves singularity, AI will likely remain incurious. Incurious humans make me nervous, so I suppose that incurious AI should make me very nervous.
9. I think people find it deeply upsetting that AI’s results happen via computation. But we are so focused on outputs that I’ve seen almost nothing about process. In what ways do human and AI processing perform similar functions, and in what ways do they differ? I’m not sure how much the answers matter, but the questions do. Good questions can help us decide what kind of AI future we want. Rigged as the game is against us, AI is still not automatically or inherently bad, any more than the Internet is bad. How we behold a thing participates in what it becomes.
10. Human creativity isn’t going anywhere. It changed when people became literate. It changed with the rise of mass reproduction. It is changing still as we move from literate to what Gregory Ulmer calls “electrate” culture. And it will change as AI automates certain aspects of the creative process. The problem will be teaching young writers not to sidestep the “dark acre,” that period of chaos that’s inevitably part of the creative process. God walks; the devil calls for a limo, as they say. Still, new creative processes are emerging that leave literacy behind. That’s inevitable.
BTW, no AI was used to write these comments. But the next thing I'm going to do is feed this to Claude and ask its opinion.
Notes

[1] First stop for any writing teacher, grade school through graduate school, should be Leon Furze’s Practical AI Strategies, which offers a range of helpful introductory courses to ground you in the basics. And although I disagree with his argument, John Warner’s More Than Words: How to Think About Writing in the Age of AI is essential reading for any writing teacher trying to find a way forward with AI.
[2] Two recent worthy examples have nourished my thinking. The first, D. Graham Burnett’s “Will the Humanities Survive Artificial Intelligence?” in The New Yorker, is one of the few pieces surprisingly open to GEN-AI’s humanistic possibilities: “What we’re entering is a new consciousness of ourselves,” writes Burnett. And John Cassidy’s “Luddite Lessons” (The New Yorker, April 21, 2025), drawn from his book Capitalism and Its Critics: A History From the Industrial Revolution to AI, is a vital piece that turns on its ear the common view of Luddites as people opposed to progress; in so doing, Cassidy explains why we must embrace AI the way the Luddites embraced (yes) industrialization: with a balanced sense of its effects on workers and the understanding that they needed to shape the direction of new technology “before the choices narrowed.”
[3] See Andrew Digh and Kevin Cummings, “Machine Writing, Learning, and the Disappearance of the Pen.”