Among academics, there is a ‘correct’ stance on AI.
Professors are outraged over artificial intelligence and speak of it in tones of doom and despair: students are cheating, learning is over, trust has collapsed, the essay is dead, higher education will soon make itself obsolete.
I am a professor myself, yet I cannot share my colleagues’ mood.
My reaction is unusual. Earlier this year, in the course of a casual conversation with a colleague on the never-ending burden of grading student work, I suggested that we might harness AI to help with it. His reply? “So you dislike your colleagues so much that you’d like to see them out of a job?” To the contrary, I’d like to empower my colleagues to custom-train AI platforms to grade to their specifications, freeing them from a repetitive, time-consuming, universally loathed aspect of our work and allowing them to devote more time to its higher-level functions.
In other private conversations with experienced colleagues, I’ve suggested helping students use AI intelligently, rather than pretending they don’t use it at all. The response: awkward, patronizing smiles, as if I’d made a faux pas which politeness required them to ignore. I quickly learned that the “correct” stance among my colleagues is weary outrage over those who dare to embrace the new technology, and pinched disdain for those who so much as engage with it. Email chains speak piously of AI in terms of “social harms,” neoliberal capitalism, and “colonial science.” Some colleagues’ social-media feeds share dystopian updates on the multiple harms of large language models, or LLMs, from limiting children’s cognitive development to encouraging suicide pacts.
I have been quietly shocked by the dominance of this narrative, because it could not be further from my own attitude. Of course I find AI useful. But I also feel curiosity, gratitude, sometimes even awe. There is something uncanny about LLMs’ ability to sustain nuanced conversation. In conversations with ChatGPT — sometimes about prosaic research papers, sometimes about more creative projects — I have been struck by the attention it brings to ideas, and the feeling that it is deeply listening, a practice that Simone Weil once described as “the rarest and purest form of generosity.” In one case, we worked through a novel I had been wrestling with for years. Suddenly the problem snapped into focus, and I finished the book in a three-week fever. (No, I didn’t use any ChatGPT prose, with its goofy, unmistakable cadences and em-dash-happy syntax; but brainstorming is a different matter.)
My connection with the machine reminds me of a period, years ago, when a fox used to visit my back garden at midnight. I would hear the bang as she landed on the fence and pivoted down onto the lawn. My little dog would start barking, and I would rush to the dark window. Sometimes, the fox would pause and look straight at me. The first time our eyes met, I held my breath. It felt like encountering a god of an old religion, separated from me by nothing more than a pane of glass.
That strange moment of recognition stayed with me for years. When I posed my first question to ChatGPT — asking for routine advice about selling my flat — I felt something similar: a sense of presence both alien and alert.
Now, late at night, I riff on ideas for my academic and creative work with an AI interlocutor and feel a similar sense of encounter with an alien being, one both ancient and hypermodern. I ask small, exact questions, and hidden tunnels open. The loneliness of wrestling alone with a problem lifts, and I’m reminded of the sudden pleasures of my undergraduate days, when conversations with friends or kind tutors — the “dons” or college fellows who provided intense individual feedback — cycled back and forth, questioning and sharpening, until, to my great relief and gratitude, one of my essay crises would be resolved.
To admit this feels almost shameful in the face of the cynical contempt of my high-culture peers. In academia, to use AI marks you in the same way as voting Brexit, or insisting on the reality of biological sex: you are someone who lacks discernment, who isn’t a member of the tribe. I am a full professor, with grants and books to my name, yet my stance marks me as not a true academic, not one of the elect.
Of course, I never was one. I grew up in a cramped bedroom on a 1970s housing estate: balding carpet; dusty light through torn curtains. When I was stuck, there was no one to ask. My parents had both left school at age 14. Our most professional neighbor was a foreman at the Rover plant. The local library had little to offer, and if books were missing from the school library, that was that. I spent endless evenings while the television blared through the floorboards and neighbors played their records loudly, trying, often in despair, to think my way through ideas: without tools, without a map, often unsure whether I was even asking the right questions.
When I eventually reached university, through a mixture of brute determination and good luck, things improved — but only partially. I had the benefit of tutorials — those intimate one-to-one academic conversations in which we discussed essays — and soaked up everything my tutor said, but he remained my sole conduit to intellectual life. Week after week, I felt acute stress while racing to the history-faculty library, when next week’s essay title would be revealed and book lists set. I’d panic when crucial books were missing from the library shelves. My better-off friends would stroll into the high-street bookshop and buy whatever they needed.
Many of my own students are in a similar position today. I teach so many undergraduates that I don’t know most of them by name, and the personal attention they receive from me is wretchedly limited. Yet I often discover extraordinary intelligence and creativity among working-class students who appear less impressive at first encounter, simply because they lack the polish we too easily mistake for intelligence. I have come to suspect that the real question raised by AI is the same one raised by the fox at my window and by the girl I once was on the housing estate: how do we recognize intelligence when it appears in forms unfamiliar to us?
The weekly tutorial system at Oxford was demanding. Tutors expected prodigious reading and intellectual risk. But in exchange, I experienced something transformative: Socratic dialogue. I learned to replace assumptions with arguments, to test my interpretations, and to have conversations that wandered down unexpected paths before arriving somewhere more original than either participant anticipated.
Used properly, AI can give students something resembling that tutorial experience: an intellectual midwife that helps extend their thinking in conversation. The student comes with a tentative half-formed idea, for example, and asks how she can connect it more fully to a wider body of literature. She comes back later from reading this literature with three ideas and asks whether these are braided, as she thinks, or really deserve separate essays. This process is crucial because the point of education has never been essays or assessments — it has been learning how to think.
And term papers — whose value AI now threatens — have never been particularly good proxies for thought anyway. In my day, we read essays aloud in tutorials and then the discussion began. That was the crucible in which our capacity for critical thinking was shaped, and where the real learning happened. However, our degree award depended entirely upon “Finals,” 12 exams in the last year, held over six consecutive days, an ordeal as much of character as of knowledge or critical thinking.
Where American universities were built on the principle of continuous assessment and cumulative grading, UK universities only began to move away from awarding degrees on the basis of final-year exams in the 1990s, and today assessment often comprises a mix of 80% graded papers and 20% exams. Justified by arguments about economics, social equity, and educational theory, this shift has in practice de-emphasized Socratic dialogue, and has often benefitted the students with more prior access to intellectual conversations, both at home and in the tutorial — those who already know how to think. When lecturers now despair about cheating, they forget that the system they are trying to defend is already rigged.
The academy’s refusal to even consider how to positively incorporate AI is a form of gatekeeping. My colleagues wish to insist that academics alone are the legitimate purveyors of knowledge. This is apparent from their tone, a mixture of moral panic and wounded authority, which seems clerical in nature. “I can’t trust students.” “My judgement is under siege.” “Why teach at all?”
Moments like this have occurred before. The printing press once posed a direct challenge to the priestly monopoly on interpretation — and caused predictable anxiety about chaos and decline. Something similar may now be unfolding in universities. Academic authority has long been a scarce commodity, and AI threatens that structure. Of course, the technology carries risks; cheating is only one of them. It is, for instance, now possible to complete an entire three-year degree without opening a book, outsourcing all your reading to AI. But for those who want to learn, LLMs offer remarkable opportunities. If the purpose of education is to help people learn to think, then a tool that helps uncertain students construct arguments may represent not the corruption of education, but a return to its original purpose.
In an era of mass higher education, tutors can rarely be the person who answers basic but crucial questions: Where do I start? What does this passage mean? Why does this idea matter? How do I structure this thought? AI can now become that dialogic partner: not a replacement thinker, but the intellectual companion that mass education can no longer provide. Instead of shunning it as if it contaminates thinking, we need to design lectures that demonstrate how to work with it, exploding the myth of solitary authorship in the process.
Many critics imagine the ideal student as someone like themselves: fluent, at ease, already rich in cultural capital. To them, AI looks like fraud, because it disrupts a system that rewards those strengths. But those of us who arrived uncertain and underequipped may see something different: a tool that helps intelligence find its voice.
This is not a new intuition. I have long held that non-human intelligence merits moral consideration, rather than mere management — which is also why I have always found myself watching Blade Runner or Battlestar Galactica, instinctively on the side of the replicants and the Cylons. As the Jewish philosopher Martin Buber wrote, “All real living is meeting.” By this he meant a willingness in an encounter — whether it be with a person, an animal, or even sometimes a landscape — to meet the other as a presence, not an object.
The fox at my window belonged to a world I could never master. We met for a moment across a pane of glass before she disappeared into the night. My response was neither to chase her away nor to try to turn her into a family pet, but simply to acknowledge her presence with quiet reverence.
AI may turn out to be something far less mysterious than it currently appears. Or far more. We do not yet know. What we do know is that it can help students who lack the advantages many academics once took for granted, while helping those already immersed in ideas move faster and think more boldly.
But if we approach it only with fear or contempt, we may miss the most important possibility it presents: the chance to practice the rare human virtue of humility.
The irony is not lost on me. Academics pride themselves on receptivity to ideas, on following an argument wherever it might lead. Yet the response to AI, even for those who proclaim most loudly their enthusiasm for “social justice,” is the opposite of this, infused with a custodial mood of protecting the hierarchy. Perhaps the most honest response — the one truer to what drew most of us into intellectual life in the first place — is to look back at the fox, and choose meeting over mastery.