June 19, 2025 - 7:00am

The worry with artificial intelligence isn’t just that machines are becoming smarter, but that the people who use them are becoming dumber. We already know that students routinely use Large Language Models (LLMs) to do their homework. The fact that their teachers increasingly use the same technology for marking raises obvious questions about the redundancy of human thought. But how much hard evidence is there that artificial intelligence is causing human stupidity?

On this question, a major new study led by MIT’s Nataliya Kosmyna provides genuine cause for concern. The researchers compared the essay-writing performance of three groups of students recruited from universities in the Boston area. The first group used an LLM to help them, the second had access to a conventional search engine, and the third had only their brains to rely on. Multiple methods were used to assess each participant’s performance, including language analysis, questionnaires, and electroencephalogram (EEG) measurement of brain activity.

The EEG analysis found significantly different levels of brain activity across the three groups: “brain connectivity systematically scaled down with the amount of external support”. The other metrics aligned with this result. For instance, students in the brain-only group were more likely to remember what they’d written and to feel a sense of ownership over it, while the LLM users were at the opposite end of the scale. Perhaps most worryingly, the LLM group “produced statistically homogeneous essays within each topic, showing significantly less deviation compared to the other groups”. In theory, AI is a powerful tool for enquiry; in practice, it gives people the ability not to think for themselves.

The authors of the study conclude that LLM use is associated with a “likely decrease in learning skills”. Of course, as good academics, they stress that further research is needed — especially longitudinal studies into the lasting impact of LLM use. But assuming that these findings are replicated, as common sense might lead us to expect, what can we do about this situation?

LLMs have already had a major impact on the education of Zoomer students — and their Generation Alpha successors will have no memory of a world without AI. The technology continues to develop at breakneck speed, insinuating itself across the internet and into everyday life. We can prevent AI-enabled cheating in supervised examinations, but the real challenge is preserving the conditions in which students consistently learn things for themselves instead of having knowledge spoon-fed to them by machines. We’ve already seen what the mechanisation of physical effort has done to our bodies, so will we just sit there while our minds become flabby too?

In his 2008 novel Anathem, Neal Stephenson imagines a society in which walled institutions are used to segregate scientists and philosophers — as opposed to monks and nuns — from society. Depending on the strictness of the orders to which they belong, the robed scholars are only allowed access to the outside world once a year, once a decade or once a century. At the furthest extreme are the mysterious Millenarians, who haven’t been seen for a literal age.

The time may be fast approaching when our own universities need to rediscover their ecclesiastical roots — and provide a cloistered environment in which the brightest minds can develop free from the influence of AI.


Peter Franklin is Associate Editor of UnHerd. He was previously a policy advisor and speechwriter on environmental and social issues.