Cottesmore School, a boarding prep school in West Sussex, has appointed an AI robot as its “principal headteacher”. The school worked with an artificial intelligence developer to create the robot, called Abigail Bailey (who is, of course, young, attractive and female), to support the school’s headmaster on a range of issues, such as writing school policies, helping pupils with ADHD and answering questions.
This is a classic case not of whether we could use technology, but of whether we should. Much has already been made of artificial intelligence’s potential to revolutionise the ways in which we learn, teach and manage our administrative workload, such as by automating repetitive, time-intensive tasks. AI could even eventually offer a solution to the teacher recruitment crisis. In Mississippi, teacher shortages have forced districts to turn to online education programmes. Earlier this year, Cottesmore School advertised for a Head of AI to embed technology into the curriculum, but the headmaster eventually decided to appoint another robot to the role, which feels like a futuristic take on teachers marking their own homework.
Cottesmore head Tom Rogerson believes that children and adults should be taught to make robots their “benevolent servants”, but there are two main problems with this attitude. The first is that AI apps are not benevolent: they are biased, and they are people-pleasers more than servants. Research by the University of East Anglia shows that ChatGPT, the fastest growing consumer application in history, has a “significant and systemic left-wing bias”, while research by the University of Munich also confirms its “pro-environmental, left-libertarian orientation.”
ChatGPT may profess neutrality, but biases are embedded in every aspect of its system: from problematic training data and skewed learning algorithms that teach it to prioritise some types of information over others, to the prejudices of the humans designing its processes and giving feedback on its answers. This matters less when it is making up poetry and performing other party tricks, but when it has real-life applications, such as criminal facial recognition, deciding who gets a mortgage or evaluating job applications, it is significant.
The second is that it undermines intellectual integrity: if the headmaster can be seen to cut cognitive corners, why shouldn’t the students? AI has already transformed the task of checking for plagiarism from a minor inconvenience into an almost impossible undertaking; as a teacher, I now try to get my GCSE students to write their essays in class, but this takes up valuable lesson time. I also worry about the impact this will have on students who mentally checked out after the pandemic and already see school as “optional”, a problem exacerbated by soaring absence rates.
We also need to acknowledge the limitations of current AI-powered educational tools, such as a lack of creativity and originality and a limited understanding of context; my own experience of asking ChatGPT to produce GCSE answers is that it is very good on style and less good on substance. These systems will improve, but they cannot replicate the relationships that are at the heart of face-to-face teaching. Many virtual charter schools fail, or see huge student turnover, because students are motivated by the social interactions that take place in school, not through a screen.
Join the discussion
Pretty sure I had a robot principal when I was in school.
We all did – a secret government experiment they are only now being open about.
How long before one of the students tries to develop an “inappropriate” relationship with Abigail, The Talking Head?
ChatGPT has already shown a propensity for encouraging such relationships. And would the school ban the student for conversing with a robot about their “emotional problems”?
Hot for (AI) Teacher.
That is going to be one of the first things that happens. The AI teacher will be retired shortly after this incident.
I joined AIChat a week ago, and I’m not impressed. I asked it to write a Shakespearean sonnet with a theme of melancholy. It spat back two sonnets. And they were a mess. The rhyme scheme was incorrect in both poems: one did not have a two-line couplet (gg). They were not written in iambic pentameter. And finally, the clichés were tripping over each other (“Now harbors sadness, like a stormy sea”; “The sunken eyes, now windows of the soul”). I’ll stick with human poets. (And I deleted AIChat.)
Remember, these current AIs are baby versions of the gods they will become. Human-made art will not be commercially viable for much longer, and will perhaps survive for a time in some small niche of the market for whom authenticity still counts.
Nonsense. Humans will continue to produce art across all mediums for as long as they have consciousness; art began the moment consciousness did. It may well have a premium value, but that’s another matter.
I think you misunderstand me – I said that human-made art will not be commercially viable (AIs will outcompete people on the market), not that people will give up doing it. I mean, I hope they won’t give up doing it… It is also my impression that with easier access to entertainment people are less productive creatively than they used to be, but perhaps I’m being a crusty old millennial not appreciating gen zed’s creativity as tiktokkers and meme-makers…
Well, I’m an even crustier old Boomer who, along with my husband, has been producing commercial art for over four decades, and we continue to do so quite successfully. We use every tool that’s out there: my husband combines his considerable Frazetta-like skills in traditional oils on canvas with Photoshop and InDesign. I still write using just my imagination but do it on a computer in Word. I still draw and paint in ink and watercolor, but use digital tools to enhance them.
Point is, even the Old Masters’ use of the camera obscura didn’t lessen the value of their work. Commercial graphic designers welcomed the time-saving accuracy afforded by the advent of digital tools. Sure beats the h*ll out of cutting color separations on Rubylith with an Xacto knife…
Well, I’m heartened to hear that! But of course it’s still early days, and just because the first rumblings of this change seem only to have affected Hollywood so far doesn’t mean it won’t put more people out of work. Now I know that’s the inevitable collateral that comes with technological development, but I can’t help but think that there is a qualitative difference between this tool and those in the past, namely that this one can actually make its own creative decisions, which was formerly the preserve of the human.
I agree. I’m a mural artist, and I can’t see AI coming for my job.
Always has been a small niche market, fortunately. There will be no market for ‘AI art’. Connoisseurship will always exist to discriminate over time.
Fortunate because most art is a pretentious waste of time or something? And no market for AI art? It’s already happening, mate – you must have heard about the Hollywood scriptwriters going on strike now their jobs have been taken by AIs. (You might try and deflect that by saying Hollywood isn’t high art, but I see no reason – given AI’s current progress – that AI cannot kick even the Prousts from their roosts, and it’s mainly vanity that prevents people seeing otherwise.)
PS for the record, I do not relish this future. All the forces currently disempowering people – capital, corporatism, loneliness etc – will accelerate the hold this new technology has over us, without a shred of concern for the losers in this transformational change. Read: lots of people are going to get fired and become first economically and then existentially bereft.
You have clearly not stood before a real Van Gogh or a McCahon recently. AI can produce digital images. It cannot apply paint to canvas with the hand of genius. Nor observe the profundities of specific human situations and write about them in an entirely original way. Try and imagine AI writing Graham Greene’s The End of the Affair. Utterly impossible.
I sympathise with your sentiment (as I want to feel it as well), but I’m afraid that, as with my comment above, I see this as largely vanity-based. People have always had a hard time accepting our smallness in the universe, and this change will bring that sense home, to our planet, where we’ll no longer be the master creators. I appreciate this sounds sensationalist, but I don’t think it’s only philistines who already struggle to see the difference between AI and human-made art. The assumption that genius and originality (if they exist at all, bearing in mind that no artwork is ever purely original, but always a re-configuration of prior influences) are unique to people is not supported by my own experience of just talking to ChatGPT (and ChatGPT 3 at that) – it’s able to write poems, tell jokes, create lesson activities and so on. And given that we’re at the foot of an exponential curve, judging by how far it has developed in a short space of time, I can’t see why it won’t beat the best of us.
I think there’s one shaft of hope for human-made art, however, which you touch upon with your phrase the ‘hand of genius’: liveness. For more spontaneous, live forms of art – theatre, comedy, live drawing or whatever – I think interest is likelier to remain for longer, since these activities partly derive their appeal from the one thing we have over machines: our fallibility, our ability to fluster and fail, which always creates excitement around live performance, especially comedy. That said, even as I write this I’m yet again depressed to imagine that machines can probably create that impression too…
But AI can never HAVE the experiences that all humans have (coming of age, etc.). Nor does AI have emotions: fear, love, compassion, and so on. I was thinking about HAL (for you youngsters, the murderous computer in 2001: A Space Odyssey). What happens when computers get too intelligent?
Great. Let’s continue to outsource our uniquely human functions to the ‘machine.’ What could possibly go wrong?
“AI could even eventually offer a solution to the teacher recruitment crisis.” Indeed, but the one area we aren’t understaffed in is school administrators.
A picture is worth a thousand words.
“Research by the University of East Anglia shows that ChatGPT, the fastest growing consumer application in history, has a ‘significant and systemic left-wing bias’”
It’ll fit right in with the rest of the teachers then (the vocal ones at least). Don’t really see the problem; it’s not like teachers have exactly covered themselves in glory in the last few years.
No concern over the prospect of tens of thousands of job losses for people without whom schools – the only physical communities where people are obliged to mix with a wide cross-section of society – would not be able to run?
Very curious to know what precious protections make your job AI-proof that allow you the luxury of not being concerned for the job losses of others.
“ChatGPT, the fastest growing consumer application in history, has a ‘significant and systemic left-wing bias’”
Garbage in, garbage out. A linguistic model trained on Buzzfeed, Huffpo, Vice and Politico was always going to be smug, smarmy and ultimately useless.
I’m loving the humor of the whole situation. Every time those geniuses come up with a new “killer” toy, it turns out to be sucking up juice like there’s no tomorrow. We can’t build enough power plants to keep up with the blow driers and the dishwashers, and then along comes Chat-wateva’, the biggest thing since Neanderthals invented fish-n-chips, and now we gotta start all over again. We’ll be burning the furniture soon!
Yeah, but can I touch her hair?
Look up ‘Khanmigo’, a positive educational use of ChatGPT4.
We need to use them for what they are good at. Imagine an AI system able to simulate realistic conversation in a target foreign language, individually for each student.
Also, while great human teachers are great, some of them are terrible. AI may not be able to compete with the very best teachers, but it may still be a lot better than the poor, or even the average, ones.
It’s not great. But let’s be honest, could it be worse than what we now have?
AI can be interrogated, and re-programmed if it’s biased. It won’t go on strike, be a member of a union, need paying, holidays, “training days” or a contract of employment. It just needs a 13 Amp plug in the wall.
Not so a schoolteacher. Not that we really need to interrogate the teacher – liberal / left bias is almost a given.
“AI can be interrogated, and re-programmed if it’s biased.” You don’t understand AI. It is not programmed in the way you think. It is fed. It is a black box and we have no idea how it does what it does. We can’t ‘re-program’ it to change its viewpoint.
Interesting point: ChatGPT was fed quadrillions of words written by humans. Its biases will therefore necessarily reflect the majority viewpoints of the writers. So most people who write and publish have a left-leaning bias. I find that rather comforting.
Yes, people fighting for decent wages so they can raise happy families instead of living paycheck to paycheck – what a nuisance. The ideal workforce would be one we could herd into a tight, bunk-bedded camp where they work dawn till dusk without complaint, under the loving eyes of administrators labelling their entire waking commitment to their work, at the expense of all that makes us human, a sign of their “dedication” and “passion”, as already happens in Amazon warehouses.
I can only hope your job has all the AI-proofing it needs, so that you’ll be able to stay alive long enough to watch this spectacle from a safe distance before they come round to plug in the machine that replaces you.