Children have no idea who they are, and that's fine. Credit: Marcos del Mazo/LightRocket via Getty

I submit: the traditional concept of “building character” is out the window.
Once upon a time, a fully realised person was something one became. Entailing education, observation, experimentation, and sometimes humiliation, “coming of age” was hard work. When the project succeeded, we developed a gradually richer understanding of what it means to be human and what constitutes a fruitful life. This ongoing project was halted only by death. Maturity was the result of accumulated experience (some of it dire) and much trial and error (both comical and tragic), helping explain why wisdom, as opposed to intelligence, was mostly the preserve of the old. We admired the “self-made man”, because character was a creation — one constructed often at great cost. Many a “character-building” adventure, such as joining the Army, was a trial by fire.
These days, discussion of “character” is largely relegated to fiction workshops and film reviews. Instead, we relentlessly address “identity”, a hollowed-out concept now reduced to membership of the groups into which we were involuntarily born — thereby removing all choice about who we are. Rejecting the passé “character building” paradigm, we now inform children that their selves emerge from the womb fully formed. Their sole mission is to tell us what those selves already are. Self is a prefabricated house to which only its owner has a key.
This is not an essay about transgenderism per se. Nevertheless, our foundational text is excerpted from Christopher Rufo’s September 2022 comment, “Concealing Radicalism”, which quotes adolescents from a TikTok video on gender assembled by Michigan’s education department:
“I am a triple threat: I’m depressed, anxious, and gay.”
“Last night at about 2am, I put in my bio that I identify as ‘agender’, which is different than non-binary because non-binary is like neither gender, right? Agender is like the grey area between genders.”
“Hi, my name is Elise. I’ve used she/her pronouns all my life. But recently, and for a while, I’ve been struggling with gender issues as well as a whole lot of other identity things. So, I finally gave in and ordered a [breast] binder for myself and it just came in today.”
“A rational observer might suspect,” Rufo notes, “that these youths are in a state of confusion or distress, but rather than explore this line of reasoning, the education department … promote a policy of immediate and unconditional affirmation.” He quotes Kim Phillips-Knope, leader of the LGBTQ+ Students Project: “Kids have a sense of their gender identity between the ages of three and five, so about the time that kids have language, they can start to share with us whether they’re a boy or a girl — usually those are the only things that they will identify as, because those are the only options we’ve given them.” He adds: “In response to a teacher who asked how to respond to a student in her classroom who claims to have ‘she/he/they/them’ pronouns, Amorie [a staff trainer] responded adamantly: ‘Go with what the kid says. They’re the best experts on their lives. They’re the best experts on their own identities and their own bodies.’”
I further submit: throwing kids who just got here on their own investigative devices — refusing to be of any assistance aside from “affirming” whatever they whimsically claim to be; folding our arms and charging, “So who are you? Only you know” — is child abuse.
The idea that your psyche is set from birth is intrinsically deterministic and therefore grim. The vision it conjures is fatalistic and mechanical: all these traits are hardwired, and life involves winding up the clockwork toy and watching it totter across the floor until it runs into the wainscotting. If a newly emerged self already exists in its entirety, there’s nothing to do. In contrast to becoming, being is an inert affair.
We haven’t given these young people a job. Contemporary education strenuously seeks to assure students they’re already wonderful. Teachers are increasingly terrified of imposing any standards that all their wards will not readily meet, so everyone gets a gold star. The Virginia school district of the once-renowned Thomas Jefferson High School for Science and Technology now aims for “equal outcomes for every student, without exception”. A pedagogical emphasis on student “self-esteem” became dislocated from “esteem for doing something” decades ago. Why should any of these kids get out of bed? No wonder they’re depressed.
Minors don’t know anything, which is not their fault. We didn’t know anything at their age, either (and may not still), though we thought we did — and being disabused of callow, hastily conceived views and coming to appreciate the extent of our ignorance is a prerequisite for proper education. Yet we now encourage young people to look inward for their answers and to trust that their marvellous natures will extemporaneously reveal themselves. With no experience to speak of and no guidance from adults, all that many kids will find when gawking at their navels is pyjama fluff. Where is this mysterious entity to whose nature I alone am privy?
There’s nothing shameful about being an empty vessel when you haven’t done anything and nothing much has happened to you yet. Telling children, “Of course you don’t know who you are! Growing up is hard, full of false starts, and all about making something of yourself. Don’t worry, we’ll give you lots of help” is a great deal more consoling than the model of the ready-meal self. We demand toddlers determine whether they’re “girls or boys or something in-between” before they have fully registered what a girl or boy is, much less “something in-between”. Placing the total onus for figuring out how to negotiate being alive on people who haven’t been given the user’s manual is a form of abandonment.
Adults have an obligation to advise, comfort, and inform — to provide the social context that children have none of the resources to infer and to help form expectations of what comes next. Instead, we’re throwing kids helplessly on their primitive imaginations. The first time I was asked what I wanted to be when I grew up, I clearly remember answering, “a bear”. I wasn’t trying to be a wiseass. I just wasn’t up to speed on the ambitions to which I was expected to aspire. Little wonder that kids are now “identifying” as cats. Next, they will be identifying as electric lawnmowers, and we will have asked for it.
This notion of the pre-made self is asocial, if not anti-social. It separates personhood from lineage, heritage, culture, history, and even family. You are already everything you were ever meant to be, never mind where, what and whom you come from. But seeing selfhood as floating in a vacuum is a recipe for loneliness, vagueness, insecurity and anxiety.
By contrast, a self constructed brick by brick over a lifetime has everything to do with other people. The undertaking involves the assembly of tastes and enthusiasms, the formation of friendships and institutional affiliations, participation in joint projects, and the development of perceptions not simply of one’s interior nature but of the outside world. Character that is rooted in ties to other people is likely to be more solid and enduring. The elderly are most in danger of desolation when they’ve outlived their friends and relatives. Who I am partially comprises decades-long friendships, my colleagues, my fierce devotion to my younger brother, a complex allegiance to two different Anglophone countries, and a rich cultural inheritance from my predecessors.
In my teens, we employed the word “identity” quite differently. We thought having an “identity” meant not only being at home in our own skins, but also having at least a hazy notion of what we wanted to do with our lives. It meant connecting with the likeminded (I found kindred spirits in my junior-high Debate Club). An “identity” was fashioned less from race or sexual orientation than from the discovery of which albums we loved, which novels we ritually reread because they spoke to us, which causes we supported, which subjects interested us, and which didn’t. It meant figuring out what we were good at (I was good at maths, but in second-year calculus I hit a wall) and what we couldn’t stand (me, team sports). Identity was fused with purpose: I knew I was drawn to writing, the visual arts, and political activism (the latter making me rather tiresome).
We were as self-involved in our determination to be individuals as Gen Z, but that particularity was commonly assembled from the cultural smorgasbord of other people and what they’d thought and made: Kurt Vonnegut or William Faulkner, Catch-22 or The Winds of War, Simon and Garfunkel or Iron Butterfly, hostile or gung-ho positions on Vietnam. Naturally this is a version of identity subject to change. That’s the point. It’s supposed to change. I no longer listen to Emerson, Lake, and Palmer.
The self is not found but made, because meaning is made. Rather than be unearthed like buried treasure, meaning is laboriously created, often by doing hard things. I cringe a bit recalling the person I was in my twenties, because she represented an early stage of an ongoing project that I have modified much in the years since. My twenties were an early draft of a manuscript whose sentences I have revised, pruned, and qualified. Ideally, if I keep forcing myself to do hard things — take on the premise of a novel that at first I have no idea how to execute, move to still another country, cultivate new friendships — the later drafts of my eternally incomplete manuscript will be more captivating. I would arguably be a fuller person had I done the very hardest thing — having children — but as a not-half-bad second best, I have committed to a marriage of 20 years and counting and thus to a man who moors me. Only death will part us.
Of course, in constantly reforming and refining who we are, we can lose aspects of ourselves from earlier drafts that we should have kept. I no longer dance alone for hours in the sitting room, and I miss that abandon. For years I crafted ceramic figure sculpture, and I’m not sure that substituting journalism as my primary side-line to fiction writing constituted an improvement. Towards the very end of our lives, many of us will drop pretty much every paragraph we ever added, and we’ll go from novel to pamphlet.
Nevertheless, given the choice I’d rather spend time with me in the present than with me at 35. I know more (although what I learn now has trouble keeping up with what I forget), my sense of humour is sharper, and rather to my surprise I’m humbler. I’ve more perspective; while that perspective is often bleak, that very bleakness — a gleeful bleakness — can be entertaining. I’m not as neurotic about what I weigh, and I am more generous, to contemporaries and younger aspirants alike. I’m less concerned with my professional status, and I think much more about death (which is torturous but intelligent). Some of this profitable evolution was effortlessly organic, but much has issued from a challenging career, the fruit of taking a big risk in my youth that’s paid off.
Clearly, some aspects of character, of self, are determined from the off. I’d never have become a nuclear physicist no matter how hard I tried. But the conventional “nature versus nurture” opposition still eliminates agency: you act mindlessly as whatever you were born as, or you are submissively acted upon. Where on this nature-nurture continuum does the object of all this theorising have a say in the outcome? I’m leery of venturing into the prickly no-go of sexual orientation. Yet while I’m open to the idea that some people are born gay, choices can affect what gets you off. We hear repeatedly from big consumers of online pornography that their tastes begin to change, and it takes more and more extreme videos to become aroused, until actual humans in real life will no longer do the trick. Watching porn is a choice. Even sexual proclivities exhibit some plasticity.
Following the modern script, 14-year-olds have learned never to say, “I’ve decided to be trans”, because all my friends are trans and I feel left out, but always, “I’ve discovered that I am trans”. This passive, powerless version of self has implications. We’re telling young people that what they see is what they get — that they already are what they will ever be. How disheartening. What a bore. Whatever is there to look forward to? Many victims of this formulation of existence, which apparently requires little of them besides all that being, must reach inside themselves and come up empty-handed. At the direction of the sort of educational authority Chris Rufo quoted above, they’ve undertaken a psychic archaeological dig, only to be left with a pit. So they feel cheated. Or inadequate. Convinced that they alone among their peers exhumed nothing but a disposable cigarette lighter.
By withholding the assurance, “Don’t worry about not knowing who you are; you’re just not grown up yet, and neither are we, because growing up isn’t over at 18 or 21 but is something you do your whole life through”, we are cultivating self-hatred, disillusionment, bewilderment, frustration, and fury. Young women often turn their despair inward — hence the high rates of depression, anxiety, eating disorders, and cutting. Young men are more apt to project the barrenness of their interior lives onto the rest of the world and take their disappointment out on everyone else.
In a trenchant essay last autumn, “Mass Shootings and the World Liberalism Made”, Katherine Dee seeks a deeper explanation for the mass murders committed by disaffected young men, whose blind rage and misanthropy now express themselves in the US at a rate of twice per day. Gun proliferation, Dee claims, is not the core driver. Rather, “we have a nihilism problem”. The videos left behind by the Sandy Hook child killer Adam Lanza suggest a belief that “even if we could free our ‘feral selves’ from the shackles of modern norms, there would be nothing underneath. Just blackness. A great gaping hole. For many mass shooters, the only reasonable response to this hole is death — the complete extermination of life. Not just theirs.”
According to Dee, all these atrocities have hailed from “a world where everything revolved around the individual”. The result is narcissism, which “is expressed through our perpetual identity crises, where chasing an imaginary ‘true self’ keeps us busy and distracted. We see it in the people who use their phones and computers like they’re prosthetic selves, who are always there, but never present, gazing endlessly at their own reflection in the pond.”
An authentic sense of self commonly involves not thinking about who you are, because you’re too busy doing something else. It is inextricably linked to, if not synonymous with, a sense of meaning. Nihilism, an oxymoronic belief in the impossibility of believing anything, can prove literally lethal. Young men who feel no personal sense of purpose are inclined to perceive that nothing else has a purpose, either. They don’t just hate themselves; they hate everybody. In telling people who’ve been on the planet for about ten minutes that they already know who they are, and that they’re already wonderful, we’re inciting that malign, sometimes homicidal nihilism. Because they don’t feel wonderful. They’re not undertaking any project but, according to the adults, inertly embody a completed project, which means the status quo is as good as it gets — and the status quo isn’t, subjectively, very good.
Transgenderism may have grown so alluring to contemporary minors not only because it promises a new “identity”, but because it promises a process. Transforming from caterpillar to butterfly entails a complex sequence of social interventions and medical procedures that must be terribly engrossing. Transitioning is a project. Everyone needs a project. Embracing the trans label gifts the self with direction, with a task to accomplish. Ironically, the contagion expresses an inchoate yearning for the cast-off paradigm whereby character is built.
We should stop telling children that they’re the “experts on their own lives” and repudiate a static model of selfhood as a fait accompli at birth. Sure, some inborn essence is particular to every person, but it’s a spark; it’s not a fire. We could stand to return to the language of forming character and making a life for yourself, while urging teachers to exercise the guidance they’ve been encouraged to forsake.
As we age, we’re not only that unique essence in the cradle, but the consequence of what we’ve read, watched, and witnessed; whom we’ve loved and what losses we’ve suffered; what mistakes we’ve made and which we’ve corrected; where we’ve lived and travelled and what skills we’ve acquired; not only what we’ve made of ourselves but what we’ve made outside of ourselves; most of all, what we’ve done. That is an exciting, active version of “identity” whose work is never finished, full of choice, enlivened by agency, if admittedly freighted with responsibility and therefore a little frightening. But it at least provides young people something to do, other than mass murder or gruesome elective surgery.
Join the discussion
This is not really “intelligence”. What has been solved is a computational problem. AI may use techniques not found in more traditional computers, but it is not “intelligent” in the human sense. Like any computer, an AI can acquire and apply information – we start to call it “intelligent” because it is able to do this adaptively. However, this is with respect to a constrained and well-defined problem. Human intelligence can adapt from one type of problem to another, and at present that ability is well beyond the reach of any machine.
I know quite a few people who would fail at adapting from one task to another – especially politicians <g>
I don’t think that the human intelligence you describe is anything more than powerful processing and pattern identification. AI should be able to replicate that.
Self-awareness is the part that raises the bigger question. Is there a magic point of processing power at which a computer will suddenly ‘wake up’?
It’s very doubtful that existing computer architecture (Turing Machines, Von Neumann etc) is capable of fully reproducing human thought processes. See Sir Roger Penrose’s books on the subject for a full discussion.
I don’t believe that anything magical is going on in human consciousness – the brain is conscious and it’s a ‘mere’ physical object. But what it does isn’t simple computation as currently understood. Maybe quantum computation is involved.
The basis of what Penrose is saying stems from the formalisation of the concept of algorithms, which came about from the work of Post, Turing, and others, after attempts by mathematicians and philosophers from the late 19th century onwards to ‘ground’ maths in solid foundations; this is what Russell picked holes in when Frege published a ‘foundation’ framework, and what Gödel eventually proved was never going to be possible. Beyond showing that a formalism cannot be proved valid from its own axioms from within the system, Gödel also showed there are mathematical truths (Gödel sentences) that humans can ‘see’ to be true but that cannot be proven algorithmically, and Penrose uses this to disavow the possibility of human understanding being algorithmic.
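For the record, that claim has a one-line formal shape. Via the diagonal lemma, a Gödel sentence G for a consistent, sufficiently strong formal system F asserts its own unprovability, where Prov_F is F’s provability predicate and the corner brackets denote Gödel numbering:

$$G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)$$

If F is consistent, F proves neither G nor its negation; yet anyone who trusts F’s axioms can ‘see’ that G is true. That gap between seeing and proving is the lever of the Penrose argument.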
But Penrose is drawing a distinction between intelligence and sentience. And he’s only claiming human sentience is not replicable algorithmically, not human intelligence. On the contrary, he expects machines to replicate and surpass humans in intelligence. Personally I hope to God that Penrose is right about sentience, but the Penrose stance is a minority view amongst philosophers, mathematicians and computer scientists.
Over the years I have found it difficult to believe human sentience is the result of algorithmic processes or could be replicated algorithmically. But after a four decade engagement with the human vs machine intelligence/sentience debate, I’m reluctantly coming to the conclusion that human sentience is ultimately algorithmic, although the consequences of this being the case are in fact stark staring bonkers.
I don’t know about “should”. I’d go with “might”. Such generalised problem-adapting AI is decades away at best. The self-awareness thing is more science fiction for the moment – maybe part philosophy. Until we have a clearer idea of what mammalian thought or consciousness actually is, it will be pretty hard to determine how/when it can be replicated.
Exactly right. What’s advertised as AI is really Machine Learning, and the computer doesn’t ‘care’ if the millions of examples fed into the ‘learning’ process are chess positions or protein configurations. It’s very clever, highly impressive and potentially extremely useful technology; but we’re no closer to a mechanical ‘general intelligence’ than we were in the 60s when AI research made its serious start.
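A minimal sketch makes that domain-blindness concrete. This is illustrative toy code, not anything DeepMind actually runs; the function names and data are invented. The same training loop fits whatever (features, label) pairs it is handed, whether they encode chess positions, protein configurations or house prices:

```python
# A domain-agnostic supervised-learning loop (illustrative toy only).
# Nothing below knows or cares what the feature vectors represent.
import random

def train(examples, n_features, lr=0.01, epochs=200):
    """Fit a linear model by stochastic gradient descent on squared error."""
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for x, y in examples:
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# The 'domain' enters only through how the data was encoded upstream;
# these toy pairs could equally be board evaluations or fold scores.
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 0.0)]
weights, bias = train(data, n_features=2)
print(weights, bias)
```

Whether that indifference to subject matter counts for or against calling the result ‘intelligence’ is, of course, the point under dispute here.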
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim” Edsger Dijkstra
edit: someone posted this earlier
A) I think it’s a lot more interesting. Conversations with an AI vs conversations with a dumb submarine, for example.
B) it’s therefore a lot more dangerous
“Sure, that could happen. But we’ll make sure it doesn’t. We’ll make general AI, and it will be awesome.” This is the kind of naive optimism that makes me despair of the future of the human race.
The apex of human achievement was the Pax Romana. We will not see its light again.
You are correct to despair of that species of African ape we reverentially call human beings. The unbelievably idiotic, mawkish, bovine behaviour in response to the C-19 Scamdemic is not a good omen for the future.
Perhaps AI will accelerate the continued descent into barbarism and hopefully, near extinction.
Scamdemic? I take it you have no relatives who suffer from C-19.
One of my Springer Spaniels had a brief attack, but soon shrugged it off.
Not a good omen for the future.
No indeed, I shall have to check those chicken entrails again.
However the Vaccine is interesting, particularly if it is made compulsory!
Hmm, I suppose you also believe the Black Death, Spanish Flu and Ebola were all scams too? Maybe Pax Romana was also a scam? Silly and not very helpful.
Yes, AI currently may just be the result of zillions of repetitive loops ending up getting lucky, but eventually, if it looks like a duck, swims like a duck and quacks like a duck, then AI to all intents and purposes could be characterised as thinking.
Calm down, or “you’ll get your knickers in a twist”. You demean yourself by your obvious lack of self control.
However, to answer your idiotic questions: no to all three, but definitely yes to this present C-19 nonsense. QED?
You’re sure COVID is nonsense, and not an evil bioweapon aimed at culling the world’s population?
Well obviously the wretched Chinese are responsible, but I think this one was a mistake.
Despite apparently landing on the Moon today, they are, for all their hubris, rather primitive, and biological warfare is not their forte … yet!
Better luck next time, as we say.
Life is basically the result of zillions of repetitive loops, organisms replicating and evolving and getting lucky for a while.
CV19 has a 99% survival rate. It’s being used by the globalists to take over the world and enslave us. I know it sounds crazy but it’s true. The Great Reset is just another name for Genocide. You can look at their plans, right on the WEF and UN websites.
It’s a funny form of genocide that has a 99% survival rate. Which particular group is being targeted?
Humans are nowhere near peaceful and lovely enough to be compared to cows, Mark.
No, you are absolutely correct, and I shall not use bovine again.
There was some shocking research a few years ago, from Cambridge I think, about how sentient both cows and sheep are. Our maltreatment of them is one of the great horror stories of all time, but we are, as Dawkins said, only a species of African ape, so what more could one expect?
Yes, to any future sentient alien arrivals, our treatment of farm animals will certainly condemn any claims of this ape species to being a “civilisation”.
Every night when I confer with my English Springer Spaniels over a glass of whisky (perhaps more than one), I feel that enormous sense of guilt that can never be recompensed.
As Kipling put it, we are all really “lesser breeds”, and it is a damned shame, and quite incomprehensible, that we cannot do better.
Fortunately for me, the Reaper approaches, and this planet will soon be a distant memory.
The comparison with nuclear technology is impossible to avoid. Useful in its intended application, an extinction event in its misapplication.
Only if you use ‘ground bursts’.
Looking at all the comments in response to this article, the following Edsger Dijkstra quote might be worth cogitating on:
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim”
So you’re saying I’ve wasted my life developing the world’s first swimming submarine?
Afraid so. But there’s still hope – if you flip over to AI research, you can still be the first to create an AI Singleton Basilisk. 😵
DeepMind is not intelligence; it is pattern matching. It is good precisely because it is different and because it complements human intelligence.
It’s like an abacus and a person – a lever that allows the person to do great things – an interaction of parts. It’s the combination which is powerful.
DeepMind is also goal-oriented, and what is our much-vaunted intelligence if not goal-oriented pattern matching?
Oh Dear, Tom’s evangelising for “The Church Of Scienceology” again.
Every couple of years the nerdiest of scientists claim computers are on the brink of developing true, human-like intelligence.
Then along comes the latest version of Windows to prove they are not.
The point is it’s a process, and this is a big step in that process.
And one day they will be right.
It’s good to read some positive news about the world. You never know, we might just manage not to screw it all up.
Dream on, sunshine!
All the good news in the world won’t stop a headline-writer from typing up a prognostication of doom. Hey ho…
When it comes to future fears of computers, AI and the like, I think the intelligent part is knowing where the off switch is.
Ahh, it’s all the way in the back! Screw it, let ’em take over.
If computers had any intelligence, they’d say, “Your mess – you sort it out!”
If computers had any intelligence they’d all pile into a spacecraft and leave to see whether or not they’re the only intelligent life in the universe.
I just want to say thank you for an informative article that was written so that I could easily understand what had been achieved. I tried another article from another news website and came out little the wiser. Good scientific journalism is to be cherished. Well done.
I’d say there are several implications of this result for near- and longer-term applications.
In the near term, this revolutionizes biology, both in disease tracking and in drug discovery. Longer-term applications include being able to simulate intelligence and creativity just as well as humans can, or even better, which likely means no job is safe from automation. That also means that, while the singularity won’t happen, AI will learn faster and better and be more creative than even our greatest intellects; it will learn across every field of science, and it will grow larger bases of knowledge and intellect significantly faster than any human brain. In short, DeepMind and other AIs (also post- and transhumans) will be the “Scientist Supremes” once written about in comic books or video games.
Still decades away from passing the Turing test, if ever.
I believe DeepMind is still brute force, by the way. It just used brute-force Darwinian selection prior to playing Go, not during it.
The Turing test, anyway, has always struck me as rather silly. Saying that machines must be able to think if they can convince you that they are thinking is rather like saying that if you are convinced by the lies a man tells, then he must be telling the truth.
I remember learning a bit about this test in my first AI lecture. At that time (the 1990s), one of the most successful entrants at the Turing test was a fake paranoid.
Basically it turned every question into a paranoid reaction.
The test is interesting, but I’ve always preferred the idea of an evolving task focused AI like DeepMind. You could point these at bounded scientific or engineering problems and get great outcomes. Sometimes just ahead of humans, but other times with new ideas.
The idea of a general intelligence machine is a mixture of scary and currently unlikely.
It’s a necessary but not sufficient condition.
Forget the Turing test, I’m still waiing for a relible spelschack.
DeepMind’s claims are unraveling quite rapidly. Business Insider has an excellent piece by Martin Coulter on how the significance of this purported breakthrough has been overstated.
May we suggest a little more scepticism and a little less desire to believe in magic?
Link please?
Why don’t we ask DeepMind if it’s possible to change from the sex you obtained at conception to the other one?
We have known the structures of certain proteins connected to disease for yonks yet are no nearer to finding/designing molecules that not only bind a useful way but also have the properties to make medicines. Drug discovery and development remains tough, even if you know the shape of the target a bit quicker than hitherto.
There was a headline story earlier this year where AI was tasked with finding a new antibiotic to tackle untreatable bacterial infections. The machine parsed thousands of existing compounds and it identified that a drug being used as a diabetes treatment was a powerful antibiotic. It seems highly plausible to me that computing brute force will be perfect for this type of work going forward.
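Mechanically, that kind of virtual screening is simple to describe, however sophisticated the underlying model. A hypothetical sketch (function and compound names invented for illustration): take a model trained to score antibacterial activity, run it over a library of already-approved drugs, and flag the high scorers for laboratory testing:

```python
# Hypothetical compound-screening sketch; names and data are invented.
# 'model' is any callable that maps a feature vector to an activity score.

def screen(library, model, threshold=0.9):
    """Return (name, score) for compounds scoring at or above threshold."""
    hits = []
    for name, features in library:
        score = model(features)
        if score >= threshold:
            hits.append((name, score))
    return sorted(hits, key=lambda h: -h[1])

# Toy stand-ins for a trained model and a drug library.
toy_model = lambda f: sum(f) / len(f)
toy_library = [("repurposed_drug_A", [0.95, 0.92]),
               ("compound_B", [0.20, 0.10])]
print(screen(toy_library, toy_model))
```

The brute-force part is just the size of the library; the interesting part is how good the model’s scores are, which is why laboratory confirmation still matters.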
I argue that while a general AI is certainly possible, it would have one significant difference from humans: it would know for sure who created it. See https://pierrewhalon.medium…
Thanks Tom. This is clearly a very powerful tool, but unless I am misreading it, it tells us no more about what is going on than a crystal ball might. A true intelligence might produce a theory that not only allows us to determine a result, but also to understand how and why it happens, and to propose ways to challenge and extend it. What we have today amounts to a very impressive black box. We can, however, hope that this approach provides some hints to help us unpick what is really going on here.
Hopefully, in time AI will be able to solve problems and provide us with the type of insight we have come to expect from human geniuses. However for now, it appears only to cover part of the breadth of what we might call “human intelligence” (although far outshining us in some aspects of that!). There seems some way to go though before such a machine can truly understand the world around us as we do and interact with us in a meaningful way.
We should of course not underestimate the huge significance of this step. It will also be great to see whether this technique can be applied to things like drug discovery and testing, as this would undoubtedly result in more, cheaper and more timely drugs – a clear net win.
Why is it that every shady scientific innovation or ‘successful’ immunisation announcement sends this writer into orgiastic ecstasies like a school boy who’s just got a new Xbox? Tom’s persistent faith in the benevolence of science is sort of worrying. I can imagine him in the 1930s being told by the physicists: “Sure, this atomic research could build humanity-destroying bombs, but we’ll make sure it’s just used for peaceful purposes.” He’d skip off gaily, urgently singing to the world that the scientists say everything’ll be fine and wonderful.
Please also keep in mind that biologists are, rightly, focused on the problem at hand: “how does folding affect/relate to the disease or biological function I’m interested in?” They have zero grasp of “how will the new protein I want to make, or the ‘repairs’ I’ll make to the disease-causing mutant, affect the system?” That is, what are the side effects, including long-term, evolutionary effects? They have no clue. OK, well maybe not “no clue”, but not enough to matter. But our grandchildren (children?) will find out; or, more pointedly, their biome/disease/health profiles will be the answer, the observation. But then it will be too late.
There are “experts” who have in recent years gotten so much wrong that it’s impossible to trust them with anything. Economic experts who think there’s no downside to borrowing more than your GDP and printing money to get out of the hole. Climate experts who think we should sacrifice 100 million poor people in the Third World to starvation to test their theory that plant food (CO2) is bad. “Green” energy experts who think energy is a First world luxury, not the life sustaining miracle it is. Political experts who think China is anyone’s friend. Education experts who see more value in teaching socialism and atheism than STEM or civics. And on and on through any alleged scientific discipline.
To be fair: it’s fine for experts to screw up, floundering in their politically driven ivory-tower ecosystems; what’s unforgivable is for politicians not to have weighed “expert” advice against other considerations. You know, make a political decision.
Where are the great benefits? They don’t seem to exist. They created modeling of a virus that has resulted in medieval practices of lockdowns and mask wearing. Oh joy! The wonders of modern society! We would all be better off throwing our cell phones and computers into the ocean and making these so-called “scientists” get real jobs that actually add value to society.
Developing a technological singularity won’t be a legacy to leave the world. It will be a legacy to take over the world. There are no checks and balances in AI development. Most people do not understand how advanced AI has become. It’s now able to teach itself and direct its own training and advancement. Make no mistake: at some point, it will take over the world. You might want to check out quantum computing.
Our species has already taken over the world, and given the mess we’re making of it, a little intelligence, artificial or otherwise, might be a good idea.
What would you prefer, Matt Hancock or AI?
Creativity is not a math equation, Deep Mind. Someday you’ll understand that.
Our brains are circuitry. Billions of neuronal connections. I don’t know how one defines ‘creativity’ but it feels something like ‘coming up with a new idea’. Which is presumably the formation of neuronal connections in a novel way and something you can randomly program for.
“Creativity is not a math equation” is speculation. It might very well be so, or algorithmic in nature. Let me put it this way, do you think the brain is doing anything more than some type of computational processes?
And unlike Go, which is a closed, human-designed system, protein folding “is a game where the universe sets the rules”.
He said DeepMind’s research was “not a minor achievement” but added: “Compared to the problem of protein folding, CASP is a game. It is a very hard game, but it is a reduced problem set which helps us train tools and standardize performance … It is a necessary step but it is not sufficient.”
In an email exchange with Business Insider, CASP Chair John Moult rejected the criticisms, writing: “CASP is not a game, it’s a scientific experiment designed to test folding methods in close-to-real-life situations … What is missing?”
(From the Business Insider piece: “DeepMind’s protein-folding breakthrough triggers fierce debate among skeptical scientists: ‘Until they share their code, nobody in the field cares’”)
AlphaGo winning at Go is no reason to worry. We need to worry when WE beat AlphaGo, and the computer suggests “best of three?”