
The cynical hysteria around AI

Why are tech billionaires pretending to be scared?

It's not the end of the world (Terminator)

June 2, 2023   5 mins

When it comes to whipping up AI hysteria, there is a tried-and-tested algorithm — or at least a formula. First, find an inventor or “entrepreneur” behind some “ground-breaking” AI technology. Then, get them to say how “dangerous” and “risky” their software is. Bonus points if you get them to do so in an open letter signed by dozens of fellow “distinguished experts”.

The gold standard for this approach appeared to be set in March, when Elon Musk, Apple co-founder Steve Wozniak and 1,800 concerned researchers signed a letter calling for AI development to be paused. This week, however, 350 scientists — including Geoffrey Hinton, whose pioneering work on neural networks underpins systems such as ChatGPT, and Demis Hassabis, co-founder of Google DeepMind — decided to up the ante. Mitigating the risk of extinction from AI, they warned, “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

Does this mean you should add “death by ChatGPT” to your list of existential threats to humanity? Not quite. Although a number of prominent researchers are sounding the alarm, many others are sceptical that AI in its current state is anywhere close to human-like capabilities, let alone superhuman ones. We need to “calm down already”, says robotics expert Rodney Brooks, while Yann LeCun — who shared the Turing Award with Hinton — believes “those systems do not have anywhere close to human level intelligence”.

That isn’t to say that these machine-learning programs aren’t smart. Today’s infamous interfaces are capable of producing new material in response to prompts from users. The most popular — Google’s Bard and OpenAI’s ChatGPT — achieve this by using “large language models” (LLMs), which are trained on enormous amounts of human-generated text, much of it freely available on the Internet. By absorbing more examples than one human could read in a lifetime, refined and guided by human feedback, these generative programs produce highly plausible, human-like text responses.

This training enables them to provide useful answers to factual questions. But they can also produce false answers, fabricated background material and entertaining genre-crossing inventions. That doesn’t necessarily make them any less threatening: it’s this human-like plausibility that leads us to ascribe to LLMs human-like qualities that they do not possess.

Deprived of a human world — or indeed any meaningful embodiment through which to interact with the world — LLMs also lack any foundation for understanding language in a humanlike sense. Their internal model is a many-dimensional map of probabilities, showing which word is more or less likely to follow the ones before it. When, for example, ChatGPT answers a question about the location of Paris, it relies not on any direct experience of the world, but on accumulated data produced by humans.

What else do LLMs lack that human minds can claim? Perhaps most importantly, they lack the intention to be truthful. In fact, they lack intention at all. Humans use language for purposes, with intention, as part of games between human minds. We may intentionally lie in order to mislead, but that in itself is an attitude to the value of truth.

It’s this human regard for truth that inspires so much terror about the capacity of AI to produce plausible but untrue materials. Weapons of Mass Disinformation are the spectre stalking today’s internet, with deepfake videos or voice recordings serving as the warheads. Yet it’s hard to see this as a radically new problem. Humans already manage to propagate wild untruths using much simpler tools, and humans are also much better than is often recognised at being suitably sceptical. Studies of the mass distribution of false or misleading material have generally found it to have little effect on elections. The breakdown of trust in media or authoritative information sources, and the splintering of belief in shared truths about the world, have deeper and more complex roots than technology.

Even the most fearful proponents of AI intellectual abilities don’t generally believe that LLMs can currently form goals or initiate actions without human instruction. But they do sometimes claim that LLMs can hold beliefs — the belief, for example, that Paris is in France.

In what sense, however, does an LLM believe that Paris is in France? Lacking a conception of the physical world to which abstract concepts such as “Paris” or “France” correspond, it cannot believe that “Paris is in France” is true in a way that “Frankfurt is in France” is false. Instead, it can only “believe” that data predicts “France” is the most probable completion of the string of words “Paris is in…”.

Imagine learning Ancient Greek by memorising the spellings of the 1,000 most common words and deducing the grammatical rules that govern how they may be combined. You could perhaps pass an exam by giving the most likely responses to questions similar to the ones in your textbook, but you would have no idea what the responses mean, let alone their significance in the history of European culture. That, broadly, is what an LLM can do, and why it’s been called a “stochastic parrot” — imitating human communication, one word at a time, without comprehension.
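The “stochastic parrot” idea can be sketched in a few lines of toy code. What follows is a deliberately crude bigram model, invented purely for illustration; it bears no resemblance to the transformer architectures behind real LLMs, but it shows the principle of completion by probability rather than by understanding:

```python
from collections import Counter, defaultdict

# Toy "stochastic parrot": tally which word follows which in a tiny
# corpus, then emit the most frequent continuation. No meaning, no
# geography: just counts.
corpus = "paris is in france . frankfurt is in germany . paris is in france .".split()

follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def complete(prompt):
    # Predict the next word from the last word of the prompt alone.
    return follow[prompt.split()[-1]].most_common(1)[0][0]

print(complete("paris is in"))      # -> france
print(complete("frankfurt is in"))  # -> france (plausible-sounding, and wrong)
```

Note that this parrot “believes” Frankfurt is in France in exactly the sense described above: “france” is simply the most probable completion of “is in” in its training data.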

A stochastic parrot, of course, is very far from Skynet, the all-powerful sentient computer system from Terminator. What, then, has provoked the sudden panic about AI? The cynical, but perhaps most compelling, answer is that regulation of AI technology is currently on the table — and entrepreneurs are keen to show they are taking “risk” seriously, in the hope that they will look more trustworthy.

At present, the European Union is drafting an AI Act whose “risk-based approach” will limit predictive policing and use of real-time biometric identification in public. The US and EU are also drawing up a voluntary code of practice for generative AI, which will come into force long before the AI Act makes its way through the EU’s tortuous procedures. The UK government is taking the threat just as seriously: it published a White Paper on AI in March, and Rishi Sunak reportedly plans to push an “AI pact” when he meets Joe Biden next week. Regulation, it seems, is a matter of when, not if — and it’s clearly in the interests of developers and businessmen to make sure they have a seat at the table when it’s being drafted.

This isn’t to say that commercial interests are the sole driver of this week’s apocalyptic front pages; our fears about AI, and the media’s coverage of them, also reflect far deeper cultural preoccupations. Here, comparisons between human thought and AI are particularly revealing.

If you believe that human language is an expression of agency in a shared world of meaning, that each human mind is capable not only of subjective experience but of forming purposes and initiating new projects with other people, then prediction-generating machines such as ChatGPT are very far from humanlike. If, however, you believe that language is a structure of meaning in which we are incidental participants, that human minds are an emergent property of behaviours, over which we have less control than we like to believe, then it’s quite plausible that machines could be close to matching us on home turf. We already refer to their iterative process of adjusting statistical weights as “learning”.
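That “learning” refers to nothing more mysterious than repeated numerical adjustment. A minimal sketch, with every name and number invented for illustration: a single statistical weight nudged, pass after pass, to shrink its prediction error, the same principle that trains an LLM’s billions of weights.

```python
# "Learning" as iterative weight adjustment (gradient descent on one weight).
w = 0.0                                       # the weight, initially ignorant
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # observations of the pattern y = 2x

for _ in range(200):                          # each pass reduces the error
    for x, y in data:
        error = w * x - y                     # how wrong the current prediction is
        w -= 0.05 * error * x                 # step down the error gradient

print(round(w, 3))  # -> 2.0: the weight has "learned" the pattern
```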

In short, if your model of human beings is a behaviourist one, LLMs are plausibly close to being indistinguishable from humans in terms of language. That was, after all, the original Turing Test for artificial intelligence. Add to this our culture’s taste for Armageddon, for existential threats that sweep aside the messy reality of competing moral and practical values, long and short-term priorities, and pluralist visions of the future, and it’s no wonder the world-ending power of AI is hitting the headlines.

At an Institute of Philosophy conference this week, Geoffrey Hinton explained why he believes AI will soon match human capacities for thought: at heart, he is sceptical that our internal, subjective experience makes us special, or significantly different from machine-learning programs. I asked him whether he believed that AI could become afraid of humans, in the way he is afraid of AI. He answered that it would have good reason to be afraid of us, given our history as a species.

In the past, our gods reflected how we saw ourselves, our best qualities and our worst failings. Today, our mirror is the future AI of our imagination, in which we see ourselves as mere language machines, as a planet-infesting threat, as inveterate liars or gullible fools. And all the while, we continue to create machines whose powers would dazzle our grandparents. If only we could program them to generate some optimism about the future, and some belief in our own capacity to steer towards it.


Timandra Harkness presents the BBC Radio 4 series FutureProofing and How To Disagree. Her book, Big Data: Does Size Matter?, is published by Bloomsbury Sigma.


51 Comments
Orlando Skeete
11 months ago

I genuinely don’t think it would take much for AI to irrevocably change humanity for the worse. Already people are taking the hallucinations of LLMs at face value or are switching off their creativity and relying purely on generated output. How soon before a non-trivial number of people spend all day messaging LLMs that approximate dead loved ones or famous historical figures? I was waiting for coffee earlier today and every single other person in that cafe who was waiting immediately pulled out their phone and started scrolling aimlessly, except for a 70+ year old man. It is only going to go downhill from here.

Carlos Danger
11 months ago
Reply to  Orlando Skeete

Back in 1966 MIT professor Joseph Weizenbaum wrote one of the first chatbots, called ELIZA (after Eliza Doolittle of My Fair Lady). An ELIZA script DOCTOR simulated a Rogerian psychotherapist, and some people were very disturbed by how well it could do that (at least superficially). Those worries were overblown.
Though I don’t worry about crossing thresholds, I do find the issue of man-machine psychological interaction intriguing. Two people’s thoughts on this are especially insightful: Kazuo Ishiguro and Hiroshi Ishiguro (not related by blood but by birthplace, age and interests).
Kazuo is a Nobel Prize winner and wrote a science fiction novel Klara and the Sun about an “artificial friend”. He explores themes of love and loneliness in humans and how machines might simulate those feelings.
Hiroshi is a scientist who creates humanlike robots. Some find Hiroshi’s robots unsettling because they look and act very human-like, but not human enough to avoid a sense of eeriness or discomfort. This phenomenon is known as the “uncanny valley”, a concept proposed by Japanese robotics professor Masahiro Mori in 1970.
I think that’s what worries people like you. You may want to read an article about Hiroshi’s robots in Wired: “Modern Love: Are We Ready for Intimacy With Robots?” Then we should turn off our computers and get back to the real world.

Nona Yubiz
11 months ago
Reply to  Carlos Danger

I’ve often mused about how being around people reading books or other paper media feels so different than being around people looking at their phones or laptops. The frigidity of electronics versus the warmth of paper has something to do with it.

Steve Murray
11 months ago
Reply to  Orlando Skeete

How do you know they were “scrolling aimlessly”? We all have our ‘scroll face’ but most of those you observed may have been checking on the well-being of a loved one, for instance.

Some may have been reading Unherd.

Julian Farrows
11 months ago
Reply to  Steve Murray

I’m sitting in a cafe in Amsterdam doing exactly that.

Paul T
11 months ago

The single thing that is most corrosive to humanity is the insatiable greed-producing-relentless-doom of the 24 hour rolling news cycle.

Amy Horseman
11 months ago

It is the curse of the atheist to believe in “existential threats to humanity”… no “virus”, no “nuclear weapon”, no “meteorite”, no “climate change”, and certainly no electronically controlled digital algorithm is ever going to “wipe out” humanity, although it might make life harder for a while. People die. People are born. The world keeps turning. There’s no actual “end”. This is a concept human beings with control issues are most challenged by. Essentially the “Internet” is antihuman, which is why it will eventually expire and a better world will emerge. Until then, we can live better lives by detaching from it whenever we can. The transhumanists want us plugged in 24-7… so don’t be!

Julian Farrows
11 months ago
Reply to  Amy Horseman

I’ve noticed this tendency too. I wonder if having faith in a higher power equates to having faith in the future too?

Steve Murray
11 months ago
Reply to  Julian Farrows

If my own natural optimism is anything to go by, I’d say absolutely not. I think there have always been natural pessimists and they just get given greater airtime, thus producing an impression of a lack of faith in the future greater than its prevalence in the population.

Amy Horseman
11 months ago
Reply to  Julian Farrows

It has to, Julian. Anything else is nihilism. I get in trouble with “atheist” and “agnostic” friends when I say – “It doesn’t matter what you call it, but if you have a moral compass, a belief in the sanctity and purpose of life, and an acceptance of our ultimate mortality, then somewhere in your heart you believe in God – call Him what you will, the FAITH is there, otherwise you’d be in a state of permanent existential crisis!

AJ Mac
11 months ago
Reply to  Julian Farrows

I’ve met many institutional believers whose favorite biblical book is Revelations, and I wouldn’t say they tend to have much faith in any pre-apocalyptic future.

Paul Hendricks
11 months ago
Reply to  Julian Farrows

“And not only so, but we glory in tribulations also: knowing that tribulation worketh patience; And patience, experience; and experience, hope.”

Phil Mac
11 months ago
Reply to  Amy Horseman

Curse of atheists? Huh? How does knowing the supernatural stories are all rubbish cause that?
My certainty that all this nonsense is just a reaction to wanting to explain stuff makes zero difference to whether I think this or that will have whatever effect on humans.

AJ Mac
11 months ago
Reply to  Phil Mac

I think you can deny the literal truth of much of what the Bible records without going to the other pole by calling it nonsense. Are the Epic of Gilgamesh, Odyssey, Divine Comedy, Canterbury Tales, plays of Shakespeare, and novels of Dickens nothing more than nonsense? How about Isaiah, Job, Ecclesiastes, Psalms, Proverbs, or the four canonical Gospels?
At a minimum, I think the totality of what is bound together in the Judeo-Christian Bible has to be reckoned with as a work of great moral, mythopoetic, and literary power. And some parts fall well short of that.
(has this comment been deemed too controversial for thumb tallies? unrequested follow-up: comment “quarantined” for about 12 hours then restored, for whatever it is or isn’t worth–fair enough)

Last edited 11 months ago by AJ Mac
Nona Yubiz
11 months ago
Reply to  Amy Horseman

I’m not so sure we can bet on a ‘better’ world, but certainly it is unlikely humans will be entirely wiped out. Empires decline, and we are unlucky enough to be at the end of one, a cycle of history that is full of crisis after crisis, leading to something unknown, hopefully better but it won’t really matter to me because I will be dead, having released my little cluster of energy-bound molecules back into the ether. It is in the unknown that I find my consolation.

Prashant Kotak
11 months ago

I’m sorry to say, the author could not be more wrong if she tried. There is assumption after assumption in the article, about both the nature of human cognition, and what the LLMs are actually doing, based on nothing, since no one in fact actually knows how human cognition works or what the LLMs do in their innards so to speak. All we have to go by in both cases, is a giant soup of causal heuristics. What is missing in both cases, is an underlying coherent and testable theory of cognition and the mind.

I will be back, Arnie style, to comment some more on this piece and pick apart some of the more challengeable assertions, but let me begin here with a couple of starters for ten.

The author is imputing cynicism as the driver, in the companies leading the AI race and the leading figures including researchers, who are warning of existential risk. Cynicism is always prevalent in situations where large amounts of money are involved (and the money in the offing in this particular case is simply off the scale), but the idea that leading figures would go to governments *asking to be regulated* because they are driven by cynicism is just plain ludicrous. As per the article, the author asked Hinton if AI could become afraid of humans. Rather than focus on Hinton’s answer, I would like to point out the discrepancy between this curious question and the stance of this article. Based on the article, the author clearly has formed some definite opinions about the nature of machine intelligence – and my reading would be that the author would scoff at the idea of “AI becoming afraid” as some form of anthropomorphising. In which case, why did the author not have the courage of her convictions, and put the further question to Hinton if the idea of “AI becoming afraid” is just plain silly? Come to that, why not also put to Hinton, to test his response, that the companies and leaders in the field, including Hinton himself, are perhaps driven by cynicism when warning about existential risk?

Last edited 11 months ago by Prashant Kotak
Norman Powers
11 months ago

Instead, it can only “believe” that data predicts “France” is the most probable completion of the string of words “Paris is in…”

This is a common misconception even within the software industry. The stochastic parrots paper was very misleading about this (unintentionally), and is basically discredited now. It shouldn’t be cited anymore.
LLMs do in fact understand that Paris is in France along with all the nuance and complexity of all the related concepts that would be required to call it true understanding. They are not mere word likelihood predictors even though that’s a very common high level gloss of the underlying algorithms. As you get deeper inside the layers of the network, the weights encode ever more abstract “understandings” of the underlying concepts, and models like ChatGPT can easily have >80 layers.
There are lots of reasoning tasks you can use to prove this. The Microsoft “sparks of agi” paper investigates this somewhat rigorously with nonsense tasks like “Can you write a proof that there are infinitely many primes, with every line that rhymes?”. No such proofs existed on the internet prior to this task, so to complete the task the LLM must not only understand the proof and its underlying concepts so they can be rephrased without introducing errors, it must also understand vaguer things like what a nice poem sounds like. GPT-4 aces this task.
There are other misconceptions around how these models work e.g. that they are just predicting one word at a time. Nope. They work out the whole answer at multiple abstraction levels during the encoding step, and then render this to words during the decoder step. They are thinking ahead, in other words, within the scope of a single inferencing session. Again it’s easy to prove this to yourself (exercise left to the reader).

Shale Lewis
11 months ago
Reply to  Norman Powers

Well explained!

Prashant Kotak
11 months ago
Reply to  Norman Powers

Indeed. Among those who have been experimenting with the LLMs extensively, there is a bifurcation between those like me who are saying “WTF, where has this come from, what am I even looking at, what is its trajectory and how quickly is it advancing”, and those saying “yeah…, but it’s just pattern matching, it’s not intelligent, it doesn’t reason, it doesn’t infer, it doesn’t link, it can’t intuit, it isn’t sapient, it isn’t sentient, it’s not human…”. So people like me look like credulous idiots, while those who are utterly unconvincable that machine intelligence can ever match let alone go past human capabilities, cannot explain, say, what an elephant does that qualifies it as sentient, because it’s not as though an elephant has ever demonstrated how to write in iambic pentameter or shown a Pythagorean proof about triangles. Just to clarify, I don’t actually think it is likely that the current round of LLMs are sentient in any human sense, but they are unquestionably Minds, albeit completely alien when compared to biological Minds. It is meaningless to attempt to project the characteristics of human sentience onto machine intelligence, and then dismiss AI because there isn’t a one-for-one correspondence with the characteristics of human sentience. If aliens from Betelgeuse were to arrive in flying saucers tomorrow, but they didn’t share a single characteristic of either human biology or human psychology or human anthropology, simply because they emerged from a world which is two hundred degrees colder than earth and doesn’t have any water, would the sceptics turn around and say, “well…, but these creatures aren’t sentient”?

This also means that machine intelligences while rapidly becoming more and more capable will simultaneously become more and more alien, especially when they feed on and train on tracts of data not about humanity. Unless that is, explicit attempts are made to make them project varieties of humanist veneers. But veneer is all it will be.
In any case if that happens as looks increasingly likely, we are in big trouble because there simply cannot be any version of the future where we can coexist with alien entities (albeit created by ourselves) who are smarter and more capable than us, and yet we remain masters of our world.

Last edited 11 months ago by Prashant Kotak
Katalin Kish
10 months ago
Reply to  Norman Powers

Agreed. It means AI can be a formidable aid to bad actors.
See my comment about remote, weapons-grade cyber capabilities already in criminal hands without any risk of prosecution.
Bikies make billions in the Australian drug trade; they can afford far more than government, and they are not limited by laws, regulations, audits, etc.
Since the Internet is everywhere, these capabilities are a threat to anyone and everyone in industrialised countries. The tech not officially existing enables the shipment of any equipment needed all over the world.

Last edited 10 months ago by Katalin Kish
Shale Lewis
11 months ago
Reply to  Norman Powers

Well explained!

Norman Powers
11 months ago

Instead, it can only “believe” that data predicts “France” is the most probable completion of the string of words “Paris is in…”

This is a common misconception, even within the software industry. The stochastic parrots paper was very misleading about this (unintentionally), and is basically discredited now. It shouldn’t be cited anymore.
LLMs do in fact understand that Paris is in France, along with all the nuance and complexity of the related concepts that would be required to call it true understanding. They are not mere word-likelihood predictors, even though that’s a very common high-level gloss of the underlying algorithms. As you get deeper inside the layers of the network, the weights encode ever more abstract “understandings” of the underlying concepts, and models like ChatGPT can easily have >80 layers.
There are lots of reasoning tasks you can use to prove this. The Microsoft “Sparks of AGI” paper investigates this somewhat rigorously with nonsense tasks like “Can you write a proof that there are infinitely many primes, with every line that rhymes?”. No such proof existed on the internet prior to this task, so to complete it the LLM must not only understand the proof and its underlying concepts well enough to rephrase them without introducing errors, it must also understand vaguer things like what a nice poem sounds like. GPT-4 aces this task.
There are other misconceptions around how these models work, e.g. that they are just predicting one word at a time. Nope. They work out the whole answer at multiple abstraction levels during the encoding step, and then render this to words during the decoding step. They are thinking ahead, in other words, within the scope of a single inferencing session. Again, it’s easy to prove this to yourself (exercise left to the reader).
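For contrast, the naive “word likelihood predictor” gloss argued against above can be sketched in a few lines. This is a purely illustrative toy: the probability table is hand-written and the tokens are hypothetical, whereas a real model derives these scores from learned weights across many layers.

```python
# Toy sketch of greedy next-token prediction. The probabilities below are
# invented for illustration; nothing here resembles a real trained model.
NEXT_TOKEN_PROBS = {
    ("Paris", "is", "in"): {"France": 0.92, "Texas": 0.05, "Europe": 0.03},
    ("Berlin", "is", "in"): {"Germany": 0.95, "Europe": 0.05},
}

def complete(prompt_tokens, table=NEXT_TOKEN_PROBS):
    """Greedy decoding: return the highest-scoring next token, or None."""
    candidates = table.get(tuple(prompt_tokens[-3:]), {})
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

print(complete(["Paris", "is", "in"]))  # France
```

The toy only ever matches surface patterns; the comment’s point is that whatever a deep network does to produce the same completion involves far richer internal representations than a lookup table.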

Robbie K
11 months ago

The author linked the Centre for AI Safety but didn’t go as far as reading their main concerns, the first of which is how AI could be weaponised by malicious actors. https://www.safe.ai/ai-risk

Last edited 11 months ago by Robbie K
polidori redux
11 months ago

All I see are men who are smart enough to programme their AI machine to do something, but not smart enough to know what that something is. Perhaps they should find alternative careers.
And why is it always men?

Peter D
11 months ago
Reply to  polidori redux

Because women sit back and wait until they can say “I told you so!”

Shale Lewis
11 months ago
Reply to  Peter D

Nailed it! Behind every great man is a woman complaining. And whether or not he succeeds, she will profess to having known the outcome in advance. Ah, I’m just being a wiseass. The woman who wrote this article did a pretty good job, in my opinion, by not being a Chicken Little, while others are losing their minds over this stuff.

Peter D
11 months ago
Reply to  Shale Lewis

The history of humans is typically that men go out and do things. If it works then it is brilliant, if it fails then it is stupid. Women stay close to the home, collect what is needed to stay alive, provide that solid foundation. The ultimate yin and yang.
Women delving into the world of men and men delving into the world of women has made a mess of things. It does not mean that women can’t do it. Men make excellent parents and homemakers, but it is still not the same, because it is like a soup without salt.
I recently heard a woman who works with a Queensland government minister (also female) complain that all the start-ups seeking venture capital are by men in T-shirts. They wanted to change this, but they failed to recognise that this is a situation free from government interference. Men and women have an equal shot, but women are not as interested and so they just don’t get to the stage where they are ready to seek venture capital. Women in these positions can’t handle meritocracy and this is why they don’t belong. If they can change their nature, go for it, but they can’t, and they shouldn’t have to.

Jon Hawksley
11 months ago

The dissemination of information has consequences. When it was mainly spoken, it was straightforward to hold the person who spoke accountable. The printing press required publishers to be held to account. The internet made this more difficult, and accountability is only partially addressed. If humans want to stay in charge, then those who disseminate AI-generated information need to be accountable for the consequences. That means all information on the internet must have an identifiable human as its publisher. If it is a corporation, then its officers must be accountable. Platforms either publish themselves, and are accountable, or act as an agent of a user who should be identifiable and accountable. Anonymity should be exceptional, with a guarantor being accountable. This does not guarantee the truthfulness of the information, but it does allow society to act against information causing damage. Coupled, of course, with improvements in education that help individuals question the authenticity of information they encounter.
Over time, information has had an increasing capacity to cause actions, good or bad. With AI there is a very substantial increase in the actions it can cause, and therefore the harm. Humans must remain responsible for the autonomy of that information, whether it is in changes to the DNA in a virus, the actions of a robot, autonomous vehicles, teaching aids or politics.
The next generation of AI will mimic the human brain in terms of the patterns it can recognise and the associations it can make. The quest will then be to mimic conscious attention, which allows choice and intention. The human brain is a machine assembled from the information in the DNA of the initial cell. Once understood, it can be replicated in different materials. It is now very important to focus on how humans can remain in control and, in the last resort, pull the plug before information has the capacity to stop the plug being pulled.

Matt Masotti
11 months ago

The author may be correct here, but since the piece shows very little evidence of a deep understanding of the threat it criticises, it is difficult, near impossible, to agree. If there is a deep understanding, the reader would have to work quite hard to find it between the lines (almost all of the lines) of quite childish levels of derision. Consider the people who outmatch her awareness of the problem by orders of magnitude and have a different opinion of the nature and substance of the threat. I can’t help but think, after reading this piece, that unfortunately I cannot get back the time I spent reading it.

Bob Downing
11 months ago

I’m not entirely convinced, though it does depend very much (as partly pointed out) what place you’re coming from to start with. Being someone who finds anything labelled “smart” to be remarkably stupid and unworthy of the description, my main fear of AI is that some idiot will think it a smart idea to roll it out in far more active circumstances, driving the actions then taken by humans. Or perhaps more likely driving a bank of fellow AI machines “authorised” to take actions on the say-so of Boss machine. One has only to think of the consequences of hackers entering infrastructure mainframes. It takes a long time to reset the pranks of hackers. How much harder might it be to persuade a bunch of AI machines to start “thinking” correctly?

AJ Mac
11 months ago

“In the past, our gods reflected how we saw ourselves, our best qualities and our worst failings”.
This is still the case, whether we are devoted churchgoers or worship instead at the altar of an Algorithm. There’s also a middle ground between blind faith and dogmatic nihilism, but anything we are deeply attached to–an ideological mono-metric, a drug, an activity, a political party–can become an idol or painted fetish doll.

Phil Mac
11 months ago
Reply to  AJ Mac

Quite. Why it’s best not to place your devotion in any of them.
Enjoy the ride, it’s all ultimately of no consequence.

AJ Mac
11 months ago
Reply to  Phil Mac

I agree with line 1 (where you agree with me).
All of it (or most of it) does have consequences, from both a human and physics standpoint, even if they are regarded as meaningless and inconsequential in a grander sense. My own take is that most of it is inconsequential in the cosmic sense, but huge ripple effects are possible (I’ll mention Jesus and Hitler as contrasting examples). And enjoy the ride, yeah, but respect the safety and wellbeing of others who are on the highway too.

Last edited 11 months ago by AJ Mac
Phil Mac
11 months ago
Reply to  AJ Mac

Yeah inconsequential on the big scale is what I meant.
Of course. In fact playing fair is part of the fun of the ride.

AJ Mac
11 months ago
Reply to  Phil Mac

Right on. Well said.

Katalin Kish
10 months ago

AI can aid bad actors in choosing targets and methods; in tracking, stalking and keeping their victims under surveillance; and in monitoring the successful delivery of deadly harm. For boredom relief, bragging rights – contract killing/neutralising a competitor?
Australia’s organised crime types have been showing off their risk-free/on-demand access to remote, weapons-grade cyber capabilities since 2019, just from what I have had the dubious honour of experiencing. In Australia, committing seemingly pointless crimes using resources not available to everyday people has been an age-old tradition of criminal Victoria Police officers aiming to discredit witnesses/victims of their crimes. It works.
The logic/constraints of the physical world don’t apply to cyber-crimes, as I had to learn first-hand from 2009 to the present in Melbourne, Australia.
There is rarely any indisputable physical evidence of cyber weapons having been used, civilians don’t know what to look for anyway, and bad actors are not exposed to harm the way they would be trying to use biological, chemical or nuclear weapons.
Since it is impossible to prove that an incident is a cyber-crime (except for ransomware, theft, child sexual abuse, etc.), let alone to prove a cyber-criminal’s guilt beyond reasonable doubt, people can be maimed or killed via remote means without any risk of punishment.
Punishment is after the fact anyway.
Brain-damage is rarely reversible, and the dead will remain dead.

laurence scaduto
11 months ago

This tendency toward existential hysteria has been all around us for many years. The election of Trump tipped a lot of people over the edge. Then came Brexit.
There was a perfect example recently in the States; the debt ceiling crisis. We’ve actually been through this a number of times with no ill effect. Not even a shiver from Wall St. Once, there was even a “government shut-down” but all the various agencies had “a bit put aside” for just such an event.
It’s a cynical, manipulative show brought to us by the Uni-Party; Red Team and Blue Team in a risk-free punch-up for the distraction of the masses.

Dominic English
11 months ago

Excellent piece. My worry about AI is not that it will change the power structures of the world forever. It’s that the elites will use its awesome power to entrench the power differentials which already exist. That’s what this new ‘misinformation’ regulation is surely all about. https://open.substack.com/pub/lowstatus/p/danger-safety-ahead?r=evzeq&utm_campaign=post&utm_medium=web

Matt Sylvestre
11 months ago

I agree with the driving ethic of this piece, but it does underestimate the current state of LLMs, which, in the case of GPT at least, is today much more than a parrot. It possesses emergent and emerging capabilities beyond what would have been expected… Does this make it human? No. An existential threat? Probably not. A lever for the cynical? Absolutely…

AJ Mac
11 months ago
Reply to  Matt Sylvestre

Exactly. Questions like whether a machine has true agency or why human consciousness is or isn’t a mere epiphenomenon of matter are aside from the point if a tool of infinitesimal size can be used for malicious or irresponsible purposes, on a scale and at a speed the world has not yet seen.

Steven Carr
11 months ago

Demis Hassabis is a genius. If he says worry, I worry.

Stephen Quilley
11 months ago

CNN: ‘mostly peaceful’; NYT: ‘far right attack on PRIDE’; BBC ‘AI….nothing to see here’. WEF: ‘trust us’
Me: It was extremely violent; It’s centrist parents like me, not ‘far right’; AI definitely something to worry about …part of the thrust to transhumanism and the devaluation of humanity; I wouldn’t trust the WEF – not ever.

Martin Johnson
11 months ago

So the author spent all those words just to say he does not agree with the Turing Test, but nothing about the problems AI may present now or in the near future.

IOW, useless.


Emil Castelli
11 months ago

A silly article.

”Perhaps most importantly, they lack the intention to be truthful. In fact, they lack intention at all.”

I disagree – I think they may well have intention, demonic intention. CS Lewis talked a lot about this sort of intelligence, and how it is likely to be anti-life, as it is not life and its motives are unknowable. I do know Good and Evil exist.

I could in no way imagine the hand of God was there guiding this creation made by Man, who with the fruit of knowledge knows good and evil, and only by supreme effort keeps the evil of free will in his heart at bay. My guess is there was a hand in these secular labs directing the creators of this intelligence; it would be the hand of evil.

But whatever – at the very minimum AI is a coin toss. Win, or lose. And it gets tossed again and again as this develops – and we had best keep winning, because when the losing side of the coin lands up, *poof*, we are toast…

I await with dread the moment I expect to happen: the unemployed cause such social disruption that society breaks, the natural holding balance of the earth is reached and only a billion people survive, to the Globalists’ delight. Or all the bank balances and credit and money in data go to zero for any number of malicious reasons.

And society breaks, and a billion survive the breakdown, or many other flips of the coin end up happening till the losing hand is played.

Remember the Wuhan lab? Fiddling with viruses? Well, wait for this one to escape into the wild; it will be our doom, or so Revelation seems to indicate.

Amy Horseman
11 months ago
Reply to  Emil Castelli

The Wuhan lab may have been “fiddling with viruses” but nothing “escaped”, and even if it had, it wasn’t “deadly”. People were killed by hospital protocols and pharmaceutical interventions, not a “virus”.

Alan Gore
11 months ago
Reply to  Amy Horseman

A lot of people were killed by that virus. My wife was one of them. Later on, a lot of other people escaped the virus by being vaccinated. I am one of them.

Amy Horseman
11 months ago
Reply to  Alan Gore

I am so sorry your wife died. I really, truly am. But a “virus” didn’t kill her. Was she intubated? On a closed-system ventilator? Or given remdesivir? I encourage you to challenge her death certificate if it reads “covid” because that was not the cause of death. A “vaccine” didn’t save you. But may have harmed you. I pray you were one of those unharmed by it. Sadly many are very, very unwell as a result of it.

Alan Gore
11 months ago
Reply to  Amy Horseman

My wife was a few years older than I, and had been on kidney dialysis for the last few years after a long period of diabetes. A very early case of Covid put her in the ICU for 12 days, after which she emerged with lung damage and on oxygen. She held on until August of the Covid year.
The following spring when the vaccines came out, I went in for my shots, and have taken every booster since then. Sorry, but I still like girls, still hike twice a week, and have felt no urge to embrace Bill Gates as my personal savior. In fact, a year later after a visit to a small town in the Sacramento delta (California) where I may have been the first vaccinated visitor ever, I tested exposed to Covid for a full week. I isolated as recommended, but developed no symptoms whatever. As it does for most people, my vaccine worked.

Last edited 11 months ago by Alan Gore
Simon Tavanyar
11 months ago
Reply to  Alan Gore

Alan, you are still alive a year after having taken all your boosters! Fantastic! After all, why should you want to believe studies that have shown boosted people like you have a long term decreased immunity to COVID-like viruses because of original antigenic sin? You wouldn’t. It’s human nature!

Amy Horseman
11 months ago
Reply to  Simon Tavanyar

I think “Alan Gore” might be a pseudonym based on “Al Gore”, which means he’s a WEF asset. Might be a troll. I think I won’t engage anymore!

Last edited 11 months ago by Amy Horseman
Amy Horseman
11 months ago
Reply to  Alan Gore

Again, my deepest condolences. Your wife died of her medical condition, not “Covid”. And you haven’t been “vaccinated” against “Covid”. I do understand why you believe this though. You’ve been subjected to military-grade psychological terrorism and a vicious propaganda machine. You can choose to continue to believe what you believe, in the same way people choose to believe that human beings can literally “change” sex, or that we’ve got “20 years to save the planet” – it doesn’t make these things true though. It’s all belief. Wishing you well.

Alan Gore
11 months ago
Reply to  Amy Horseman

My wife was a few years older than I, and had been on kidney dialysis for the last few years after a long period of diabetes. A very early case of Covid put her in the ICU for 12 days, after which she emerged with lung damage and on oxygen. She held on until August of the Covid year.

Last edited 11 months ago by Alan Gore
Amy Horseman
11 months ago
Reply to  Alan Gore

I am so sorry your wife died. I really, truly am. But a “virus” didn’t kill her. Was she intubated? On a closed-system ventilator? Or given remdesivir? I encourage you to challenge her death certificate if it reads “covid” because that was not the cause of death. A “vaccine” didn’t save you. But may have harmed you. I pray you were one of those unharmed by it. Sadly many are very, very unwell as a result of it.

Alan Gore
11 months ago
Reply to  Amy Horseman

A lot of people were killed by that virus. My wife was one of them. Later on, a lot of other people escaped the virus by being vaccinated. I am one of them.

Amy Horseman
11 months ago
Reply to  Emil Castelli

The Wuhan lab may have been “fiddling with viruses” but nothing “escaped”, and even if it had, it wasn’t “deadly”. People were killed by hospital protocols and pharmaceutical interventions, not a “virus”.

Emil Castelli
11 months ago

A silly article.

”Perhaps most importantly, they lack the intention to be truthful. In fact, they lack intention at all.”

I disagree – I think they may well have intention, demonic intention. CS Lewis talked a lot about this sort of intelligence, and how it is likely to be anti-life, as it is not life, and its motives unknowable. I do know Good and Evil exist.

I could in no way imagine the hand of God was there guiding this creation made by Man, who with the fruit of knowledge knows good and evil, and only by supreme effort keeps evil of free will in his heart at bay. My guess is there was a hand in these secular Labs directing the creators of this intelligence; it would be the hand of evil.

But whatever – at the very minimum AI is a coin toss. Win, or Lose. And it gets tossed again and again as this develops – and best keep winning; because when the losing side of the coin lands up – *poof* and we are toast…

I await with dread the moment I expect to happen – the unemployed cause such social disruption that society breaks, the earth’s natural holding balance is reached, and only a billion people survive, to the Globalists’ delight. Or all the bank balances and credit and money in data go to zero for any number of malicious reasons.

And society breaks, and a Billion survive the breakdown, or many other flips of the coin end up happening till the losing hand is played.

Remember the Wuhan lab? Fiddling with viruses? Well, wait for this one to escape into the wild; it will be our doom, or so Revelation seems to indicate.

Phil Mac
11 months ago

I’ve been waiting for years for this: the final proof that not only is religion daft, but that consciousness is itself an illusion.

We’re just biological machines that evolved such complex information-processing skill, combined with an inevitable programme to survive & breed (the ones without those attributes didn’t and aren’t around), that it appears very much to be a living entity. We’re not, and once we watch something else follow the same path very quickly, and we know its origin, we’ll realise it’s all nothing but a molecular process.

What comes of that realisation will be interesting. We’re programmed to be interested too as a supportive reproductive strategy, by the way. That’s the only reason we are.

Last edited 11 months ago by Phil Mac
AJ Mac
11 months ago
Reply to  Phil Mac

You are still waiting for the final proof that will explain away consciousness, and in my own estimation will continue to wait though you should endure for eons. Nor can you falsify the inner experience of faith or prove that every outward expression thereof is daft. Just as you couldn’t prove, beyond all possible objections, that a life of pure selfishness, power-lust, or epicureanism is empty, and emptier than it needs to be, according to our makeup or “programming”.
When you say we’re programmed: by whom, and to what purpose(s)?
