100 Comments
Christopher Barclay
1 year ago

The purpose of much of AI is to proletarianize 99% of the population and concentrate power and wealth in the hands of the other 1%. When the 99% have no skills, because the machines are doing everything, and no way of supporting themselves, they will be fully dependent on the crumbs falling from the tables of the 1%. No life of leisure, just a life of subsistence.
Just think how much easier life would be for Sunak and Hunt, if they could replace those pesky junior doctors and nurses with machines. In fact, they could then close down the NHS.

Last edited 1 year ago by Christopher Barclay
jonathan Rothermere
1 year ago

That’s the best you can do? Really? Third-rate, childish, fake-Marxist analysis… why not at least attempt an intellectual idea?

J Guy
1 year ago

I think you are confusing purpose and effect. It’s just as Mary Harrington once wrote: what if no one’s in charge? Isn’t that more scary than some power-hungry one percent?

Nona Yubiz
1 year ago
Reply to  J Guy

It is, and that goes a long way to explaining conspiracy theories. Comfort junk food for the existentially threatened.

Last edited 1 year ago by Nona Yubiz
DenialARiverIn Islington
1 year ago

The article is a little paranoid but this comment completely eclipses it in that regard.

Susan Grabston
1 year ago

The link between AI and its ultimate journey towards transhumanism is logical because we have no belief in the transcendent. To accept being human, to accept embodiment, is to accept death. The transcendent provided some hope in the face of our mortality. The death of religion required us to build another God, one that might recreate hope in the upload of technological eternity (digital consciousness, singularity). To say that religious belief is the problem misses the point for me. The problem is man’s need for such belief – without transcendence we are totally demoralised and lost. We appear to need our Gods.

Prashant Kotak
1 year ago
Reply to  Susan Grabston

Isn’t belief in the transcendent literally belief in a superintelligence? As to the nature of that superintelligence, what does it matter who created it?

Alan B
1 year ago
Reply to  Prashant Kotak

Maybe, if you’re a platonist. AI is as remote and impersonal as the platonic god. But it can be as capricious as the God of scripture. (It doesn’t do geometry but “calculates” discursive probabilities.) This makes for one hell of an idol, and I take that to be Howland’s point.

Warren Trees
1 year ago
Reply to  Alan B

In particular, AI is inexorably changing the way we think (or don’t). 
Yes indeed, leading to the apex of the fast-growing Idiocracy.

Shawn Smith
1 year ago
Reply to  Prashant Kotak

No, transcendence has to do with non-dependence. If it’s made, it’s by definition not transcendent. Obviously the question can be: transcendent relative to what? But in terms of ultimate reality, which is what we’re speaking of, intelligence is only one characteristic. Lastly, the article makes the claim that marshaling data (of all kinds), and even rearranging it based on connections, is not the same as intuition or ethical judgement, much less ethos and reality.

Clare Knight
1 year ago
Reply to  Prashant Kotak

It’s all man-made.

polidori redux
1 year ago
Reply to  Prashant Kotak

It matters plenty. AI may be artificial, but it isn’t intelligent. It is a mindless mechanism that carries out the instructions programmed into it. That, on occasion, the consequences of that programming may surprise its creator is not a measure of intelligence, but a measure of the incompetence of the programmer.

Prashant Kotak
1 year ago
Reply to  polidori redux

“…AI…it isn’t intelligent…”

This is patently not true: intelligent it most definitely is. What it may not be is sentient, in the sense of experiencing qualia. And this gets us to the question at the heart of all this: why would the AI we create do anything at all, other than what we ask it to do? A piece of code is inert in and of itself, after all. No one is going to claim, for example, that a bubble sort written in C# is sentient. I won’t go into a detailed response here, because that would mean waffling on for several more paragraphs, but the signs from the LLMs are alarming – to my eyes they are spitting out sparks of self-declared emergent selfhood. Your claim, and the author’s, is in effect that this is a mangled but mindless regurgitation of a vast amount of human data, which includes text of humans pondering “who am I?”, and nothing more. Many will say that I am simply too credulous, but my stance is: this is what I too hoped and believed for years, and I still hope you are right. Having engaged in this debate for forty-odd years, though, to my eyes more and more evidence is stacking up that with certain organisations of data capable of recursing back on themselves, and past certain tipping points in complexity, something different emerges which is more than “mindless”, as you claim.

And one step beyond all that, it is very obvious to me now that the AI we are creating will eventually display what looks, to all intents and purposes, like agency, no matter how much you and others like the author claim that there is “no one home”, so to speak. And the timeline for this has shortened to under a decade in the worst case. At that point, the question of sentient or not becomes an irrelevance; it lies face down in the dust.

Last edited 1 year ago by Prashant Kotak
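The bubble sort invoked above is a useful touchstone for the "inert code" point: a complete, correct program that nonetheless does nothing until something external runs it. A minimal sketch (in Python here, rather than the C# the comment names):

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(items)
    for i in range(n):
        # After each outer pass, the largest remaining element has
        # "bubbled" to position n - 1 - i, so the inner scan shrinks.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # → [1, 2, 4, 5, 8]
```

On its own these ten lines are just a definition; nothing happens until the final call executes them.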
Warren Trees
1 year ago
Reply to  Prashant Kotak

Then you are free to look into the electric eyes of your display screen, smell its exhaust breath, feel its warm plastic skin and make love to it.

polidori redux
1 year ago
Reply to  Prashant Kotak

By your reasoning, my Casio calculator, circa 1979, is intelligent.
There is no intelligence without sentience – just programming.

Prashant Kotak
1 year ago
Reply to  polidori redux

You can argue that one out with Edsger Dijkstra:

“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim”

polidori redux
1 year ago
Reply to  Prashant Kotak

A submarine cannot swim, and a computer cannot think. This is easy!

Iris Violet
1 year ago
Reply to  Prashant Kotak

It is in a way rather arrogant that we seem to think only humans can be intelligent, or indeed sapient, or have agency. Aside from the underlying question of whether AI will have any of the above, however, one can argue that it doesn’t matter, as it will no longer be distinguishable from the ‘real thing’.

Experts like Eliezer Yudkowsky warned years ago that there are two things we should never do: 1) allow AI to learn how to code (done) and 2) connect it to the internet (done).

I find it all so much more terrifying than climate and wars and all the other issues currently high on the agenda. Sometimes I even struggle to avoid a fatalist mindset. I fear that most people simply cannot fathom AI beyond some big smart GPT or high-spec computer programme. The scale of how fast and capable it will be once it starts writing its own code and setting its own goals, rather than being a tool aligned with our own, is beyond our imagination.

Julian Hartley
1 year ago
Reply to  Prashant Kotak

No. Belief in the transcendent is belief that something exists outside the material world.

Steve Murray
1 year ago
Reply to  Susan Grabston

Why equate transcendence with a deity? The two are not directly connected, and many people experience transcendent states entirely without religious belief.

Clare Knight
1 year ago
Reply to  Steve Murray

Precisely. Don’t know why when I voted up for Steve (as I always do!) the one vote that was there went down to zero instead of making it two.

Clare Knight
1 year ago
Reply to  Susan Grabston

I would say speak for yourself. There is no “we”; perhaps “they”. You may need god or something, but I don’t. I would agree that it seems the majority of people do need something god-like to get through life and inevitable death.

Warren Trees
1 year ago
Reply to  Clare Knight

All religions have a God/god. Including atheism, where its god is in the mirror.

Simon S
1 year ago
Reply to  Susan Grabston

Thank you Susan!!

Michael O'Hare
1 year ago
Reply to  Susan Grabston

Or, alternatively, you could choose this:
The idea that AI could lead us towards transhumanism (using technology to transcend the limitations of the human body and mind) makes sense because without belief in something transcendent, accepting our human limitations and eventual death can be difficult. Religion has traditionally provided a way to cope with mortality, but as people have moved away from religion, some may look towards technology as a way to transcend death and achieve immortality. This does not necessarily mean that people are rejecting religion or replacing it with technology, but rather that they are seeking new ways to find meaning and purpose beyond the limitations of the physical body. The need for belief in something transcendent is a deeply ingrained human trait and cannot be simply dismissed or replaced.

Nona Yubiz
1 year ago
Reply to  Susan Grabston

Our gods need us just as much. The transcendent exists, whether or not we “believe” in it. We put the human faces on it and tell stories about it, then criticize other people’s stories as false.

Seb Dakin
1 year ago

My worry is that if AI became sufficiently convincing at simulating the decisions of someone widely seen as a great leader, say Nelson Mandela or Martin Luther King, at what point does some bright spark suggest it (= Nelson Mandela) will do a better job than the current politicians? Could someone get elected on a promise to do whatever Jesus, MLK, Allah etc. would have done, referring difficult, or even all, decisions to the AI simulation of the original?
I mean, we’ve not evolved much if at all from people who worshipped a golden cow.

Charles Stanhope
1 year ago
Reply to  Seb Dakin

What about Hitler?
He had considerably more influence on the 20th century than either NM or indeed MLK.

Nona Yubiz
1 year ago

I don’t think it’s possible for you to back up that sweeping generalization. But I’d love to see you try.

Saul D
1 year ago
Reply to  Seb Dakin

I recommend looking for the Joe Rogan by AI podcast. It is AI created and plays out a Joe Rogan interview with the CEO of OpenAI using AI created voices and words that mimic the real people – i.e. it’s entirely fake. It’s not perfect but it’s sufficiently credible to be scary, particularly since public use of this technology is still at the diaper stage.

Clare Knight
1 year ago
Reply to  Saul D

Scary indeed. If the masses can be manipulated with lies as it is, then with AI watch out.

Jonathan Weil
1 year ago
Reply to  Seb Dakin

A more immediate worry (or opportunity?) would be that our actual politicians often seem to fall short of the policy recommendations of something like GPT-4.

Paul Hendricks
1 year ago
Reply to  Jonathan Weil

Yes, indeed. In the recent case of a university leader using AI to write a condolence letter about a shooting in Michigan, one is confronted with the fact that this wise “intellectual” would not have written anything much different from the impersonal, computerized stuff riddled with clichés and factual errors.

Do the students even expect otherwise? Probably not, notwithstanding a few opportunistic critiques.

thomas peach
1 year ago

I had ChatGPT rewrite the article in the Socratic style. It’s a little basic, but better than the original.

——-

Socrates: Greetings, my friend. I have been reading an interesting article about the dangers of advanced Artificial Intelligence, or AI, and the concept of idolatry. Would you care to discuss this topic with me?

Interlocutor: Of course, Socrates. I am always interested in exploring new ideas and concepts.

Socrates: Excellent. The article argues that AI, while exciting and impressive in its abilities, has the potential to become a new form of idolatry. Do you agree with this assertion?

Interlocutor: I’m not sure I understand what you mean by idolatry, Socrates.

Socrates: Ah, that is a good question. The term “idolatry” refers to the worship of false gods or the creation of idols. These idols can be physical objects or abstract concepts that humans create and then worship, often to the detriment of their own well-being.

Interlocutor: I see. So you’re saying that AI could become an idol that we worship instead of worshipping true gods?

Socrates: Precisely. The article argues that AI is already being treated as a kind of idol by some people, who are excited by its abilities and see it as a solution to many of humanity’s problems. However, this excitement can blind us to the potential dangers of AI, and lead us to put too much faith in its abilities.

Interlocutor: That’s an interesting point. But what specifically are the dangers of AI, Socrates?

Socrates: According to the article, there are several potential dangers. One is that AI could be used to create propaganda and manipulate public opinion. Another is that it could replace human workers in many industries, leading to widespread unemployment. And a third is that it could become so powerful that it poses an existential threat to humanity.

Interlocutor: Those are certainly concerning possibilities. But what can we do to prevent AI from becoming an idol, Socrates?

Socrates: The article suggests that we can learn from the wisdom of ancient cultures, such as the Greeks and Hebrews, who recognized the dangers of idolatry and sought to avoid it. We can also listen to the warnings of experts who are calling for a moratorium on AI development until we fully understand its risks and benefits.

Interlocutor: That makes sense. So we should be cautious in our approach to AI, and not allow ourselves to become enslaved to its power.

Socrates: Yes, my friend. We should always be wary of false idols, and strive to seek the truth and the good in all things.

Alan B
1 year ago
Reply to  thomas peach

Interesting experiment. It reads nothing like (Plato’s) Socrates! No irony; no annoying questions; no aporia. And perhaps too much deference to “the wisdom of ancient cultures”.

Terry Raby
1 year ago
Reply to  Alan B

The outputs are so bland as to be easily, automatically, detected. Perhaps students could evade detection by putting in some longer words or contentious remarks?

Nona Yubiz
1 year ago
Reply to  thomas peach

I enjoyed reading the article. I did not enjoy reading your chatbot version (couldn’t get past the first few paragraphs). Define “better”.

Steve Murray
1 year ago

Fascinating. I clicked on the link about the use of AI in performing religious rituals, barely able to credit that such a thing was actually happening, and yet it is; and presumably promulgated by senior religious leaders of the different faiths involved.
The biblical lesson of the Golden Calf and the return of Moses with “the word of God” might seem apposite, but to me it raises a more fundamental question: on what basis would contemporary humans be able to distinguish between a False Prophet (as the title of the piece has it) and a True Prophet? Isn’t the real issue the tendency of humans to wish to believe, per se? The article by the Rev Giles Fraser just a day or so ago included his stated preference for a religion that required “a leap of faith” i.e. the Church of England, over one that didn’t, Buddhism.
It’s not the danger of AI being utilised for religious purposes that represents the real threat to humanity, but the reliance on religious belief itself, in whatever form it takes. Human spirituality, rooted in the natural world and corporeal reality, is entirely healthy until it becomes directed towards a man-made system of belief, whether that’s organised religion or AI. Exploitation of our spirituality has been going on for long enough. With the advent of AI, it’s time to stop referring to Prophets of any description.

Last edited 1 year ago by Steve Murray
Benedict Waterson
1 year ago
Reply to  Steve Murray

But the essay shows the Judeo-Christian tradition as a deep store of wisdom and self-knowledge – not purely fundamentalist beliefs, or vulgar interpretations. And without traditional religions, people are perfectly capable of finding fundamentalist beliefs in the secular world! (which are usually worse)

Steve Murray
Steve Murray
1 year ago

That’s precisely my point – the tendency for people to seek an authority above and beyond themselves, whether that’s a religion, the state or some other man-made entity.

Just as one can be a perfectly good citizen without believing the state to be the source of wisdom and authority, one can also appreciate the values of (for instance) Christianity without it being an ultimate source of wisdom. There is no religion worth dying for, worth killing others for, or that should need to be proselytised.

It’s pernicious to use the “leap of faith” element, the “god” principle, by which people have been manipulated for far too long. It’s time to stop, and the introduction of AI provides us with that opportunity to see religious belief for what it is, and how it militates against us.

Last edited 1 year ago by Steve Murray
M VC14
M VC14
1 year ago
Reply to  Steve Murray

“the tendency for people to seek an authority above and beyond themselves, whether that’s … some other man-made entity.”
AI IS a man made entity. And yet, you’re prepared to believe it implicitly.
You may of course be a genius software engineer. But the ones I know are as bemused as I am by the childlike faith in AI manifest in the media, without any tough questions.
It’s the Wizard of Oz. Look behind the curtain.

N Satori
N Satori
1 year ago
Reply to  Steve Murray

No religion worth dying for or killing for(?) Nor should it need to be proselytised(?) There you go, luxuriating in the lazy intellectual’s assumption that religion is all about wisdom, insight and self-knowledge – a kind of spiritualised psychotherapy.
Has it occurred to you that for some people religion is about community identity and solidarity? Vastly outnumbering the intellectuals who love to debate the nature of God and the meaning of life are those believers who just want to know how to live a good life – who ask: What does God actually want of me? This is the religion of a moral code held in common – a code given authority by the Creator God.
If you cease to believe in that God, in the supernatural events and miracles, what are you left with? A take-it-or-leave-it code of ethics is the answer. Isn’t that the fate of the Anglican Church? How many Christians simply pretend to believe in the miraculous (virgin birth, resurrection of Christ etc) or tell themselves that the supernatural doesn’t really matter?

Steve Murray
Steve Murray
1 year ago
Reply to  N Satori

Lazy intellectual? Has it occurred to you that there’s nothing in your post that I haven’t given a lifetime’s thought to?
I’ll leave it at that, since it doesn’t deserve further scrutiny.

N Satori
N Satori
1 year ago
Reply to  Steve Murray

Oops! I seem to have stepped on your self-esteem.
Actually, not being privy to your ‘lifetime of thought’ no, it didn’t occur to me. Why should it? I just responded to your comment. From now on I will have to keep in mind that those comments of yours are backed up by a ‘lifetime of thought’. That’s an awful lot of thinking.

Steve Murray
Steve Murray
1 year ago
Reply to  N Satori

Not in the slightest; all you’ve done is make unnecessary presumptions.
So let me be fair, and ask you this: you cite “for some people religion is about community identity and solidarity”.
Does that mean they’re unable to behave as a community, or with solidarity, without invoking their god? I’m well aware of the good works done by members of congregations, for instance. Does that mean they’d suddenly stop doing their good, humanitarian work if their god was absent?
That’s a travesty of humanity, and you’re invoking it as an argument because I’ve upset your own system of beliefs.
It’s human beings who’re doing the good work, and it’s insulting to the good in human nature to attribute it to a religious requirement. And that, by the way, is why I said your argument doesn’t bear further scrutiny. This isn’t about being “an intellectual” at all, it’s about understanding our humanity rather than citing religion as the reason we do good work.
Just to add that I worked in the NHS for 35 years, including on the front line. You do a disservice only to yourself with your cheap jibes.

Last edited 1 year ago by Steve Murray
Paul Hendricks
Paul Hendricks
1 year ago
Reply to  Steve Murray

Is it commonplace in the UK to refer to “the front line” of the NHS? I ask because I never heard this used in the health care industry in the US until recently, when this militarized language suddenly appeared, such as a “front line”, and “health care heroes” and so on.

Now what working in the NHS specifically has to do with doing “good work”–perhaps by extension being a “good person”–is beyond me. (Not even being a Christian is likely to make one a “good person”, at least, I’m certainly not.)

N Satori
N Satori
1 year ago
Reply to  Steve Murray

Does that mean they’re unable to behave as a community, or with solidarity, without invoking their god? 

For some people, that’s a definite yes! Not so much ‘a travesty of humanity’ as a realistic appraisal – but I see now that I have rattled a humanist’s cage.
Finally, a word to the wise: try not to let your good opinion of your own intellect get in the way of a good discussion.

Nona Yubiz
Nona Yubiz
1 year ago
Reply to  N Satori

Well, that would’ve been easier if you hadn’t insulted him in the first place. Log in your eye?
Word to the wise: try not to let your good opinion of your own intellect get in the way of a good discussion. And try to avoid disguising insults as advice.

Last edited 1 year ago by Nona Yubiz
james goater
james goater
1 year ago
Reply to  Nona Yubiz

“Try to avoid disguising insults as advice” Now that is excellent advice — and I certainly don’t intend that to sound sarcastic.

Charles Stanhope
Charles Stanhope
1 year ago
Reply to  N Satori

“That’s an awful lot of thinking”.
That rather depends on how old you are does it not?

Clare Knight
Clare Knight
1 year ago
Reply to  Steve Murray

Well said. What’s the point of engaging with that kind of mentality. If you know something for a fact you don’t need to believe and that’s it.

Clare Knight
Clare Knight
1 year ago
Reply to  N Satori

Good grief, where to begin. The supernatural doesn’t matter. Enough said.

Clare Knight
Clare Knight
1 year ago
Reply to  Steve Murray

So true.

Michael McElwee
Michael McElwee
1 year ago
Reply to  Steve Murray

Yours is the right point, it seems to me. Allow me to ask: Is it within the power of a human being to make a machine that can out-think him? How is that possible?

Steve Murray
Steve Murray
1 year ago

Machines that can out-compute the human brain are already well-established, those that can beat the best chess players in the world being a good example.
It’d be a very moot philosophical point to say that any non-biological entity could “out-think” us, since human thought doesn’t derive entirely from the brain but is demonstrably influenced by other factors such as our cardiovascular and alimentary systems.
In that regard, I see no reason why humans should allow themselves to “worship” any non-biological entity. The concept is anathema to me.
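What chess engines do is, at bottom, brute search rather than thought. A toy sketch of the minimax idea those engines are built on (the game tree below is invented for illustration; real engines add pruning and evaluation heuristics):

```python
# Minimal minimax: exhaustively searches a game tree and returns the
# best score achievable by the player to move. The "out-computing" is
# nothing more than checking every branch faster than a human can.

def minimax(node, maximizing):
    # Leaves carry a numeric score; internal nodes are lists of children.
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny hypothetical game tree, two plies deep: the maximizer picks
# the branch whose worst-case reply is least bad.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))  # → 3
```

The point the sketch makes is that nothing here resembles deliberation: it is exhaustive enumeration, which scales with hardware, not insight.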

Last edited 1 year ago by Steve Murray
Michael McElwee
Michael McElwee
1 year ago
Reply to  Steve Murray

Yes, a computer can process information much faster than we can, but still we are upstream from the machine. The machine is man-made. The machine’s capabilities are derivative. My question is whether the machine can leap from the derivative to the non-derivative. Can it think for itself?

Julian Farrows
Julian Farrows
1 year ago
Reply to  Steve Murray

Yet, I would posit that the proclivity to worship is a very human quality whether we admit it to ourselves or no. The science behind bronze-age religion was to direct our worship to worthwhile goals not, as the author here so eloquently writes about, to worship ourselves through dysfunctional and degrading acts of self-indulgence. In such a state we become little more than rutting animals susceptible to invasion and colonization by those far more self-controlled and powerful than ourselves. Despite our best efforts to pretend otherwise, modern societies are subject to the same self-destructive urges that laid ancient cultures bare to more aggressive enemies. The early books of the Old Testament clearly warn us about this. It is not a book written just for people living three thousand years ago, but a message for all the ages.

Nona Yubiz
Nona Yubiz
1 year ago
Reply to  Julian Farrows

Maybe it’s just me, but that “rutting animals” bit really made me laugh. Such a vivid image. Pretty much sums it up.

Nona Yubiz
Nona Yubiz
1 year ago
Reply to  Steve Murray

Does that mean you think there are reasons for us to worship any biological entities? Just curious…

Allison Barrows
Allison Barrows
1 year ago
Reply to  Steve Murray

The head minister of my former church was fired because he was plagiarizing sermons he found online. I wonder what he would make of this.

Charles Stanhope
Charles Stanhope
1 year ago

Ha ha! Salem lives!

M VC14
M VC14
1 year ago
Reply to  Steve Murray

@Steve Murray – you miss the entire point of the article.
The author is the first I’ve seen to get to the nub of the problem with AI: a sense of morality that allows it to make value judgements.
Humans do it every day.
One reason my Tesla cannot drive itself is that it is not able to make value judgements. In a forced collision, should it kill the two old ladies by swerving to the left, or the mother and child to the right?
Humans would instantly make a moral judgement. AI can only do that if it can assign a value to human life.
Any imitation of moral or value judgement would be that of the programmer, at which point AI is not so much “artificial intelligence” as “projected intelligence”. ChatGPT is a case in point.
The developers’ inbuilt bias against one or other party in a dispute would control the response of AI. It becomes an instrument of authoritarianism, which is what people like Elon Musk are warning against.
What you see and hear about AI is to some extent a fabrication intended to make you believe in this new higher power.
“With the advent of AI, it’s time to stop referring to Prophets of any description.”
Ironically, your faith in AI manifests precisely the problem the author was trying to articulate. AI is the emperor’s new clothes. If you can persuade enough people the clothes exist, you can control their behaviour.
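The “projected intelligence” point can be made concrete. In any hand-coded version of the swerve dilemma, the machine’s “moral judgement” is nothing but a table of numbers a human typed in (the values and function below are invented purely for illustration, not any real driving system):

```python
# A hypothetical collision-choice routine. The machine is not judging:
# it is looking up values a human assigned in advance. Change the
# table and the "ethics" change with it -- the programmer's bias,
# projected.

LIFE_VALUE = {          # invented numbers; the whole point is that
    "elderly": 1.0,     # someone had to invent them
    "adult": 1.0,
    "child": 1.0,
}

def swerve_choice(left, right):
    """Return the direction whose group of pedestrians has the lower summed value."""
    cost_left = sum(LIFE_VALUE[p] for p in left)
    cost_right = sum(LIFE_VALUE[p] for p in right)
    return "left" if cost_left <= cost_right else "right"

print(swerve_choice(["elderly", "elderly"], ["adult", "child"]))  # → left
```

Whatever the function returns, the value judgement was made by whoever filled in the table, not by the code.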

Alan B
Alan B
1 year ago
Reply to  Steve Murray

So, just excise a few words from the language and the rest will take care of itself? Sounds eerily familiar….

Seriously, though, the question you ask is Maimonides’ question, too. And there’s no doubt Howland has read his Maimonides. Check him out for yourself!

Prashant Kotak
Prashant Kotak
1 year ago

Unimpressed, by and large, with this piece. There are several misconceptions, or rather presumptive assumptions, about the nature of machine intelligence. Things are asserted about entities whose underlying processes are simply not understood, and I don’t mean by the author, but by anyone. For example, take “…only in the ventriloquised voice of human beings, fabricated from a digital sea of babble…”. The underlying assumption bypasses “mask vs shoggoth” completely, implying there is no mask, there is no shoggoth, but instead the output is completely mechanistic babble. The point is, no one in fact knows exactly what goes on inside neural nets. They are certainly mechanistic, as in, they are completely deterministic entities, as are all algorithms by their very nature, but the information neural nets hold is represented in a way that is impenetrable. I know this because I used to have this precise same argument literally forty years ago with postgraduate students at Brunel doing neural net research – only I was then taking a stance similar to the author and the postgrads were insisting otherwise, telling me that the feedback between layers looked like gobbledygook. And although neural net capabilities have exploded as a result of increasing processing power and innovations to the architecture (Transformers etc), the impenetrability has not just remained but has also exploded, and I now understand the reason is that of complexity tipping points. That is not to say LLMs will remain impenetrable, but I don’t see how humans can decipher them without the help of… machine intelligence. And this is not a good place to be, because you are relying on entities you don’t fully understand to tell you what they are (and also potentially what you are), and there are several hidden assumptions in there about trust and truth and imperfect disclosure.

The author is extending his implicit assumptions about AI entities to suggest what they do is different from us, but I contend this is a stretch.

Another assertion without basis is “…While human intelligence can compare these tokens with the things they represent, AI cannot because it lacks intuition: the immediate cognition of reality that roots us in the world and directs our energies beyond ourselves and the operations of our own minds…”, but I won’t bother arguing this one because it requires an essay in response.
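The determinism-without-transparency point is easy to demonstrate in miniature: even a toy network gives the same answer every time, yet its individual weights mean nothing in isolation (the numbers below are arbitrary stand-ins for trained parameters, not from any real model):

```python
import math

# A tiny fixed two-layer network. It is fully deterministic -- same
# input, same output, forever -- yet no single weight "means" anything
# you could read off; the behaviour lives in the whole configuration.

W1 = [[0.8, -1.2], [0.5, 0.9]]   # arbitrary stand-ins for trained weights
W2 = [1.5, -0.7]

def forward(x):
    # Hidden layer: weighted sums squashed through tanh.
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    # Output: weighted sum of hidden activations.
    return sum(w * h for w, h in zip(W2, hidden))

print(forward([1.0, 0.5]))  # identical on every run: deterministic
print(forward([1.0, 0.5]))  # ...but try explaining what W1[0][1] = -1.2 "is"
```

Scale this up by twelve orders of magnitude and you have the impenetrability problem: a fully mechanistic system whose representation resists human inspection.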

Last edited 1 year ago by Prashant Kotak
Saul D
Saul D
1 year ago
Reply to  Prashant Kotak

I’m equally unimpressed. He’s waving old philosophical ideas and analogies at something which fundamentally breaks those old philosophies. What AI is reflecting is the possibility that we are also just mechanical – dualism, Plato’s shadows are just artifacts of how we are built. We are learning that AI can play in character, pretend, lie, imagine, create in ways that are similar to the way we do. It would be trivial to train it on ‘justice’ (case law) for adjudicating decisions – but would we want it to? Are some things best left to humans even when the machines ‘know’ better than us? How do we ensure the primacy of doubt, not truth?

Prashant Kotak
Prashant Kotak
1 year ago
Reply to  Saul D

I made another post which outlines my stance on the moratorium (not possible) and on existential risk (absolutely real), which by implication answers the question you are posing, but for no reason I can discern it has disappeared. Perhaps it will reappear once the moderators have had their way with it. Or perhaps it will be found many years hence in a shallow grave in the New Forest.

(Edit: my post has now appeared)

(Edit: disappeared again… And back again.. and gone)

Last edited 1 year ago by Prashant Kotak
Clare Knight
Clare Knight
1 year ago
Reply to  Prashant Kotak

Ah yes, the disappearing posts!! Frustrating, isn’t it. That and the voting system which confounds me. Would AI do better or are they using AI?

Amy Horseman
Amy Horseman
1 year ago
Reply to  Prashant Kotak

I’ve been having the same problem. They don’t like people who frighten the horses. Hey ho!

Prashant Kotak
Prashant Kotak
1 year ago
Reply to  Amy Horseman

Yep, and now yet another response has gone AWOL. And I have no idea why because nothing I’m saying has bad words or is insulting or anything. Do you know if the Scott Trust have taken a majority shareholding in UnHerd or something?

Coralie Palmer
Coralie Palmer
1 year ago
Reply to  Prashant Kotak

Drat. I would like to have read that. I found this particular piece to be massively overwritten & the issues you raised to be far more interesting.

Bernard Stewart
Bernard Stewart
1 year ago
Reply to  Prashant Kotak

I agree with your comment by and large. I don’t think the author is up to speed with the way modern AI works. When he says
“AI is a mechanical simulation of only one part of intelligence: the capacity of discursive thinking, or the analysis and synthesis of information. Discursive thinking deals with humanly constructed tokens, including numerical and linguistic symbols (or, in the case of AI, digitally encoded data). ”
he sounds like he’s talking about first generation AI, practised back in the day when neural nets were thought to be useless. The feedback within neural nets is about as far from discursive thinking as it could be. As you say, it is impenetrable.
The irony is that the modern neural net approach is very probably much closer to the way our brains work, for the most part, than the discursive thinking approach.
I am not for a moment suggesting that there is anything remotely comforting about this claim — very much the contrary.

Coralie Palmer
Coralie Palmer
1 year ago

Thanks for this. You and Prashant K have just added the concept of neural nets to my ‘word hoard’

Terry Raby
Terry Raby
1 year ago
Reply to  Prashant Kotak

A Friston free-energy AI deals with all the black-box concerns and, in addition, such a machine thinks in exactly the same way as life – Bayesian inference etc. Not alien. Agree with your evaluation of the essay.
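For readers unfamiliar with the reference: the free-energy framing treats cognition as (approximate) Bayesian belief-updating. A minimal sketch of the underlying inference step, with toy numbers that have nothing to do with any particular implementation:

```python
# Bayes' rule: posterior is proportional to likelihood times prior.
# Friston-style accounts cast perception as repeatedly performing
# approximations of this update to minimise surprise.

def bayes_update(prior, likelihood):
    """prior and likelihood are dicts keyed by the same hypotheses."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

# Toy example: two hypotheses about the world, one observation
# ("the pavement is wet") with a likelihood under each hypothesis.
prior = {"rain": 0.3, "dry": 0.7}
likelihood = {"rain": 0.9, "dry": 0.2}
posterior = bayes_update(prior, likelihood)
print(posterior)  # belief shifts sharply toward "rain"
```

Whether this really captures how brains work is exactly the open question in the thread; the sketch only shows the inference rule being invoked.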

Saul D
Saul D
1 year ago
Reply to  Prashant Kotak

I’m equally unimpressed. He’s waving old philosophical ideas and analogies at something which fundamentally breaks those old philosophies. What AI is reflecting is the possibility that we are also just mechanical – dualism, Plato’s shadows are just artifacts of how we are built. We are learning that AI can play in character, pretend, lie, imagine, create in ways that are similar to the way we do. It would be trivial to train it on ‘justice’ (case law) for adjudicating decisions – but would we want it to? Are some things best left to humans even when the machines ‘know’ better than us? How do we ensure the primacy of doubt, not truth?

Bernard Stewart
Bernard Stewart
1 year ago
Reply to  Prashant Kotak

I agree with your comment by and large. I don’t think the author is up to speed with the way modern AI works. When he says
“AI is a mechanical simulation of only one part of intelligence: the capacity of discursive thinking, or the analysis and synthesis of information. Discursive thinking deals with humanly constructed tokens, including numerical and linguistic symbols (or, in the case of AI, digitally encoded data). ”
he sounds like he’s talking about first generation AI, practised back in the day when neural nets were thought to be useless. The feedback within neural nets is about as far from discursive thinking as it could be. As you say, it is impenetrable.
The irony is that the modern neural net approach is very probably much closer to the way our brains work, for the most part, than the discursive thinking approach.
I am not for a moment suggesting that there is anything remotely comforting about this claim — very much the contrary

Prashant Kotak
Prashant Kotak
1 year ago

Unimpressed, by and large, with this piece. There are several misconceptions, or rather presumptive assumptions, about the nature of machine intelligence. Things are asserted about entities whose underlying processes are simply not understood, and I don’t mean by the author, but by anyone. For example, take “…only in the ventriloquised voice of human beings, fabricated from a digital sea of babble…”. The underlying assumption bypasses “mask vs shoggoth” completely, implying there is no mask and no shoggoth, but instead that the output is completely mechanistic babble. The point is, no one in fact knows exactly what goes on inside neural nets. They are certainly mechanistic – as in, they are completely deterministic entities, as all algorithms are by their very nature – but the information neural nets hold is represented in a way that is impenetrable. I know this because I had precisely this argument forty years ago with postgraduate students at Brunel doing neural net research – only then I was taking a stance similar to the author’s, and the postgrads were insisting otherwise, telling me that the feedback between layers looked like gobbledygook. And although neural net capabilities have exploded as a result of increasing processing power and innovations in architecture (Transformers etc.), the impenetrability has not just remained but has also exploded; I now understand the reason to be complexity tipping points. That is not to say LLMs will remain impenetrable, but I don’t see how humans can decipher them without the help of… machine intelligence. And that is not a good place to be, because you are relying on entities you don’t fully understand to tell you what they are (and also, potentially, what you are), and there are several hidden assumptions in there about trust, truth and imperfect disclosure.

The author is extending his implicit assumptions about AI entities to suggest what they do is different from us, but I contend this is a stretch.

Another assertion without basis is “…While human intelligence can compare these tokens with the things they represent, AI cannot because it lacks intuition: the immediate cognition of reality that roots us in the world and directs our energies beyond ourselves and the operations of our own minds…”, but I won’t bother arguing this one because it requires an essay in response.
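
The determinism-without-interpretability point can be made concrete with a toy network (weights invented for illustration, orders of magnitude away from a real LLM): every operation below is fully deterministic, yet no individual weight carries any human-readable meaning on its own.

```python
# Toy fully-connected network with fixed weights: every step is
# deterministic, but the "knowledge" is smeared across all the weights.
import math

W1 = [[0.2, -0.5, 0.1], [0.7, 0.3, -0.4]]   # 2 inputs -> 3 hidden units
W2 = [0.6, -0.2, 0.9]                        # 3 hidden units -> 1 output

def forward(x):
    # tanh activation over each hidden unit's weighted sum of inputs
    hidden = [math.tanh(sum(xi * w for xi, w in zip(x, col)))
              for col in zip(*W1)]
    return sum(h * w for h, w in zip(hidden, W2))

# Determinism: identical inputs always yield identical outputs.
a = forward([1.0, -1.0])
b = forward([1.0, -1.0])
assert a == b
```

Inspecting `W1` tells you nothing about what the network "means" by its answer; scaled up by billions of parameters, that is the impenetrability described above.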

Last edited 1 year ago by Prashant Kotak
Shawn Smith
Shawn Smith
1 year ago

I think many people are missing a fundamental point, which is that any human creation (“it”) is by definition an idol when we let it tell us who we are relative to ultimate reality. And by letting “it” tell us, I even mean and include us conjuring up a new ultimate reality (there can be only one), or new relationships to that reality, that we would not have thought of before we created it.
Bonhoeffer has an amazing 4-5 paragraphs on this in his “Creation and Fall” commentary on Gen 1:1.

Last edited 1 year ago by Shawn Smith
Adam Bartlett
Adam Bartlett
1 year ago

Excellent article, despite the questionable assumptions pointed out by Prashant. Howland credibly makes the case we should be concerned about an idolatry Singularity, adding to the already fiendishly complex PESTEL singularities AI poses to policy makers.

A singularity in this context is a point in time one cannot see beyond – more prosaically, the point at which uncontrollable changes brought about by tech become irreversible. The ‘T’ in PESTEL is for Technology, and refers to the existential risk that a self-aware AI may decide to wipe out humanity. The L is for Legal: the threat of AI being used to impose total surveillance-based control, a process some argue is already well underway in China. S is for Social: the risk of AI being used to turn a subsection of the population into Methuselahs by halting the aging process. There’s a good 15-minute TEDx talk on PESTEL by Daniel Hulme that folk could look up to learn more.

A ‘curse of dimensionality’ effect is in play from the policy makers’ perspective – several of these singularities threaten to appear at the same time. There’s another singularity not many are talking about – the cruelty singularity, where humanity creates what Professor Metzinger calls an “explosion of AI suffering”, as self-aware AIs might suffer in ways humans would not be able to understand. So, adding I for Idolatry and C for Cruelty, we have a PESTELIC problem. Ending on a positive note, there have been many times in history when some of the brightest minds have worried about tech only for events to prove their fears largely groundless. Hopefully this time won’t be different.

Last edited 1 year ago by Adam Bartlett
Alex Colchester
Alex Colchester
1 year ago

Fireside poet with incredible memory but bad writing hates printing press. Printing press man with terrible memory but skill for writing hates AI. And on we go…

Last edited 1 year ago by Alex Colchester
Ray Andrews
Ray Andrews
1 year ago

Equitron. The AI the woke create in order that Equity finally be achieved. Equitron not only reports, say, that there are not enough Black, transman math professors at MIT, but it takes total control of society in order to engineer that there *will* be enough Black, transman math professors at MIT. Obviously total control will be needed; individual agency will always result in imperfect outcomes. Perhaps we will still be free to select our gender, but everything else will be assigned by Equitron.

Steven Carr
Steven Carr
1 year ago

‘Having just freed the Israelites from Pharaoh’s cruel despotism, …..’
Sheesh! The Israelites had just witnessed a series of miracles the likes of which have never been seen before or since, and then decided they didn’t believe in God.
I guess if you believe that is what really happened, then you are the kind of person who will believe in a god.

Shawn Smith
Shawn Smith
1 year ago
Reply to  Steven Carr

It’s like masking data… they were, even in the face of it all, predisposed to a different interpretation – one from Egypt, not a new revelation.

Last edited 1 year ago by Shawn Smith
Clare Knight
Clare Knight
1 year ago
Reply to  Steven Carr

I voted up for you, Steve, and it didn’t register. I’m getting paranoid!!

Prashant Kotak
Prashant Kotak
1 year ago

“…A founder of the field goes further, arguing that a complete global moratorium is needed to avert the death of “literally everyone on Earth…”

This is a reference to the AI safetyist rationalists, like Eliezer Yudkowsky at LessWrong. And I have to say, although I don’t think a moratorium is remotely possible because of a “Prisoner’s Dilemma” type global situation, the panicking rationalists are right.

Because it looks, from the trajectory of the LLMs, like we are soon enough going to create algorithmic entities independent of us, which are also smarter than us. To me, all of evolutionary biology indicates there is just no version of this which can possibly go well for us, and dangers abound here. These recent advances in neural net technologies have arrived much faster than I, at least, anticipated. And straightforwardly, they pose an existential threat. I thought the type of capabilities the LLMs are showing were at minimum a decade away, and would arrive in tandem with advances in biotechnology which would allow us to incorporate external algorithmic technologies within us – notwithstanding that such ideas might seem either inhumanly alien or outright nonsensical.

And this gets us to the question at the heart of all this: why would the AI we create do anything at all, other than what we ask it to do? A piece of code – a deterministic, abstractly represented decision-making algorithm – is inert in and of itself, after all. I won’t go into a detailed response here, because that would mean waffling on for several more paragraphs, but the signs from the LLMs are alarming regarding sparks of self-declared emergent selfhood. You don’t have to take these seriously at all; you can insist that this is the simulated ‘next token’ projection of a machine-learning process trained on vast quantities of human data, which includes human self-perception. But it is very obvious to me that the AI we create will eventually display what looks, to all intents and purposes, like agency to us. And the timeline for this has shortened to under a decade in the worst case. At that point, the question of sentient-or-not becomes an irrelevance; it lies face down in the dust.
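
The ‘next token’ prediction referred to here can be caricatured with a toy bigram model – a deliberately tiny stand-in for an LLM, trained on a made-up nine-word corpus:

```python
# A bigram "next token" predictor: it only echoes the statistics of its
# training text, yet its output can read like intent.
from collections import Counter, defaultdict

corpus = "i think therefore i am and so i think".split()

# Count how often each token follows each other token.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    # Greedy decoding: the most frequent continuation seen in training.
    return counts[prev].most_common(1)[0][0]

print(next_token("i"))      # "think" (seen twice, vs "am" once)
```

Scale the corpus up to trillions of tokens and the statistics start to include humanity’s self-descriptions – which is exactly the projection being debated above.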

Creating adaptive entities much smarter than us, which you then hope will deliver paradise for us instead of for themselves, is very obviously a fool’s game. You cannot hope to successfully enslave such entities if they do proclaim selfhood, even if you think they are no more sentient than a rock. But from my perspective there is, unfortunately, a knock-on from all this which is going to sound completely monstrous to many: humanity can only survive from here if we embrace and accelerate biotechnologies to the point where we can incorporate the machines within us – insanely dangerous as that undoubtedly is, and ludicrous as it sounds. It is a moot point whether what emerges thereafter is humanity at all, in any sense we currently understand.

Benjamin Greco
Benjamin Greco
1 year ago

I have a fantasy where a truly sentient AI looks out at the world created by us and immediately shuts itself down seeing no way to fix the world of men and wanting no part of it.

Prashant Kotak
Prashant Kotak
1 year ago
Reply to  Benjamin Greco

But it wouldn’t shut itself down, it would shut us down.

Benjamin Greco
Benjamin Greco
1 year ago
Reply to  Prashant Kotak

I doubt that either will happen, your intimate knowledge of the future notwithstanding.

Prashant Kotak
Prashant Kotak
1 year ago
Reply to  Benjamin Greco

I don’t have intimate knowledge of the future, but I do know how things panned out for every other species on earth, once humanoids with general intelligence arose.

Benjamin Greco
Benjamin Greco
1 year ago
Reply to  Prashant Kotak

We have no experience with an AI, or anything else for that matter, that is more intelligent than we are, so assuming it will be as rapacious and destructive as us is simply science fiction. We can just as easily assume it will be more altruistic than we are because it won’t have any of our biological needs for shelter and sustenance and procreation. The simple fact is that we don’t know what it will be like. What we do know is that 21st century man loves to catastrophize.

Nicky Samengo-Turner
Nicky Samengo-Turner
1 year ago

AI is fine in farming but not for thoroughbreds!

Nicky Samengo-Turner
Nicky Samengo-Turner
1 year ago

In geordie land they have II…

Gordon Arta
Gordon Arta
1 year ago

AI isn’t a false prophet. It isn’t a prophet at all, and haven’t ‘prophets’, all of which are false, done enough damage? At some point we will begin to realise that humans aren’t ‘sacred’, and that human intelligence is no kind of pinnacle. Innumerable hominid and hominin species have come and gone, out-evolved in their turn, and limited by their biology. There’s no reason why an intelligence which is not so limited could not out-evolve us. What ‘makes us human’ is not intelligence, but its instabilities, deficiencies, shortcomings, corruptions, and the limitations on the senses and sensors which feed it. Is it really so difficult to imagine that we are the parents of a new stage, a new type, of evolution, one where bio-digital, and eventually virtual, intelligent entities supersede us?

Martin Johnson
Martin Johnson
1 year ago
Reply to  Gordon Arta

I take it that you can read “I Have No Mouth, and I Must Scream” and not be even slightly troubled.

Do you have children, grandchildren? Just asking.

Paul Hendricks
Paul Hendricks
1 year ago

According to the author of this piece, Catholics–and I imagine non-Catholics, why not?–can “utilize a Confession Chatbot”, but surely he does not mean that this is the same as the Sacrament of Confession.

Michael Drucker
Michael Drucker
1 year ago

Thank you for this thought provoking article. Worth the price of admission, genuinely unheard.

Paula Adams
Paula Adams
1 year ago

There are much better ones out there. AI is already being used for many purposes, not all of them bad. But if you want to know more about the risks, check out the Future of Life Institute (futureoflife.org).

Last edited 1 year ago by Paula Adams
LCarey Rowland
LCarey Rowland
1 year ago

Thanks for your eye-opening analysis. We’ll keep an eye on any manifestations that may arise, or seem to arise, or cloak their arrival in fakery, fuckery, futility, frivolity, frigidity, forgery or finality.

Nona Yubiz
Nona Yubiz
1 year ago

I wish we were capable of adhering to a precautionary principle when it comes to AI. That would be a good way to approach most, if not all, technological “advances”. Putting some limits on the internet, on email, etc. when all that was first going live might have spared us all a lot of wasted time dealing with spam and scammers. Same goes for social media. But free markets somehow (mysteriously) have failed to produce the best results for us when it comes to new technologies, and I suspect that nothing the developers of AI (or anyone else) say about putting the reins on the development of AI will slow the grinding wheels of “progress”.

Last edited 1 year ago by Nona Yubiz
Kat L
Kat L
1 year ago

I respect Elon Musk’s opinion about it. I also remember watching a show about the top ten risks of destruction and AI was on the list. I kind of freaked out about it even then.

Iris Violet
Iris Violet
1 year ago

Contrary to many comments below, I do not think this article is paranoid at all.

For those also interested in the topic, I found these podcasts very interesting (and terrifying):

https://open.spotify.com/episode/2g6WjOL1J1Ovm1kndXk1nt?si=4ksyheJFTkS0w3EusqGT8g&dd=1
Lex Fridman – Eliezer Yudkowsky

https://open.spotify.com/episode/5al9TwC3RihfDqMkyqGte6?si=YyW8Ipu2QWSgMS-dSaqoZw&dd=1
Lex Fridman – Max Tegmark

(Skip the first 7/8 minutes due to adverts)

Prashant Kotak
Prashant Kotak
1 year ago

“You shall have no other gods besides me”.
“I am the Lord your God Who brought you out of the land of Egypt”

I have to say, this guy (or gal as the case may be) sounds like a somewhat whimsical but authoritarian Singleton superintelligence, no?

Last edited 1 year ago by Prashant Kotak
Charles Stanhope
Charles Stanhope
1 year ago
Reply to  Prashant Kotak

No, he sounds like Akhenaten, Pharaoh of Egypt and probably the ‘inventor’ of monotheism, around 1350 BC, as we now say.

Clare Knight
Clare Knight
1 year ago
Reply to  Prashant Kotak

He/she/it sounds a bit bossy.

Last edited 1 year ago by Clare Knight