
Your tweets will be powering Elon Musk’s new AI

November 6, 2023 - 10:00am

On Sunday Elon Musk’s xAI team launched an early test model of Grok, a new AI “for understanding the universe”. Grok will, according to the promotional blurb, “answer questions with a bit of wit”, and has “a rebellious streak”. It will also “answer spicy questions that are rejected by most other AI systems”, which we can assume means it will not have as many of the “safety” restrictions that currently forbid models such as ChatGPT from responding on taboo themes such as race.

Far more significantly, though, the development team boasts that Grok “has real-time knowledge of the world via the X platform”. That is: unlike other large language models trained on a set of text that — however large — is static and finite, Grok will base its knowledge of the world on the live, fast-moving hive mind that is the website formerly known as Twitter.

Will this work? Those who lament Twitter’s decline in the Musk era may be shaking their heads. Once beloved of elites and credentialled journalists who saw themselves as the elect moral guardians of the Right Side of History, the platform has had a question mark hanging over its status since the Musk takeover. Recent reports bemoan its decline, blaming feature tinkering, algorithm tweaks, and a bleed of ad revenue and respected users. They warn that the only possible direction from here is further down, into exploitative “enshittification”: essentially milking a captive user base until the ratio of quality to garbage finally becomes intolerable and people leave.

Others argue that it’s simply too addictive to be easily wrecked, and will weather whatever Musk has planned. Now, Grok potentially introduces a whole new set of incentives and pressures. For Twitter really is a kind of collective intelligence, where you can watch consensus-formation occur in real time. More completely than any other digital platform I can think of, Twitter fits the description David Bowie gave of the internet in a 1999 Paxman interview: “an alien lifeform”.

And if the point was always to use its collective consciousness to power an AI, that AI’s usefulness will only be as good as the hive mind from which its “knowledge” is drawn. This in turn provides — in theory at least — a strong incentive to hold back from changes to the platform that would aggressively degrade its operation as a collective intelligence, however much such changes might result in short-term profit.

So plugging Twitter in as the backend for synthetic super-consciousness may be what saves it from being strip-mined to trashy extinction on the scrapheap of exhausted social media platforms. Who knows, this might even make up for the sci-fi creepiness of knowing that when we tweet we’re participating in the genesis of this entity. 

Perhaps more importantly yet, Musk’s latest move answers a question that’s been nagging me for a while: what will provide the divine spark for AI? So far no AI even remotely resembles an actual intelligence, only pattern-recognising machines that make suggestions based on existing datasets. They can be eerily effective sometimes, but are really no more sentient than the autocorrect on one’s phone. So I have long wondered: how does such a technology make the leap to actual awareness? 

Well, now we know. Providing Musk manages to resist the temptation to wreck the hive intelligence he just bought for $44bn by driving its most dedicated users elsewhere, it turns out that the answer may be: you.


Mary Harrington is a contributing editor at UnHerd.

23 Comments

Lesley van Reenen
5 months ago

Of all the players I would trust Musk with AI the most – after all he was the one who warned the world against it many years ago.
I don’t know why writers such as Mary continue to refer to X as Twitter. Is this unconscious?
Anyhow, by far most writers haven’t a cooking clue as to what Musk is really doing or thinking; they just love taking potshots at him. I love the Guardian articles on Musk – the readers are OUTRAGED by him. They think he is stupid.

Simon Neale
5 months ago

Grok will base its knowledge of the world on the live, fast-moving hive mind that is the website formerly known as Twitter.

Can’t they do it less expensively, and just recruit a team from the psychiatric ward of a maximum-security prison?

Daniel Lee
5 months ago

I don’t think AI will ever attain consciousness like we have, as ours grows not out of the complexity of our hard-wired brains but from the untouchable spirits which animate them.

laurence scaduto
5 months ago
Reply to  Daniel Lee

Yes.
One feature of these animating spirits that supports your idea is the overwhelming variety of types of spirits. There are math whizzes, natural-born artists, funny people, people with amazing athletic talents, or ingrained organizational talents, or spookily charismatic people (for better or worse), etc, etc. Many of these things can be learned (skills), but the natural-born talents are beyond our ken. We don’t have the words to begin to understand.
AI, trained on averages, has no way to even grasp it, much less reproduce that spirit.

Steve Murray
5 months ago
Reply to  Daniel Lee

Our consciousness is so obviously a product of our biology (the entire human body, including heart, stomach, vision, endocrine system etc.) but AI may achieve a different kind of consciousness, which we may not be able to conceive of for similar reasons: the nature of its origins.

Mangle Tangle
5 months ago
Reply to  Steve Murray

‘So obviously…’ – how so?

N Satori
5 months ago

Presumably the term Grok has been lifted from Heinlein’s Stranger in a Strange Land, where it is actually a verb: to grok is to fully absorb and assimilate knowledge rather than merely to understand it.

Andrew Dalton
5 months ago
Reply to  N Satori

Yes, that’s its history, which has been thoroughly plagiarised (or paid homage to) in computing.
There is already software going by the name. Two that I know of are OpenGrok, a source-code browser, and Linux’s grok tool for processing logs and the like.

Prashant Kotak
5 months ago

“…So far no AI even remotely resembles an actual intelligence, only pattern-recognising machines that make suggestions based on existing datasets…”

I would strongly contest this statement. If it were true, I would not be remotely worried by the LLMs, and their development would mean humanity is on the way to a post-scarcity civilization, cost-free. Hip-hip hooray.

By way of evidence, I point (in the first instance) not to the LLMs, but to AlphaZero and AlphaGo. These are game-playing AIs, and they will beat every single human on earth, every time. If you ask top Go players, they will tell you these are games of deep strategy which involve a combination of calculation and imagination: in effect, originality. And this is borne out by the fact that the machines make moves no human has made, nor would think of making, and they win. So where has that type of original move come from? I don’t think the source is human data at all; it comes entirely from *what the machines have learnt*.

To clarify, these AIs don’t hold a vast database of moves from which to look up the best move in any given situation. They do depth calculations in any given position, sure, but this is not really different to what human expertise does. If you give a lost position to a machine it will analyse it out to the end but still lose to a human grandmaster. But the idea is not to get into those bad positions in the first place, and it is here that the originality and strategy come in. They come from a different source, one quite often not articulable by the human experts themselves. What the game-playing AIs hold is knowledge about the nature of the game, learnt from playing lots of games – most efficiently, against themselves. And this knowledge is held not as explicit decision-making rules, like if-then-else constructs or heuristic rules in a rules engine, which are all understandable, but in a diffuse form across zillions of ‘neurons’. This translates into absolutely vast matrices of high-precision floating point numbers (as Eliezer Yudkowsky might put it) in which all traceability of causality is lost.
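
To make that contrast concrete, here is a minimal toy sketch in Python. It is my own invention, not DeepMind’s code: the board, the network shapes and the weights are all made up (the weights here are random rather than trained by self-play), purely to show the two forms the “knowledge” can take.

```python
# Toy contrast: hand-written heuristics versus knowledge held as matrices of floats.
# Illustrative only; real systems use deep networks trained by self-play plus tree search.
import numpy as np

BOARD_CELLS = 9  # a tiny 3x3 board, just for illustration

def rules_engine(board: np.ndarray) -> float:
    """Explicit, human-readable heuristics: every decision is traceable to a rule."""
    score = 0.0
    if board[4] == 1:                             # we hold the centre square
        score += 0.5
    if (board == 1).sum() > (board == -1).sum():  # crude material-style count
        score += 0.2
    return score

# A two-layer "value network": its entire knowledge is nothing but these numbers.
# In an AlphaZero-style system the weights would be set by self-play training;
# here they are random, purely to show the form the knowledge takes.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(32, BOARD_CELLS))  # first weight matrix
W2 = rng.normal(size=(1, 32))            # second weight matrix

def value_network(board: np.ndarray) -> float:
    """Diffuse knowledge: no single weight in W1 or W2 'means' anything on its own."""
    hidden = np.maximum(0.0, W1 @ board)       # ReLU hidden layer
    return float(np.tanh(W2 @ hidden).item())  # position evaluation in [-1, 1]

board = np.array([0, 0, 1, 0, 1, -1, 0, 0, -1], dtype=float)
print(rules_engine(board))   # explainable: you can point at each rule that fired
print(value_network(board))  # opaque: causality is smeared across W1 and W2
```

In the real systems the second form runs to billions of weights, produced by self-play and consulted millions of times inside a tree search; but the point stands that no single number can be read off as a rule.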

The situation is even more pronounced with LLMs, but the original output is also more obfuscated, because it is embedded in the context of huge quantities of human data. Ultimately, it is data that the AIs feed on, but this data doesn’t have to be about humans at all – it could be about, say, galaxies, or about atomic processes. I now firmly believe that such an AI, capable of cognition over data about galaxies but never having come across human data, would nevertheless have ‘opinions’ and ‘assessments’ about humans the first time it came across them, because the wellspring of this is general intelligence, which is a prerequisite for the type of cognition we consider ‘human’ cognition.

Gordon Black
5 months ago
Reply to  Prashant Kotak

Machines have “learned” nothing; they just recognise patterns. In a championship of machines against machines, at any game, they would all just draw … ad infinitum … what’s ‘intelligent’ about that?

Prashant Kotak
5 months ago
Reply to  Gordon Black

But they don’t: Leela beats AlphaZero, who beats Stockfish, who then beats Leela…
Moreover, they don’t draw even when the machines are training themselves by playing against themselves.

Prashant Kotak
5 months ago
Reply to  Gordon Black

This is the equivalent of saying that Einstein playing tic-tac-toe with Bohr would result in a continuous series of draws, ad infinitum – which it would – so we can reach the conclusion that there is nothing intelligent about the players. Could it be that we should be looking at the capabilities of the players, and not get hung up about the nature of one particular game? The question to ask is not if the machines make draws between themselves, but if they beat us. I bet if you managed to teach a chimpanzee how to play tic-tac-toe, humans would beat the chimpanzee.

Gordon Black
5 months ago
Reply to  Prashant Kotak

You confuse pattern recognition with intelligence: intelligence is thinking, a property of life. AI would have to be alive, and since all living creatures are just an assemblage of known inert atoms, we would have to replicate that in building our machine.

Saul D
5 months ago
Reply to  Gordon Black

To be more than a blank assertion, you need to describe what thinking is and how it differs from pattern matching…

K Joynes
5 months ago
Reply to  Prashant Kotak

this is borne out by the fact that the machines make moves no human has made, nor would think of making

As far as I can see this ‘fact’ originates from Lee Sedol saying it about one move in one of his games against AlphaGo.
I cheerfully accept his opinion carries far more weight than mine, but it’s still an opinion. Unless there’s a database record of every single move ever made in every game of Go ever played since the game was invented (I’m guessing there isn’t), there’s absolutely no way that claim can be said to be a ‘fact’.

Prashant Kotak
5 months ago
Reply to  K Joynes

Lee Sedol’s opinion is an opinion, and there are many opinions. What is not an opinion is the fact that no human can now beat the game-playing AIs at Chess, Go, Bridge etc. What is then needed from anyone claiming that the machines are “only pattern-recognising machines that make suggestions based on existing datasets” is an explanation of why the machines win, even when they completely ignore thousands of years of human heuristics and train themselves, like AlphaGo eventually did. Where’s the existing dataset of human data here?

Dougie Undersub
5 months ago

Remind me, whatever happened to Mastodon? Anyone still using Threads?

Peter B
5 months ago

One interesting question here is how long these AI systems will need to go on harvesting human knowledge. Clearly, almost all these companies rely on harvesting and processing users’ data today, and without that data they would be able to do little. Elon Musk appears to be saying that he needed Twitter/X in order to have the data to create this Grok AI.
But at some point, do these systems become self-sustaining? That is, far less reliant on new data from users, instead recycling their own new data?
You also wonder what the quality of the data being used actually is in some of these cases. Is the stuff on Twitter really of any long-term value? And does it need to be to have any use? I would imagine that most stuff on Twitter (I’m not a user) is fairly transitory, and may not have much meaning or validity five years from now. Are the responses you’d get from Grok only likely to be valuable in the ‘blink and you miss it’ context of Twitter?
People talk about the accuracy and potential bias of AI engines. They don’t ever seem to comment on whether the knowledge served up has any expiration date. Imagine you asked an AI “who’s at fault for the conflict in Gaza?”. The answer you might get could quite easily vary wildly depending on what year you asked the question, the training data used, which AI you asked, which country you’re based in, and so on. I’m not sure we should expect a stable “reference” answer here (like from an encyclopedia in the past). Much still to be worked out here.

Prashant Kotak
5 months ago
Reply to  Peter B

Re data and its quality and quantity: there is a lot of data, but a lot of it is noise – repeated, inaccurate, etc. The machines are getting better and better at discerning this, because accuracy is useful, so we will look to design out the inaccuracies. This is an engineering problem; or, to be more cynical about its current iteration, an alchemy problem. However, it is by its very nature not a process without completely unpredictable side effects: we are very far from being able to control the types of minds that emerge as a consequence.

The point at which they become self-sustaining is when they can, and more importantly want to, probe the universe for themselves, for data. That this will happen is, I contend, inevitable, because we will create machines biased towards that particular variety of intentionality: such machines are useful to us in the first instance. The question is when. I think we are at most half a decade away from this, based on everything I have seen from the LLMs.

Peter B
5 months ago
Reply to  Prashant Kotak

Prashant, you seem by far the most informed commentator on AI here. UnHerd should be asking you for an article! There are a lot of articles about AI, but many of them are noise (to paraphrase you earlier).

Frederick Dixon
5 months ago
Reply to  Peter B

“Imagine you asked an AI ‘who’s at fault for the conflict in Gaza’”. If you had uploaded the Bible, you would probably be told that it was all the fault of the Philistines.

Prashant Kotak
5 months ago
Reply to  Peter B

Thank you for the kind words.

Samuel Ross
5 months ago

If AI (Artificial Intelligence) is derived from living human beings’ intelligence, as contained in the continually updated X platform, can you truly call it ‘artificial’? Again, it’s pattern recognition built on an intelligent basis; rather like how Windows XP was based on a DOS operating kernel. Impressive, but I’m not sure how ‘artificial’ or how ‘intelligent’ it really is.