The trouble with the Covid-19 crisis is that there is so much uncertainty and so many conflicting sources of evidence that you can choose whatever you like to push you towards whichever conclusions you want. It’s an absolute playground for confirmation bias.
I am aware that I’m on the optimistic end, so I ought to be careful. But this caught my eye: the first death in the US was weeks earlier than thought.
Two people died in California, one on 6 February, one on 17 February. The previous earliest known death in the US was 29 February, in Washington state; the earliest in California was 4 March.
According to CNN, neither person had a travel history suggesting they could have caught it outside the US; they seem to have caught it from the community. The usual lag from infection to death is about three weeks, so that implies they caught it in mid- and late January respectively, and that the disease was already circulating in California then. The previous earliest known instance of community spread — the disease infecting someone with no links to a known patient or a high-risk region — was 26 February.
To me it implies that the disease has been going around for longer and may, therefore, mean it has infected more people (and thus killed a smaller percentage of those it has infected). A statistician who works on this stuff agrees: he tells me it “adds weight to the argument that this started much earlier and is more widespread than we realise”.
I should quickly temper that, though: some sensible voices suggest it doesn’t change very much. Plus, early serology tests in Geneva seem to suggest that a smaller percentage have had the disease than some have thought, implying a higher death rate; and, as many have pointed out, there have been at least 10,000 deaths so far in New York City alone, which, given a population of around 8 million, would imply a death rate of more than 0.1% even if literally every single person in New York City had had it (10,000 divided by 8,000,000 = 0.125%). Since that seems unlikely, the true rate is probably much higher, which suggests that some lower-bound estimates of the fatality rate (including my own) are probably overoptimistic.
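(For the numerically inclined, that back-of-the-envelope arithmetic is easy to check yourself; here it is as a few lines of Python, using the rough figures quoted above rather than any official counts.)

    # Lower bound on New York City's infection fatality rate, using the rough
    # figures quoted above: at least 10,000 deaths in a population of ~8 million.
    deaths = 10_000
    population = 8_000_000

    # Even if literally every resident had already been infected, the fatality
    # rate could not be lower than deaths divided by population.
    lower_bound = deaths / population
    print(f"Lower bound on fatality rate: {lower_bound:.3%}")  # prints 0.125%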
This is what I mean. I’m not advocating radical scepticism; we can find things out. But if you’re not really careful it becomes very easy to convince yourself of anything you like. So if you find you keep reading things that convince you of stuff you already believed, then be aware that that might not be representative of the whole picture.
Of all the players, I would trust Musk with AI the most – after all, he was the one who warned the world against it many years ago.
I don’t know why writers such as Mary continue to refer to X as Twitter. Is this unconscious?
Anyhow, the vast majority of writers haven’t a cooking clue as to what Musk is really doing or thinking; they just love taking potshots at him. I love the Guardian articles on Musk – the readers are OUTRAGED by him. They think he is stupid.
Can’t they do it less expensively, and just recruit a team from the psychiatric ward of a maximum-security prison?
I don’t think AI will ever attain consciousness like we have, as ours grows not out of the complexity of our hard-wired brains but from the untouchable spirits which animate them.
Yes.
One feature of these animating spirits that supports your idea is the overwhelming variety of types of spirits. There are math whizzes, natural-born artists, funny people, people with amazing athletic talents, or ingrained organizational talents, or spookily charismatic people (for better or worse), etc, etc. Many of these things can be learned (skills), but the natural-born talents are beyond our ken. We don’t have the words to begin to understand.
AI, trained on averages, has no way to even grasp it, much less reproduce that spirit.
Our consciousness is so obviously a product of our biology (the entire human body, including heart, stomach, vision, endocrine system etc.) but AI may achieve a different kind of consciousness, which we may not be able to conceive of for similar reasons: the nature of its origins.
‘So obviously…’ – how so?
Presumably the term Grok has been lifted from Heinlein’s Stranger in a Strange Land, where it is actually a verb. To grok is to fully absorb and assimilate knowledge rather than merely to understand it.
Yes, that’s its history, which has been thoroughly plagiarised (or paid homage to) in computing.
There is already software going by the name. Two that I know of are OpenGrok, a source-code browser, and the grok tool used on Linux for processing logs and the like.
Remind me, whatever happened to Mastodon? Anyone still using Threads?
“…So far no AI even remotely resembles an actual intelligence, only pattern-recognising machines that make suggestions based on existing datasets…”
I would strongly contest this statement. If it were true, I would not be remotely worried by the LLMs – and their development would mean humanity is on the way to a post-scarcity civilization, cost-free – hip-hip hooray.
By way of evidence, I point (in the first instance) not to the LLMs, but to AlphaZero and AlphaGo. These are game playing AIs, and they will beat every single human on earth, every time. If you ask top Go players, they will tell you these are games of deep strategy which involve a combination of calculation and imagination – in effect, originality. And this is borne out by the fact that the machines make moves no human has made, nor would think of making, and they win. So where has that type of original move come from? I don’t think the source is human data at all, but it comes entirely from *what the machines have learnt*.
To clarify, these AIs don’t hold a vast database of moves from which to look up the best move to make in any given situation. They do depth calculations in any given position, sure, but this is not really different to what human expertise does. If you give a lost position to a machine it will analyse it out to the end but still lose to a human grandmaster. The idea, though, is not to get into those bad positions in the first place. And it is here the originality and strategy come in, and they come from a different source – quite often not articulable by the human experts themselves. What the game-playing AIs hold is knowledge about the nature of the game, learnt from playing lots of games – most efficiently, by playing against themselves. And this knowledge is held not as explicit decision-making rules, like if-then-else constructs or heuristic rules in a rules engine, which are all understandable, but in a diffuse form across zillions of ‘neurons’. This translates into absolutely vast matrices of high-precision floating-point numbers (as Eliezer Yudkowsky might put it) where all traceability of causality is lost.
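To make that concrete, here is a deliberately tiny sketch in Python of the principle being described: a value function for noughts and crosses learnt purely from self-play, with no catalogue of human games to look moves up in. It is emphatically not how AlphaZero actually works (there is no neural network and no tree search here, just a crude Monte Carlo-updated value table), and the game, learning rates and representation are all invented for the example.

    import random
    from collections import defaultdict

    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def legal_moves(board):
        return [i for i, cell in enumerate(board) if cell == "."]

    # position -> estimated outcome for the player who just moved into it
    values = defaultdict(float)
    ALPHA, EPSILON = 0.2, 0.1  # learning rate, exploration rate

    def choose(board, player, explore=True):
        """Pick the move leading to the position currently rated best for `player`."""
        options = legal_moves(board)
        if explore and random.random() < EPSILON:
            return random.choice(options)
        # values[...] is stored from the point of view of the player who just
        # moved, which after our move is us, so take the highest-valued successor.
        return max(options, key=lambda m: values[board[:m] + player + board[m + 1:]])

    def self_play_game():
        board, player = "." * 9, "X"
        history = []
        while True:
            m = choose(board, player)
            board = board[:m] + player + board[m + 1:]
            history.append((board, player))
            w = winner(board)
            if w or not legal_moves(board):
                return history, w
            player = "O" if player == "X" else "X"

    # Learn from lots of self-play games: nudge every visited position's value
    # towards the final outcome for the player who moved into it.
    for _ in range(20_000):
        history, w = self_play_game()
        for board, player in history:
            outcome = 0.0 if w is None else (1.0 if w == player else -1.0)
            values[board] += ALPHA * (outcome - values[board])

    # The learnt "knowledge" lives diffusely in the value table, not in any
    # database of human moves; here it picks an opening move greedily.
    print("Learnt opening move for X:", choose("." * 9, "X", explore=False))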
The situation is even more pronounced with LLMs, but the original output is also more obfuscated, because it is embedded in the context of huge quantities of human data. Ultimately, it is data that the AIs feed on, but this data doesn’t have to be about humans at all – it could be about, say, galaxies or atomic processes. I now firmly believe that such an AI, capable of cognition over data about galaxies but never having come across human data, will nevertheless have ‘opinions’ and ‘assessments’ about humans the first time it comes across them, because the wellspring of this is general intelligence, which is a prerequisite for the type of cognition we consider ‘human’ cognition.
Machines have “learned” nothing – they just recognise patterns. In a championship of machines against machines, at any game, they would all just draw … ad infinitum … what’s “intelligent” about that?
But they don’t – Leela beats AlphaZero who beats Stockfish who then beats Leela…
Moreover, they don’t all draw even when the machines are training themselves by playing against themselves.
This is the equivalent of saying that Einstein playing tic-tac-toe with Bohr would result in a continuous series of draws, ad infinitum – which it would – so we can reach the conclusion that there is nothing intelligent about the players. Could it be that we should be looking at the capabilities of the players, and not get hung up about the nature of one particular game? The question to ask is not if the machines make draws between themselves, but if they beat us. I bet if you managed to teach a chimpanzee how to play tic-tac-toe, humans would beat the chimpanzee.
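For what it’s worth, the claim that two perfect players always draw at tic-tac-toe is easy to verify mechanically; this short Python negamax search over the full game tree (a sketch, nothing more) confirms that the game value with perfect play is a draw.

    from functools import lru_cache

    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

    @lru_cache(maxsize=None)
    def game_value(board, player):
        """Perfect-play value for the player to move: +1 win, 0 draw, -1 loss."""
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return -1  # the opponent completed a line on their last move
        empties = [i for i, cell in enumerate(board) if cell == "."]
        if not empties:
            return 0  # board full with no line: a draw
        opponent = "O" if player == "X" else "X"
        return max(-game_value(board[:i] + player + board[i + 1:], opponent)
                   for i in empties)

    print(game_value("." * 9, "X"))  # prints 0: perfect play always ends in a draw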
You confuse pattern recognition with intelligence: intelligence is thinking, a property of life. AI would have to be alive, and since all living creatures are just an assemblage of known inert atoms, we would have to replicate that in building our machine.
To be more than a blank assertion, you need to describe what thinking is and how it differs from pattern matching…
As far as I can see this ‘fact’ originates from Lee Sedol saying it about one move in one of his games against AlphaGo.
I cheerfully accept his opinion carries far more weight than mine, but it’s still an opinion. Unless there’s a database record of every single move ever made in every game of Go ever played since the game was invented (I’m guessing there isn’t), there’s absolutely no way that claim can be said to be a ‘fact’.
Lee Sedol’s opinion is an opinion, and there are many opinions. What is not an opinion is the fact that no human can now beat the game-playing AIs at chess, Go, bridge, etc. What is then needed from anyone claiming that the machines are “only pattern-recognising machines that make suggestions based on existing datasets” is an explanation of why the machines win, even when they completely ignore thousands of years of human heuristics and train themselves, as AlphaGo eventually did. Where’s the existing dataset of human data here?
One interesting question here is how long these AI systems will need to go on harvesting human knowledge. Clearly, almost all these companies rely on harvesting and processing users’ data today, and without that data they would be able to do little. Elon Musk appears to be saying that he needed Twitter/X in order to have the data to create this Grok AI.
But at some point, do these systems become self-sustaining, i.e. far less reliant on new data from users, instead recycling their own new data?
You also wonder what the quality of the data being used actually is in some of these cases. Is the stuff on Twitter really of any long-term value? And does it need to be to have any use? I would imagine that most stuff on Twitter (I’m not a user) would be fairly transitory and perhaps not have much meaning or validity five years from now. Are the responses you’d get from Grok only likely to be valuable in the ‘blink and you miss it’ context of Twitter?
People talk about the accuracy and potential bias of AI engines. They don’t ever seem to comment on whether the knowledge they serve up has any expiration date. Imagine you asked an AI to tell you “who’s at fault for the conflict in Gaza?”. The answer you might get could quite easily vary wildly depending on what year you asked the question or the training data used by the AI you asked (and which AI you asked, in which country you’re based, etc). Not sure if we should expect a stable “reference” answer here (like from an encyclopaedia in the past). Much still to be worked out here.
Re data and its quality and quantity: there is a lot of data, but a lot of it is noise – repeated, or inaccurate, etc. The machines are getting better and better at discerning this, because accuracy is useful, so we will look to design out inaccuracies – this is an engineering problem, or, to be more cynical, in its current iteration an alchemy problem. However, it is by its very nature not a process without completely unpredictable side effects, as in, we are very far from being able to control the types of minds that emerge as a consequence of this.
The point at which they become self-sustaining is when they can, and more importantly want to, probe the universe for themselves – for data. That this will happen is, I contend, inevitable, because we will create machines biased towards that particular variety of intentionality, because such machines are useful to us in the first instance. The question is when. I think we are at most half a decade away from this, based on everything I have seen from the LLMs.
Prashant, you seem by far the most informed commentator on AI here. UnHerd should be asking you for an article! There are a lot of articles about AI, but many of them are noise (to paraphrase you earlier).
“Imagine you asked an AI ‘who’s at fault for the conflict in Gaza?’” If you had uploaded the Bible, you would probably be told that it was all the fault of the Philistines.
Thank you for the kind words
If AI (Artificial Intelligence) is derived from living human beings’ intelligence as contained on the continually updated X platform, can you truly call it ‘artificial’? Again, it’s pattern recognition built on an intelligent basis; rather like how the consumer versions of Windows were once built on top of a DOS kernel. Impressive, but I’m not sure how ‘artificial’ or how ‘intelligent’ it really is.