17 Comments
Andrew Dalton
1 year ago

At the risk of doxing myself, I work in the realm of the fourth industrial revolution, although overwhelmingly on the industrial applications of this technology rather than its social and political applications. The consequences of the emergent technologies are enormous, and although I'd rather not put timelines to the development and consequences of deployed technologies, it is clear their impacts will be significant across all domains of life.

AI is not my computing and software specialty, but it is beginning to surprise: both in terms of its development and in how stupid it can be at times (unless it's lulling us into a false sense of security). What bothers me is how innovation typically works, which is often a matter of merging existing technologies to create something new.

It isn't necessarily difficult to predict that advanced machinery/robotics + AI will render vast quantities of jobs redundant. As soon as this is cheaper than the cheapest labour in a particular location, it will become the chosen approach for corporate profiteering, ahead of globalisation. It could also have benefits for supply chains (and therefore environmental impacts) by allowing factories and plant to be near the source of raw materials. Science fiction writers and futurologists have been predicting this for decades.

The use of AI as a social control system is a little less discussed, but not unheard of. It is, however, clearly the innovation that comes from merging social media, public records and AI. Introduce other 4th IR concepts like digital currencies (central bank or otherwise), and it isn't exactly hard to see where social credit systems come into play. Considering the prior point about robots taking people's jobs, such a system becomes essential (at least from a political point of view).

The welfare state, which was vastly expanded after skilled jobs were off-shored, would need to expand again. In a world where the individual citizen/consumer has become decoupled from production, how exactly will their spending power be governed? As such, I see a certain inevitability here: the consequences of automating jobs away will demand this form of response.

Yes, my outlook is dystopian; Huxley would be proud. The more "utopian" visions for an automated society, such as Jacque Fresco's, remove too much power from government, supranational and corporate interests, and I see no evidence that those groups would ever surrender control. We will once again see a merger of corporate and political interests: one of Tony Blair's favourite things.

Norman Powers
1 year ago

Yeah, nice essay; too bad it's totally wrong. I've got nothing against journalists and writers opining on technical subjects, but they should at least try to find someone with knowledge to check their thesis before they embark on writing it. And no, Kai-Fu Lee doesn't count, as his agenda these days is to make China look powerful first and be correct second.
To train LLMs like ChatGPT or "MossBot" you need lots of data, but it's the sort of data you get from the public internet, possibly including books/magazines/newspapers/etc. It isn't surveillance data of the sort this article is talking about. You've got a billion iris images in a database? Good for you: that'll be super helpful if you want to train an AI that can generate perfect-looking random iris images, and not much else. You've got a billion swabs? Great, now you can generate fake swabs.
Getting the picture here? The reason you need a lot of data scraped from the internet to train a modern AI is because you want that AI to generate the sorts of things you find on the internet: answers to questions, news articles, photos, poetry, code.
So data isn't the new oil; it's not anything even close to oil. Oil is fungible: one barrel is much like another. Data is not, which is why forced analogies like "the Saudi Arabia of data" are the mark of punditry, not expertise (not that you need much to know this stuff!).
Could the British government train a giant LLM if it wanted to? Uh, yes? DeepMind is based in London and has done exactly that; it's just a matter of offering those people twice what they currently get paid and then giving them the time and money they need to collect lots of web crawls, book scans and so on, plus a few tens of millions of dollars' worth of hardware from NVIDIA. But why would it? Maybe people like Blair think there's something strategic about all this, but there isn't. What does he even mean by an "AI Framework"? A British TensorFlow? Surely not.
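[Editor's note: the point that only sequence data like text is useful for training a language model can be made concrete with a deliberately toy sketch. This character-level bigram model is nothing close to a real LLM, but it shows why the training signal has to be text: the model only learns "what tends to come next", so a pile of iris scans teaches it nothing about language.]

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each character, how often each character follows it."""
    model = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        model[a][b] += 1
    return model

def predict_next(model: dict, ch: str) -> str:
    """Return the most frequent successor of ch seen during training."""
    return model[ch].most_common(1)[0][0]

# Text data teaches the model something about text:
model = train_bigram("the quick brown fox jumps over the lazy dog. the end.")
print(predict_next(model, "t"))  # 'h' — every 't' in this corpus is followed by 'h'
```

Real LLMs do next-token prediction over trillions of tokens with neural networks rather than counting tables, but the shape of the objective is the same, which is why the useful corpus is web text, books and code rather than biometric databases.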

A frequent refrain in the tech community is that “AI is communist”.

I'm a member of the tech community, read AI research papers regularly, take part in AI discussions regularly, and have never heard anyone say this. What would that even mean? AI development is the opposite of communist: the most advanced AIs are all being trained by private corporations, and the most talked-about AI right now (ChatGPT) is by a well-funded startup. There's nothing I can think of that's less communist than a startup.

Last edited 1 year ago by Norman Powers
Robbie K
1 year ago
Reply to  Norman Powers

Yeah, great observation; I was thinking the same thing as I was reading the article. The Chinese data referred to is constrained, controlled and influenced by policy. MossBot probably gives great answers to questions posed by the CCP.

Peter Kwasi-Modo
1 year ago
Reply to  Norman Powers

I totally concur with the points you make, but I would not be quite so dismissive of surveillance data. On its own, surveillance data such as iris recognition may be of limited value beyond those directly involved in surveillance, but when combined with other data about the individuals, its potential uses (and of course abuses) are much greater in scope.

Norman Powers
1 year ago

Sure, but then you’re using AI to process surveillance data, not using that data to create AI. LLMs and other modern AI tech can certainly be used to build a dystopia, no doubt about it, but China doesn’t seem to have any particular advantage in building such tech beyond a desire to do so and lots of smart tech-savvy citizens.

Peter Kwasi-Modo
1 year ago
Reply to  Norman Powers

Fair point!

Andrew Dalton
1 year ago
Reply to  Norman Powers

Maybe people like Blair think there’s something strategic about all this but there isn’t

Blair was a big proponent of the knowledge-based economy when he was PM. He wasn't quite so big on explaining what it was, though.

Peter Kwasi-Modo
1 year ago

Feminists ridicule the verbiage of us males as "mansplaining". I wonder how long it will be before "botsplaining" enters the vocabulary. Sometimes ChatGPT raises its hands in the air and admits it's just a piece of software, but more often its responses are a mixture of fact and bullshit. As with any interlocutor, human or otherwise, who breezily answers a question when they don't really know what they're talking about, one quickly learns to ignore them.
So I instructed ChatGPT to "Explain why President Xi's Zero Covid Policy was such a monumental disaster." In about two seconds I got a very good, balanced response with four headings: Economics, Human Rights, Long-term Harm and Lack of Transparency. Great! What do you suppose Moss would do if I asked it for the same information? So the two tools have distinct purposes. And God help us if a latter-day Dominic Cummings bases policy on the strength of a chat with a bot.
An ex-colleague, a professor of Artificial Intelligence, once told me that researchers only call it artificial intelligence while we don't really understand why it works. As soon as we DO understand why it works, we start calling it software.

Last edited 1 year ago by Peter Kwasi-Modo
Andrew Dalton
1 year ago

When I was a student, in the immediate run-up to the Iraq war, our Artificial Intelligence and Artificial Neural Networks professor announced he wouldn't be available for a couple of weeks due to meetings with the government. A friend of mine quipped that they'd developed an expert system to decide whether or not to invade. In the next lecture after his return, he made a point about expert systems being used in the decision process for invading Iraq. I do wonder if he heard my spit take.

Rocky Martiano
1 year ago

Social credit scores coming to a postcode near you? Seriously though, the UK government can’t even produce a database of NHS medical records. How on earth would they manage a project like this? It won’t stop them trying though. Management consultants are already rubbing their hands at the prospect of the coming bonanza.

Steve Murray
1 year ago
Reply to  Rocky Martiano

An excellent observation about the pitiful use of IT within the NHS. I had dealings with IT people drafted into our health service over the last two decades of my NHS career (up to 2016), and let's just say the type of people recruited were pretty third-rate. Why? Quite simply because anyone with any real IT talent could earn far more money in the private sector.
On the national level, I attended conferences in the early 2000s about the introduction of a UK database for medical records. I expect such conferences are still being attended two decades later, with the same blather and the same costs of attending.
To try to bridge the talent gap, the NHS employs IT consultants at eye-watering rates. They come in, do their thing (with the tech available at the time) and disappear, leaving the system they've helped introduce to become a "legacy" system within the space of a few short years, since no-one remaining in the organisation understands it, nor can they change it. More efficient tech overtakes the legacy system and none of the disparate systems "talk" to each other. Repeat ad infinitum.
If this were the template for AI at the state level, it'd be a huge waste of time and money. However… I think what's being suggested is something of a different order. In the NHS, the systems rely on overstretched staff inputting data (e.g. drug regime, changes to regime, drug administered, if not why not, etc.), which simply can't happen automatically. Errors creep in all the time, which stymies the system. In an environment where data is routinely collated via an automated process (as with smartphones), there's a different type of potential. There's no reason in theory why the UK couldn't replicate what's happening in China, except that state control and citizen consent are of a different order. If this were to happen by stealth, i.e. without citizen consent (and it may already be happening), then we're in a completely new ballpark. This article is very welcome as a warning of the double-edged sword that's hanging over us all. We're all Damocles now.

Last edited 1 year ago by Steve Murray
Rocky Martiano
1 year ago
Reply to  Steve Murray

You are correct; it is already happening by stealth: e.g. a centralised ID database for access to all government services, the BoE preparing to introduce a CBDC, the handover of power to the WHO to control the UK population in the event of a new pandemic (see UnHerd's excellent piece today on this subject). None of this, to my knowledge, has even been debated in Parliament, let alone put to the British people.
Back on the subject of the NHS, my brother-in-law had a sinecure for many years working for one of the major consultancies, going round the country managing IT projects for NHS trusts. Exactly as you described: finish the project, move on, leave the poorly equipped trusts to deal with the fallout, and the system falls into oblivion within a few years.

Michael Coleman
1 year ago

Not all data has the same value. China is certainly monitoring and recording more conversations than any other entity (except maybe the NSA?), and this provides better training material for AI based on LLMs like ChatGPT. Similarly for the number of images of people, and thus for people- and image-recognising AIs.
But as numerous others more knowledgeable than I have demonstrated, the current deep learning models have little true understanding and are far from general AI. See Gary Marcus's excellent Substack on AI:
https://garymarcus.substack.com/p/smells-a-little-bit-like-ai-winter
It is not clear that the current dominant models are the path towards AGI, nor that achieving AGI is just a matter of more processors and more data.

Nicky Samengo-Turner
1 year ago

I have no idea what "chatbot CBT" is.

John Solomon
1 year ago

Just pray you never need to find out.

Peter Kwasi-Modo
1 year ago

With an ordinary browser, you type in some keywords and it gives you a list of web pages that contain the keywords. With a chatbot such as ChatGPT, you can (a) ask a question and get an essay-style answer, or (b) engage in a dialogue with the chatbot. The essay-style responses are produced remarkably quickly and are reasonably well structured, but for me the big problem is that the chatbot only occasionally admits that it is out of its depth. It can produce nonsensical answers, sometimes because it has not been trained on the appropriate data.
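[Editor's note: the contrast drawn above can be sketched in code. A conventional search engine is, at heart, an inverted index that retrieves existing pages matching the keywords, with no generation involved; this toy version (invented page names, nothing like a production engine) shows the difference in kind from a chatbot that composes an answer:]

```python
def build_index(pages: dict) -> dict:
    """Map each keyword to the set of pages containing it (an inverted index)."""
    index = {}
    for url, text in pages.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(url)
    return index

def search(index: dict, query: str) -> set:
    """Return pages containing every query keyword: pure retrieval, no generation."""
    results = None
    for word in query.lower().split():
        hits = index.get(word, set())
        results = hits if results is None else results & hits
    return results or set()

# Hypothetical pages for illustration:
pages = {
    "a.html": "china trains large language models",
    "b.html": "language models generate text",
}
index = build_index(pages)
print(search(index, "language models"))  # both pages match
```

A search engine can only hand back what was already written, so it never "makes things up"; a chatbot synthesises new text, which is exactly why it can answer fluently while being out of its depth.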

Stephen Quilley
1 year ago

No
