

November 19, 2018

The ‘Internet of Things’ (IoT) is not a term I have much love for. Artificial Intelligence, Virtual Reality, even Augmented Eternity, I can put up with, but the Internet of Things? Must we? Surely there has to be a better way of referring to the fact that more and more of our machines – from central heating systems to electronic door locks – are getting hooked up to the internet? Even something blandly technical like ‘pervasive networks’ would be less annoying than Internet of Things.

Except now there’s something even worse: the ‘Internet of Bodies’, in which instead of machines being networked, it’s living organisms – up to and including human beings.

What would make this possible are devices that can be attached to or implanted within a body. Depending on its sophistication, the device would allow the individual concerned to be identified, monitored or in some way controlled via the internet or a private network.

A basic example, already in widespread use, is the microchipping of pets. Then there is the growing range of medical devices, from implantable pacemakers to robotic prosthetic limbs and artificial organs, that can also be networked.

In a piece for the Wall Street Journal, Andrea M Matwyshyn explores the implications of this technology:

“Who controls these ‘IoB’ devices in our bodies? Who can use the body-derived data? Who is responsible for ensuring that the devices work as intended?”

If having a tech company in ultimate charge of your smartphone, and the things you use it for, concerns you, imagine if it literally controlled your heart:

“Like most of the tech industry, existing IoB companies rely on end-user license agreements and privacy policies to retain rights in software and to create rights to monitor, aggregate and share users’ body data… some end-user license agreements have allowed companies to deactivate, or ‘brick,’ a device unless a consumer agrees to changes in privacy or information-sharing provisions.”

Of course, we’re talking about major medical interventions here that most of us, for most of our lives, won’t need (we might not be able to afford them either, but that’s another issue).

And yet the Internet of Bodies could be coming for us anyway. In a story for the Guardian, Julia Kollewe reports on the microchipping of humans in the workplace:

“UK firm BioTeq, which offers the implants to businesses and individuals, has already fitted 150 implants in the UK.

“The tiny chips, implanted in the flesh between the thumb and forefinger, are similar to those for pets. They enable people to open their front door, access their office or start their car with a wave of their hand, and can also store medical data.”

Kollewe adds that another such company, Biohax, “is in discussions with several British legal and financial firms about fitting their employees with microchips”.

British trade unions (and the CBI) have raised concerns:

“The TUC is worried that staff could be coerced into being microchipped. Its general secretary Frances O’Grady said: ‘We know workers are already concerned that some employers are using tech to control and micromanage, whittling away their staff’s right to privacy.'”

Human microchipping does have the potential to streamline security procedures – saving employees the hassle of wearing passes and remembering passwords. So, if a workplace-based scheme were voluntary, what would be the problem?

I can remember when having a mobile phone (and giving your employer your number) was also voluntary. But by the first years of this century it had become the norm. In the absence of a contractual specification to the contrary, one can still, in theory, refuse to have a mobile phone or to share one’s number with one’s employer… but good luck putting that one to the test. Once everyone else is doing it, it becomes really difficult to say no.

Still, it’s only a microchip – just a means of proving our identity in the workplace, something that is already required of us. But there’s more to it than identification. If more advanced versions of such chips allowed a constant connection to a network, then that would allow constant monitoring: of location, of movement and, before long, of other things like heart rate, body temperature and blood chemistry.

Back in May, James Bloodworth wrote an eye-opening piece for UnHerd on the practice and doctrine of Taylorism – i.e. the ‘scientific’ micromanagement of a workforce. He mentioned that in some warehouses and factories, personal electronic devices are already being used to bring Taylorism into the 21st century. Creepy, but at least these gizmos are worn or carried externally. Workers have the physical option (though possibly not the economic opportunity) to cast off their electronic chains. At the very least, they can leave them behind at the end of each shift.

An implant, though, is with you at all times and wherever you go. And what will such things become in future? Beyond identification, beyond monitoring, there is the prospect of direct control. Imagine, for instance, a device programmed to provide a chemical (or electric) jolt of wakefulness should an employee drop off in the workplace or oversleep at home.

Taylorism is named after its originator, Frederick W Taylor, who said this in 1911: “In the past the man has been first; in the future the system must be first.”

In 2018, the prospect is one of cyber-Taylorism, in which the distinction between man and system is erased.


Peter Franklin is Associate Editor of UnHerd. He was previously a policy advisor and speechwriter on environmental and social issues.
